May 13 12:59:43.843787 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 11:28:50 -00 2025
May 13 12:59:43.843808 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:59:43.843819 kernel: BIOS-provided physical RAM map:
May 13 12:59:43.843825 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 12:59:43.843832 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 12:59:43.843846 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 12:59:43.843853 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 12:59:43.843860 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 12:59:43.843869 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 13 12:59:43.843875 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 13 12:59:43.843882 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 13 12:59:43.843888 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 13 12:59:43.843895 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 13 12:59:43.843901 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 13 12:59:43.843911 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 13 12:59:43.843919 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 12:59:43.843926 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 13 12:59:43.843932 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 13 12:59:43.843939 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 13 12:59:43.843946 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 13 12:59:43.843953 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 13 12:59:43.843960 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 12:59:43.843967 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 12:59:43.843974 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 12:59:43.843981 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 13 12:59:43.843992 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 12:59:43.843999 kernel: NX (Execute Disable) protection: active
May 13 12:59:43.844008 kernel: APIC: Static calls initialized
May 13 12:59:43.844015 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 13 12:59:43.844022 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 13 12:59:43.844029 kernel: extended physical RAM map:
May 13 12:59:43.844036 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 12:59:43.844043 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 12:59:43.844050 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 12:59:43.844057 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 12:59:43.844064 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 12:59:43.844073 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 13 12:59:43.844080 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 13 12:59:43.844087 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 13 12:59:43.844094 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 13 12:59:43.844104 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 13 12:59:43.844111 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 13 12:59:43.844121 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 13 12:59:43.844128 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 13 12:59:43.844135 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 13 12:59:43.844143 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 13 12:59:43.844150 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 13 12:59:43.844157 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 12:59:43.844164 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 13 12:59:43.844171 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 13 12:59:43.844179 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 13 12:59:43.844188 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 13 12:59:43.844195 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 13 12:59:43.844203 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 12:59:43.844210 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 12:59:43.844217 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 12:59:43.844224 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 13 12:59:43.844231 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 12:59:43.844238 kernel: efi: EFI v2.7 by EDK II
May 13 12:59:43.844246 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 13 12:59:43.844253 kernel: random: crng init done
May 13 12:59:43.844260 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 13 12:59:43.844268 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 13 12:59:43.844277 kernel: secureboot: Secure boot disabled
May 13 12:59:43.844284 kernel: SMBIOS 2.8 present.
May 13 12:59:43.844292 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 13 12:59:43.844299 kernel: DMI: Memory slots populated: 1/1
May 13 12:59:43.844306 kernel: Hypervisor detected: KVM
May 13 12:59:43.844313 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 12:59:43.844320 kernel: kvm-clock: using sched offset of 3504350360 cycles
May 13 12:59:43.844343 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 12:59:43.844351 kernel: tsc: Detected 2794.748 MHz processor
May 13 12:59:43.844359 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 12:59:43.844366 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 12:59:43.844376 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 13 12:59:43.844384 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 12:59:43.844391 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 12:59:43.844399 kernel: Using GB pages for direct mapping
May 13 12:59:43.844406 kernel: ACPI: Early table checksum verification disabled
May 13 12:59:43.844414 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 13 12:59:43.844421 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 12:59:43.844429 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:59:43.844437 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:59:43.844446 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 13 12:59:43.844454 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:59:43.844461 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:59:43.844468 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:59:43.844476 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:59:43.844483 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 12:59:43.844490 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 13 12:59:43.844498 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 13 12:59:43.844507 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 13 12:59:43.844515 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 13 12:59:43.844522 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 13 12:59:43.844529 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 13 12:59:43.844537 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 13 12:59:43.844544 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 13 12:59:43.844551 kernel: No NUMA configuration found
May 13 12:59:43.844559 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 13 12:59:43.844566 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 13 12:59:43.844574 kernel: Zone ranges:
May 13 12:59:43.844583 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 12:59:43.844591 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 13 12:59:43.844598 kernel: Normal empty
May 13 12:59:43.844605 kernel: Device empty
May 13 12:59:43.844612 kernel: Movable zone start for each node
May 13 12:59:43.844620 kernel: Early memory node ranges
May 13 12:59:43.844627 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 12:59:43.844634 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 13 12:59:43.844642 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 13 12:59:43.844651 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 13 12:59:43.844659 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 13 12:59:43.844666 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 13 12:59:43.844673 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 13 12:59:43.844681 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 13 12:59:43.844688 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 13 12:59:43.844695 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 12:59:43.844703 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 12:59:43.844720 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 13 12:59:43.844728 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 12:59:43.844736 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 13 12:59:43.844743 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 13 12:59:43.844753 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 13 12:59:43.844761 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 13 12:59:43.844768 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 13 12:59:43.844776 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 12:59:43.844784 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 12:59:43.844793 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 12:59:43.844801 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 12:59:43.844809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 12:59:43.844817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 12:59:43.844824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 12:59:43.844832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 12:59:43.844846 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 12:59:43.844854 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 12:59:43.844862 kernel: TSC deadline timer available
May 13 12:59:43.844872 kernel: CPU topo: Max. logical packages: 1
May 13 12:59:43.844879 kernel: CPU topo: Max. logical dies: 1
May 13 12:59:43.844887 kernel: CPU topo: Max. dies per package: 1
May 13 12:59:43.844895 kernel: CPU topo: Max. threads per core: 1
May 13 12:59:43.844902 kernel: CPU topo: Num. cores per package: 4
May 13 12:59:43.844910 kernel: CPU topo: Num. threads per package: 4
May 13 12:59:43.844917 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 13 12:59:43.844925 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 12:59:43.844933 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 12:59:43.844941 kernel: kvm-guest: setup PV sched yield
May 13 12:59:43.844950 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 13 12:59:43.844958 kernel: Booting paravirtualized kernel on KVM
May 13 12:59:43.844966 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 12:59:43.844974 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 12:59:43.844982 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 13 12:59:43.844989 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 13 12:59:43.844997 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 12:59:43.845005 kernel: kvm-guest: PV spinlocks enabled
May 13 12:59:43.845012 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 12:59:43.845023 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:59:43.845031 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:59:43.845039 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:59:43.845047 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:59:43.845055 kernel: Fallback order for Node 0: 0
May 13 12:59:43.845062 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 13 12:59:43.845070 kernel: Policy zone: DMA32
May 13 12:59:43.845078 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:59:43.845087 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 12:59:43.845095 kernel: ftrace: allocating 40071 entries in 157 pages
May 13 12:59:43.845103 kernel: ftrace: allocated 157 pages with 5 groups
May 13 12:59:43.845110 kernel: Dynamic Preempt: voluntary
May 13 12:59:43.845118 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:59:43.845126 kernel: rcu: RCU event tracing is enabled.
May 13 12:59:43.845134 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 12:59:43.845142 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:59:43.845161 kernel: Rude variant of Tasks RCU enabled.
May 13 12:59:43.845171 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:59:43.845179 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:59:43.845193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 12:59:43.845209 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:59:43.845218 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:59:43.845226 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:59:43.845234 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 12:59:43.845242 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:59:43.845249 kernel: Console: colour dummy device 80x25
May 13 12:59:43.845260 kernel: printk: legacy console [ttyS0] enabled
May 13 12:59:43.845275 kernel: ACPI: Core revision 20240827
May 13 12:59:43.845290 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 12:59:43.845298 kernel: APIC: Switch to symmetric I/O mode setup
May 13 12:59:43.845306 kernel: x2apic enabled
May 13 12:59:43.845314 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 12:59:43.845322 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 12:59:43.845342 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 12:59:43.845349 kernel: kvm-guest: setup PV IPIs
May 13 12:59:43.845357 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 12:59:43.845368 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 12:59:43.845376 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 12:59:43.845384 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 12:59:43.845391 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 12:59:43.845399 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 12:59:43.845407 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 12:59:43.845415 kernel: Spectre V2 : Mitigation: Retpolines
May 13 12:59:43.845422 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 12:59:43.845432 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 12:59:43.845440 kernel: RETBleed: Mitigation: untrained return thunk
May 13 12:59:43.845448 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 12:59:43.845456 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 12:59:43.845464 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 12:59:43.845472 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 12:59:43.845480 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 12:59:43.845488 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 12:59:43.845496 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 12:59:43.845505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 12:59:43.845513 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 12:59:43.845521 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 12:59:43.845529 kernel: Freeing SMP alternatives memory: 32K
May 13 12:59:43.845536 kernel: pid_max: default: 32768 minimum: 301
May 13 12:59:43.845544 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:59:43.845551 kernel: landlock: Up and running.
May 13 12:59:43.845559 kernel: SELinux: Initializing.
May 13 12:59:43.845567 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:59:43.845577 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:59:43.845585 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 12:59:43.845592 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 12:59:43.845600 kernel: ... version: 0
May 13 12:59:43.845608 kernel: ... bit width: 48
May 13 12:59:43.845615 kernel: ... generic registers: 6
May 13 12:59:43.845623 kernel: ... value mask: 0000ffffffffffff
May 13 12:59:43.845631 kernel: ... max period: 00007fffffffffff
May 13 12:59:43.845638 kernel: ... fixed-purpose events: 0
May 13 12:59:43.845648 kernel: ... event mask: 000000000000003f
May 13 12:59:43.845655 kernel: signal: max sigframe size: 1776
May 13 12:59:43.845663 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:59:43.845671 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:59:43.845679 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:59:43.845686 kernel: smp: Bringing up secondary CPUs ...
May 13 12:59:43.845694 kernel: smpboot: x86: Booting SMP configuration:
May 13 12:59:43.845702 kernel: .... node #0, CPUs: #1 #2 #3
May 13 12:59:43.845709 kernel: smp: Brought up 1 node, 4 CPUs
May 13 12:59:43.845719 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 13 12:59:43.845727 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 137196K reserved, 0K cma-reserved)
May 13 12:59:43.845735 kernel: devtmpfs: initialized
May 13 12:59:43.845742 kernel: x86/mm: Memory block size: 128MB
May 13 12:59:43.845750 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 13 12:59:43.845758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 13 12:59:43.845766 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 13 12:59:43.845774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 13 12:59:43.845782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 13 12:59:43.845791 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 13 12:59:43.845799 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:59:43.845807 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 12:59:43.845814 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:59:43.845822 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:59:43.845830 kernel: audit: initializing netlink subsys (disabled)
May 13 12:59:43.845844 kernel: audit: type=2000 audit(1747141182.189:1): state=initialized audit_enabled=0 res=1
May 13 12:59:43.845852 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:59:43.845861 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 12:59:43.845869 kernel: cpuidle: using governor menu
May 13 12:59:43.845877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:59:43.845885 kernel: dca service started, version 1.12.1
May 13 12:59:43.845893 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 13 12:59:43.845901 kernel: PCI: Using configuration type 1 for base access
May 13 12:59:43.845908 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 12:59:43.845916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 12:59:43.845924 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 12:59:43.845934 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:59:43.845941 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:59:43.845949 kernel: ACPI: Added _OSI(Module Device)
May 13 12:59:43.845957 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:59:43.845964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:59:43.845972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:59:43.845980 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:59:43.845989 kernel: ACPI: Interpreter enabled
May 13 12:59:43.845998 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 12:59:43.846009 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 12:59:43.846018 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 12:59:43.846025 kernel: PCI: Using E820 reservations for host bridge windows
May 13 12:59:43.846033 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 12:59:43.846041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:59:43.846219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:59:43.846355 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 12:59:43.846473 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 12:59:43.846487 kernel: PCI host bridge to bus 0000:00
May 13 12:59:43.846647 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 12:59:43.846790 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 12:59:43.846981 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 12:59:43.847109 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 13 12:59:43.847216 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 13 12:59:43.847323 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 13 12:59:43.847462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:59:43.847595 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 13 12:59:43.847722 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 13 12:59:43.847849 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 13 12:59:43.847967 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 13 12:59:43.848087 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 13 12:59:43.848205 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 12:59:43.848349 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:59:43.848471 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 13 12:59:43.848588 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 13 12:59:43.848733 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 13 12:59:43.848901 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 13 12:59:43.849020 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 13 12:59:43.849141 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 13 12:59:43.849257 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 13 12:59:43.849416 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 13 12:59:43.849538 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 13 12:59:43.849654 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 13 12:59:43.849769 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 13 12:59:43.849904 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 13 12:59:43.850054 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 13 12:59:43.850172 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 12:59:43.850302 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 13 12:59:43.850439 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 13 12:59:43.850555 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 13 12:59:43.850678 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 13 12:59:43.850800 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 13 12:59:43.850810 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 12:59:43.850818 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 12:59:43.850826 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 12:59:43.850842 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 12:59:43.850850 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 12:59:43.850857 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 12:59:43.850865 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 12:59:43.850873 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 12:59:43.850884 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 12:59:43.850891 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 12:59:43.850899 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 12:59:43.850907 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 12:59:43.850914 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 12:59:43.850922 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 12:59:43.850930 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 12:59:43.850938 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 12:59:43.850945 kernel: iommu: Default domain type: Translated
May 13 12:59:43.850955 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 12:59:43.850962 kernel: efivars: Registered efivars operations
May 13 12:59:43.850970 kernel: PCI: Using ACPI for IRQ routing
May 13 12:59:43.850978 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 12:59:43.850985 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 13 12:59:43.850993 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 13 12:59:43.851000 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 13 12:59:43.851008 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 13 12:59:43.851015 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 13 12:59:43.851025 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 13 12:59:43.851032 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 13 12:59:43.851040 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 13 12:59:43.851158 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 12:59:43.851273 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 12:59:43.851417 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 12:59:43.851429 kernel: vgaarb: loaded
May 13 12:59:43.851436 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 12:59:43.851447 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 12:59:43.851455 kernel: clocksource: Switched to clocksource kvm-clock
May 13 12:59:43.851462 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:59:43.851470 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:59:43.851478 kernel: pnp: PnP ACPI init
May 13 12:59:43.851604 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 13 12:59:43.851630 kernel: pnp: PnP ACPI: found 6 devices
May 13 12:59:43.851640 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 12:59:43.851650 kernel: NET: Registered PF_INET protocol family
May 13 12:59:43.851658 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:59:43.851666 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:59:43.851674 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:59:43.851682 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:59:43.851692 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:59:43.851700 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:59:43.851708 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:59:43.851718 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:59:43.851726 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:59:43.851734 kernel: NET: Registered PF_XDP protocol family
May 13 12:59:43.851869 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 13 12:59:43.851987 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 13 12:59:43.852099 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 12:59:43.852204 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 12:59:43.852309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 12:59:43.852447 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 13 12:59:43.852555 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 13 12:59:43.852660 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 13 12:59:43.852670 kernel: PCI: CLS 0 bytes, default 64
May 13 12:59:43.852678 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 13 12:59:43.852686 kernel: Initialise system trusted keyrings
May 13 12:59:43.852695 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:59:43.852703 kernel: Key type asymmetric registered
May 13 12:59:43.852714 kernel: Asymmetric key parser 'x509' registered
May 13 12:59:43.852722 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 12:59:43.852730 kernel: io scheduler mq-deadline registered
May 13 12:59:43.852738 kernel: io scheduler kyber registered
May 13 12:59:43.852746 kernel: io scheduler bfq registered
May 13 12:59:43.852754 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 12:59:43.852763 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 12:59:43.852773 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 12:59:43.852781 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 12:59:43.852789 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:59:43.852797 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 12:59:43.852805 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 12:59:43.852813 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 12:59:43.852821 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 12:59:43.852949 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 12:59:43.852963 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 12:59:43.853074 kernel: rtc_cmos 00:04: registered as rtc0
May 13 12:59:43.853210 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T12:59:43 UTC (1747141183)
May 13 12:59:43.853321 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 12:59:43.853360 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 12:59:43.853368 kernel: efifb: probing for efifb
May 13 12:59:43.853376 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 13 12:59:43.853384 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 13 12:59:43.853392 kernel: efifb: scrolling: redraw
May 13 12:59:43.853404 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 12:59:43.853412 kernel: Console: switching to colour frame buffer device 160x50
May 13 12:59:43.853420 kernel: fb0: EFI VGA frame buffer device
May 13 12:59:43.853428 kernel: pstore: Using crash dump compression: deflate
May 13 12:59:43.853436 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 12:59:43.853444 kernel: NET: Registered PF_INET6 protocol family
May 13 12:59:43.853452 kernel: Segment Routing with IPv6
May 13 12:59:43.853460 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:59:43.853468 kernel: NET: Registered PF_PACKET protocol family
May 13 12:59:43.853478 kernel: Key type dns_resolver registered
May 13 12:59:43.853485 kernel: IPI shorthand broadcast: enabled
May 13 12:59:43.853493 kernel: sched_clock: Marking stable (2807001813, 158542215)->(2982271148, -16727120)
May 13 12:59:43.853501 kernel: registered taskstats version 1
May 13 12:59:43.853509 kernel: Loading compiled-in X.509 certificates
May 13 12:59:43.853521 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d81efc2839896c91a2830d4cfad7b0572af8b26a'
May 13 12:59:43.853529 kernel: Demotion targets for Node 0: null
May 13 12:59:43.853537 kernel: Key 
type .fscrypt registered May 13 12:59:43.853545 kernel: Key type fscrypt-provisioning registered May 13 12:59:43.853555 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 12:59:43.853563 kernel: ima: Allocated hash algorithm: sha1 May 13 12:59:43.853571 kernel: ima: No architecture policies found May 13 12:59:43.853579 kernel: clk: Disabling unused clocks May 13 12:59:43.853587 kernel: Warning: unable to open an initial console. May 13 12:59:43.853595 kernel: Freeing unused kernel image (initmem) memory: 54420K May 13 12:59:43.853603 kernel: Write protecting the kernel read-only data: 24576k May 13 12:59:43.853611 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K May 13 12:59:43.853621 kernel: Run /init as init process May 13 12:59:43.853629 kernel: with arguments: May 13 12:59:43.853637 kernel: /init May 13 12:59:43.853644 kernel: with environment: May 13 12:59:43.853652 kernel: HOME=/ May 13 12:59:43.853660 kernel: TERM=linux May 13 12:59:43.853668 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 12:59:43.853676 systemd[1]: Successfully made /usr/ read-only. May 13 12:59:43.853688 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 12:59:43.853699 systemd[1]: Detected virtualization kvm. May 13 12:59:43.853707 systemd[1]: Detected architecture x86-64. May 13 12:59:43.853715 systemd[1]: Running in initrd. May 13 12:59:43.853723 systemd[1]: No hostname configured, using default hostname. May 13 12:59:43.853732 systemd[1]: Hostname set to . May 13 12:59:43.853740 systemd[1]: Initializing machine ID from VM UUID. May 13 12:59:43.853749 systemd[1]: Queued start job for default target initrd.target. 
May 13 12:59:43.853757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:59:43.853768 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:59:43.853777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 12:59:43.853785 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 12:59:43.853794 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 12:59:43.853803 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 12:59:43.853813 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 12:59:43.853824 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 12:59:43.853841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:59:43.853850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 12:59:43.853858 systemd[1]: Reached target paths.target - Path Units. May 13 12:59:43.853867 systemd[1]: Reached target slices.target - Slice Units. May 13 12:59:43.853876 systemd[1]: Reached target swap.target - Swaps. May 13 12:59:43.853884 systemd[1]: Reached target timers.target - Timer Units. May 13 12:59:43.853893 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 12:59:43.853901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 12:59:43.853912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 12:59:43.853921 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 13 12:59:43.853929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 12:59:43.853938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 12:59:43.853947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:59:43.853955 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:59:43.853964 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 12:59:43.853972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 12:59:43.853983 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 12:59:43.853992 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 13 12:59:43.854001 systemd[1]: Starting systemd-fsck-usr.service... May 13 12:59:43.854009 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 12:59:43.854018 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 12:59:43.854028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:59:43.854037 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 12:59:43.854048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:59:43.854057 systemd[1]: Finished systemd-fsck-usr.service. May 13 12:59:43.854066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 12:59:43.854095 systemd-journald[219]: Collecting audit messages is disabled. May 13 12:59:43.854117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 12:59:43.854126 systemd-journald[219]: Journal started May 13 12:59:43.854145 systemd-journald[219]: Runtime Journal (/run/log/journal/434dd170d7424d8ab39f2d2e80bf6a7b) is 6M, max 48.5M, 42.4M free. May 13 12:59:43.847123 systemd-modules-load[220]: Inserted module 'overlay' May 13 12:59:43.857745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 12:59:43.860123 systemd[1]: Started systemd-journald.service - Journal Service. May 13 12:59:43.869550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:59:43.876539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 12:59:43.877510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 12:59:43.884504 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 12:59:43.886298 systemd-modules-load[220]: Inserted module 'br_netfilter' May 13 12:59:43.886423 kernel: Bridge firewalling registered May 13 12:59:43.888321 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 12:59:43.889616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:59:43.889944 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 13 12:59:43.894860 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 12:59:43.906568 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:59:43.908194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:59:43.912164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 13 12:59:43.925444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:59:43.927188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 12:59:43.943096 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:59:43.966157 systemd-resolved[262]: Positive Trust Anchors: May 13 12:59:43.966172 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:59:43.966204 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:59:43.968776 systemd-resolved[262]: Defaulting to hostname 'linux'. May 13 12:59:43.969829 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:59:43.975881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:59:44.054372 kernel: SCSI subsystem initialized May 13 12:59:44.063358 kernel: Loading iSCSI transport class v2.0-870. 
May 13 12:59:44.074362 kernel: iscsi: registered transport (tcp) May 13 12:59:44.094679 kernel: iscsi: registered transport (qla4xxx) May 13 12:59:44.094721 kernel: QLogic iSCSI HBA Driver May 13 12:59:44.115058 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 12:59:44.145422 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:59:44.149393 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 12:59:44.207203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 12:59:44.210206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 12:59:44.276383 kernel: raid6: avx2x4 gen() 27508 MB/s May 13 12:59:44.293367 kernel: raid6: avx2x2 gen() 26100 MB/s May 13 12:59:44.310724 kernel: raid6: avx2x1 gen() 21771 MB/s May 13 12:59:44.310776 kernel: raid6: using algorithm avx2x4 gen() 27508 MB/s May 13 12:59:44.328678 kernel: raid6: .... xor() 7056 MB/s, rmw enabled May 13 12:59:44.328734 kernel: raid6: using avx2x2 recovery algorithm May 13 12:59:44.350385 kernel: xor: automatically using best checksumming function avx May 13 12:59:44.521369 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 12:59:44.530622 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 12:59:44.533602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 12:59:44.566923 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 13 12:59:44.572896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:59:44.576838 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 12:59:44.602324 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation May 13 12:59:44.631959 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 13 12:59:44.635619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 12:59:44.712791 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:59:44.717456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 12:59:44.752362 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 13 12:59:44.758448 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 12:59:44.764566 kernel: cryptd: max_cpu_qlen set to 1000 May 13 12:59:44.778607 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 12:59:44.778654 kernel: GPT:9289727 != 19775487 May 13 12:59:44.778671 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 12:59:44.778685 kernel: GPT:9289727 != 19775487 May 13 12:59:44.778697 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 12:59:44.778710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 12:59:44.785363 kernel: libata version 3.00 loaded. May 13 12:59:44.787354 kernel: AES CTR mode by8 optimization enabled May 13 12:59:44.789459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:59:44.794633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:59:44.798937 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 12:59:44.797412 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:59:44.803164 kernel: ahci 0000:00:1f.2: version 3.0 May 13 12:59:44.803480 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 12:59:44.805319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 12:59:44.811115 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 13 12:59:44.811396 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 13 12:59:44.811620 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 12:59:44.829353 kernel: scsi host0: ahci May 13 12:59:44.837696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:59:44.845702 kernel: scsi host1: ahci May 13 12:59:44.846034 kernel: scsi host2: ahci May 13 12:59:44.846236 kernel: scsi host3: ahci May 13 12:59:44.846471 kernel: scsi host4: ahci May 13 12:59:44.847378 kernel: scsi host5: ahci May 13 12:59:44.847606 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 13 12:59:44.849409 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 13 12:59:44.849434 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 13 12:59:44.851316 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 13 12:59:44.851400 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 13 12:59:44.852257 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 13 12:59:44.854882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 12:59:44.879134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 12:59:44.894297 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 12:59:44.895770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 12:59:44.906903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 13 12:59:44.908752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 12:59:44.909559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:59:44.909611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:59:44.915581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:59:44.929238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:59:44.930968 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 12:59:44.939003 disk-uuid[637]: Primary Header is updated. May 13 12:59:44.939003 disk-uuid[637]: Secondary Entries is updated. May 13 12:59:44.939003 disk-uuid[637]: Secondary Header is updated. May 13 12:59:44.941356 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 12:59:44.956546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 12:59:45.164550 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 12:59:45.164625 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 12:59:45.164636 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 12:59:45.166367 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 12:59:45.166393 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 12:59:45.167355 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 12:59:45.168367 kernel: ata3.00: applying bridge limits May 13 12:59:45.169367 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 12:59:45.169389 kernel: ata3.00: configured for UDMA/100 May 13 12:59:45.170365 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 12:59:45.230361 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 12:59:45.230586 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 12:59:45.256358 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 12:59:45.617315 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 12:59:45.617885 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 12:59:45.622128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:59:45.622221 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 12:59:45.626844 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 12:59:45.651659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 12:59:45.964398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 12:59:45.964466 disk-uuid[640]: The operation has completed successfully. May 13 12:59:45.994559 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 12:59:45.994680 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
May 13 12:59:46.024962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 12:59:46.048186 sh[672]: Success May 13 12:59:46.065365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 12:59:46.065397 kernel: device-mapper: uevent: version 1.0.3 May 13 12:59:46.067363 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 13 12:59:46.075400 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 13 12:59:46.104507 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 12:59:46.107266 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 12:59:46.121167 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 12:59:46.127585 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 13 12:59:46.127617 kernel: BTRFS: device fsid 3042589c-b63f-42f0-9a6f-a4369b1889f9 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (684) May 13 12:59:46.128847 kernel: BTRFS info (device dm-0): first mount of filesystem 3042589c-b63f-42f0-9a6f-a4369b1889f9 May 13 12:59:46.129723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 12:59:46.129738 kernel: BTRFS info (device dm-0): using free-space-tree May 13 12:59:46.134374 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 12:59:46.134882 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 13 12:59:46.136099 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 12:59:46.136914 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 12:59:46.141358 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 13 12:59:46.172366 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (717) May 13 12:59:46.172421 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:59:46.174119 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:59:46.174144 kernel: BTRFS info (device vda6): using free-space-tree May 13 12:59:46.180364 kernel: BTRFS info (device vda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:59:46.181491 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 12:59:46.184437 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 12:59:46.264857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 12:59:46.266685 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 12:59:46.265512 ignition[766]: Ignition 2.21.0 May 13 12:59:46.265519 ignition[766]: Stage: fetch-offline May 13 12:59:46.265553 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 13 12:59:46.265563 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 12:59:46.265647 ignition[766]: parsed url from cmdline: "" May 13 12:59:46.265651 ignition[766]: no config URL provided May 13 12:59:46.265657 ignition[766]: reading system config file "/usr/lib/ignition/user.ign" May 13 12:59:46.265668 ignition[766]: no config at "/usr/lib/ignition/user.ign" May 13 12:59:46.265693 ignition[766]: op(1): [started] loading QEMU firmware config module May 13 12:59:46.265699 ignition[766]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 12:59:46.273431 ignition[766]: op(1): [finished] loading QEMU firmware config module May 13 12:59:46.316316 systemd-networkd[860]: lo: Link UP May 13 12:59:46.316325 systemd-networkd[860]: lo: Gained carrier May 13 12:59:46.317770 systemd-networkd[860]: Enumeration completed May 13 
12:59:46.317835 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:59:46.318620 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 12:59:46.322403 ignition[766]: parsing config with SHA512: 69f6c5ca9d72f11f93958433065622355629242199a5342c2ed06f4e2082835814c704ec132087304a547208e3d54e225863d0e5d320899afc2fdc03889edabe May 13 12:59:46.318624 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 12:59:46.319114 systemd-networkd[860]: eth0: Link UP May 13 12:59:46.326514 ignition[766]: fetch-offline: fetch-offline passed May 13 12:59:46.319118 systemd-networkd[860]: eth0: Gained carrier May 13 12:59:46.326574 ignition[766]: Ignition finished successfully May 13 12:59:46.319127 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 12:59:46.320686 systemd[1]: Reached target network.target - Network. May 13 12:59:46.326138 unknown[766]: fetched base config from "system" May 13 12:59:46.326145 unknown[766]: fetched user config from "qemu" May 13 12:59:46.328370 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 12:59:46.330146 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 12:59:46.331648 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 12:59:46.332502 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 13 12:59:46.367899 ignition[865]: Ignition 2.21.0 May 13 12:59:46.367912 ignition[865]: Stage: kargs May 13 12:59:46.368046 ignition[865]: no configs at "/usr/lib/ignition/base.d" May 13 12:59:46.368058 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 12:59:46.370759 ignition[865]: kargs: kargs passed May 13 12:59:46.370815 ignition[865]: Ignition finished successfully May 13 12:59:46.376661 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 12:59:46.378683 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 12:59:46.413541 ignition[874]: Ignition 2.21.0 May 13 12:59:46.413550 ignition[874]: Stage: disks May 13 12:59:46.413661 ignition[874]: no configs at "/usr/lib/ignition/base.d" May 13 12:59:46.413670 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 12:59:46.416120 ignition[874]: disks: disks passed May 13 12:59:46.416249 ignition[874]: Ignition finished successfully May 13 12:59:46.420217 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 12:59:46.422233 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 12:59:46.422387 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 12:59:46.426674 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 12:59:46.426774 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:59:46.428628 systemd[1]: Reached target basic.target - Basic System. May 13 12:59:46.432324 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 12:59:46.465339 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 13 12:59:46.472527 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 12:59:46.476832 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 13 12:59:46.585361 kernel: EXT4-fs (vda9): mounted filesystem ebf7ca75-051f-4154-b098-5ec24084105d r/w with ordered data mode. Quota mode: none. May 13 12:59:46.586180 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 12:59:46.586822 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 12:59:46.589156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 12:59:46.591876 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 12:59:46.593245 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 12:59:46.593289 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 12:59:46.593312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 12:59:46.603617 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 12:59:46.606272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 12:59:46.612785 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (892) May 13 12:59:46.612809 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:59:46.612821 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:59:46.612831 kernel: BTRFS info (device vda6): using free-space-tree May 13 12:59:46.616700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 12:59:46.643869 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory May 13 12:59:46.648674 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory May 13 12:59:46.653048 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory May 13 12:59:46.657409 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory May 13 12:59:46.737045 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 12:59:46.739223 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 12:59:46.739930 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 12:59:46.756371 kernel: BTRFS info (device vda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:59:46.767742 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 12:59:46.782502 ignition[1006]: INFO : Ignition 2.21.0 May 13 12:59:46.782502 ignition[1006]: INFO : Stage: mount May 13 12:59:46.784405 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:59:46.784405 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 12:59:46.784405 ignition[1006]: INFO : mount: mount passed May 13 12:59:46.784405 ignition[1006]: INFO : Ignition finished successfully May 13 12:59:46.786498 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 12:59:46.788068 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 12:59:47.127312 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 12:59:47.129538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 13 12:59:47.161354 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1018)
May 13 12:59:47.163437 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:59:47.163452 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:59:47.163463 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:59:47.167634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:59:47.195357 ignition[1035]: INFO : Ignition 2.21.0
May 13 12:59:47.195357 ignition[1035]: INFO : Stage: files
May 13 12:59:47.197253 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:59:47.197253 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:59:47.200631 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping
May 13 12:59:47.202338 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 12:59:47.202338 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 12:59:47.205283 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 12:59:47.205283 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 12:59:47.208202 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 12:59:47.205415 unknown[1035]: wrote ssh authorized keys file for user: core
May 13 12:59:47.211114 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 12:59:47.211114 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 12:59:47.266494 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 12:59:47.373896 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 12:59:47.375891 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 12:59:47.375891 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 12:59:47.685449 systemd-networkd[860]: eth0: Gained IPv6LL
May 13 12:59:47.853163 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 12:59:47.943255 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 12:59:47.943255 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:59:47.947270 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:59:48.032698 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:59:48.034930 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:59:48.034930 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 12:59:48.039588 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 12:59:48.039588 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 12:59:48.039588 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 12:59:48.334254 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 12:59:48.634802 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 12:59:48.634802 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 12:59:48.638939 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:59:48.641576 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:59:48.641576 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 12:59:48.641576 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 12:59:48.646503 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:59:48.646503 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:59:48.646503 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 12:59:48.646503 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 12:59:48.661947 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:59:48.665464 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:59:48.667261 ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 12:59:48.667261 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 12:59:48.670088 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 12:59:48.671612 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:59:48.673397 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:59:48.673397 ignition[1035]: INFO : files: files passed
May 13 12:59:48.673397 ignition[1035]: INFO : Ignition finished successfully
May 13 12:59:48.675239 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 12:59:48.678188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 12:59:48.680872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 12:59:48.701993 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 12:59:48.702127 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 12:59:48.705688 initrd-setup-root-after-ignition[1064]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 12:59:48.708644 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:59:48.708644 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:59:48.712396 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:59:48.715937 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:59:48.716227 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 12:59:48.720692 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 12:59:48.770106 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 12:59:48.770289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 12:59:48.773843 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 12:59:48.773930 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 12:59:48.775937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 12:59:48.777889 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 12:59:48.812223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:59:48.815212 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 12:59:48.836579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 12:59:48.836742 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:59:48.840098 systemd[1]: Stopped target timers.target - Timer Units.
May 13 12:59:48.841272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 12:59:48.841403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:59:48.842117 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 12:59:48.842626 systemd[1]: Stopped target basic.target - Basic System.
May 13 12:59:48.842963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 12:59:48.843293 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:59:48.843807 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 12:59:48.844131 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:59:48.844635 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 12:59:48.844968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:59:48.845310 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 12:59:48.845822 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 12:59:48.846144 systemd[1]: Stopped target swap.target - Swaps.
May 13 12:59:48.846615 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 12:59:48.846728 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:59:48.869184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 12:59:48.869392 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:59:48.869831 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 12:59:48.873435 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:59:48.874510 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 12:59:48.874678 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 12:59:48.877446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 12:59:48.877602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:59:48.879864 systemd[1]: Stopped target paths.target - Path Units.
May 13 12:59:48.881828 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 12:59:48.883687 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:59:48.884621 systemd[1]: Stopped target slices.target - Slice Units.
May 13 12:59:48.884960 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 12:59:48.885637 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 12:59:48.885739 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:59:48.891526 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 12:59:48.891607 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:59:48.892513 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 12:59:48.892630 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:59:48.894241 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 12:59:48.894373 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 12:59:48.900185 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 12:59:48.901158 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 12:59:48.901274 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:59:48.916948 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 12:59:48.917866 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 12:59:48.917982 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:59:48.920232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 12:59:48.920683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:59:48.928842 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 12:59:48.930108 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 12:59:48.937895 ignition[1090]: INFO : Ignition 2.21.0
May 13 12:59:48.937895 ignition[1090]: INFO : Stage: umount
May 13 12:59:48.939757 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:59:48.939757 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:59:48.939757 ignition[1090]: INFO : umount: umount passed
May 13 12:59:48.939757 ignition[1090]: INFO : Ignition finished successfully
May 13 12:59:48.941001 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 12:59:48.941116 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 12:59:48.943651 systemd[1]: Stopped target network.target - Network.
May 13 12:59:48.944299 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 12:59:48.944414 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 12:59:48.944850 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 12:59:48.944908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 12:59:48.945173 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 12:59:48.945224 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 12:59:48.945541 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 12:59:48.945585 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 12:59:48.945978 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 12:59:48.946243 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 12:59:48.950950 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 12:59:48.962582 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 12:59:48.962752 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 12:59:48.969068 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 12:59:48.969818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 12:59:48.969897 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:59:48.975561 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 12:59:48.978078 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 12:59:48.978232 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 12:59:48.982370 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 12:59:48.982549 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 12:59:48.985720 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 12:59:48.985772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:59:48.989004 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 12:59:48.989945 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 12:59:48.989998 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:59:48.990230 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 12:59:48.990271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 12:59:48.995152 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 12:59:48.995202 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 12:59:48.996299 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:59:48.999154 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 12:59:49.016550 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 12:59:49.016730 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 12:59:49.018029 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 12:59:49.018219 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:59:49.021260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 12:59:49.021363 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 12:59:49.021681 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 12:59:49.021736 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:59:49.022001 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 12:59:49.022056 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:59:49.022832 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 12:59:49.022884 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 12:59:49.023658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 12:59:49.023729 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:59:49.025151 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 12:59:49.033838 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 12:59:49.033893 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:59:49.039071 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 12:59:49.039130 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:59:49.043633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:59:49.043682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:59:49.056187 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 12:59:49.056310 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 12:59:49.204898 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 12:59:49.205040 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 12:59:49.206313 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 12:59:49.208812 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 12:59:49.208872 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 12:59:49.209749 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 12:59:49.231075 systemd[1]: Switching root.
May 13 12:59:49.265698 systemd-journald[219]: Journal stopped
May 13 12:59:50.582899 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
May 13 12:59:50.582963 kernel: SELinux: policy capability network_peer_controls=1
May 13 12:59:50.582977 kernel: SELinux: policy capability open_perms=1
May 13 12:59:50.582989 kernel: SELinux: policy capability extended_socket_class=1
May 13 12:59:50.583000 kernel: SELinux: policy capability always_check_network=0
May 13 12:59:50.583014 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 12:59:50.583030 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 12:59:50.583041 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 12:59:50.583052 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 12:59:50.583063 kernel: SELinux: policy capability userspace_initial_context=0
May 13 12:59:50.583079 kernel: audit: type=1403 audit(1747141189.806:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 12:59:50.583092 systemd[1]: Successfully loaded SELinux policy in 48.521ms.
May 13 12:59:50.583114 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.627ms.
May 13 12:59:50.583128 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:59:50.583142 systemd[1]: Detected virtualization kvm.
May 13 12:59:50.583153 systemd[1]: Detected architecture x86-64.
May 13 12:59:50.583165 systemd[1]: Detected first boot.
May 13 12:59:50.583177 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:59:50.583189 zram_generator::config[1136]: No configuration found.
May 13 12:59:50.583203 kernel: Guest personality initialized and is inactive
May 13 12:59:50.583214 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 12:59:50.583225 kernel: Initialized host personality
May 13 12:59:50.583236 kernel: NET: Registered PF_VSOCK protocol family
May 13 12:59:50.583249 systemd[1]: Populated /etc with preset unit settings.
May 13 12:59:50.583262 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 12:59:50.583274 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 12:59:50.583286 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 12:59:50.583299 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 12:59:50.583311 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 12:59:50.583323 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 12:59:50.583347 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 12:59:50.583362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 12:59:50.583374 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 12:59:50.583386 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 12:59:50.583398 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 12:59:50.583410 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 12:59:50.583424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:59:50.583437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:59:50.583449 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 12:59:50.583462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 12:59:50.583484 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 12:59:50.583496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:59:50.583508 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 12:59:50.583520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:59:50.583532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:59:50.583544 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 12:59:50.583556 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 12:59:50.583568 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 12:59:50.583582 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 12:59:50.583594 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:59:50.583606 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:59:50.583618 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:59:50.583630 systemd[1]: Reached target swap.target - Swaps.
May 13 12:59:50.583652 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 12:59:50.583664 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 12:59:50.583676 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 12:59:50.583689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:59:50.583706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:59:50.583720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:59:50.583732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 12:59:50.583744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 12:59:50.583756 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 12:59:50.583768 systemd[1]: Mounting media.mount - External Media Directory...
May 13 12:59:50.583780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:59:50.583793 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 12:59:50.583805 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 12:59:50.583818 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 12:59:50.583830 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 12:59:50.583843 systemd[1]: Reached target machines.target - Containers.
May 13 12:59:50.583855 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 12:59:50.583867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:59:50.583879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:59:50.583891 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 12:59:50.583904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:59:50.583918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:59:50.583930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:59:50.583942 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 12:59:50.583954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:59:50.583967 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 12:59:50.583979 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 12:59:50.583992 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 12:59:50.584003 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 12:59:50.584017 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 12:59:50.584030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:59:50.584042 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:59:50.584055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:59:50.584066 kernel: fuse: init (API version 7.41)
May 13 12:59:50.584077 kernel: loop: module loaded
May 13 12:59:50.584089 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:59:50.584101 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 12:59:50.584113 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 12:59:50.584127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:59:50.584139 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 12:59:50.584151 systemd[1]: Stopped verity-setup.service.
May 13 12:59:50.584164 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:59:50.584176 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 12:59:50.584190 kernel: ACPI: bus type drm_connector registered
May 13 12:59:50.584201 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 12:59:50.584213 systemd[1]: Mounted media.mount - External Media Directory.
May 13 12:59:50.584229 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 12:59:50.584241 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 12:59:50.584275 systemd-journald[1211]: Collecting audit messages is disabled.
May 13 12:59:50.584299 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 12:59:50.584311 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 12:59:50.584324 systemd-journald[1211]: Journal started
May 13 12:59:50.584368 systemd-journald[1211]: Runtime Journal (/run/log/journal/434dd170d7424d8ab39f2d2e80bf6a7b) is 6M, max 48.5M, 42.4M free.
May 13 12:59:50.326189 systemd[1]: Queued start job for default target multi-user.target.
May 13 12:59:50.347140 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 12:59:50.347581 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 12:59:50.587356 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:59:50.588810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:59:50.590387 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 12:59:50.590613 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 12:59:50.592079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:59:50.592309 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:59:50.593779 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:59:50.593994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:59:50.595359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:59:50.595577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:59:50.597123 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 12:59:50.597352 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 12:59:50.598810 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:59:50.599015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:59:50.600424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:59:50.601848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:59:50.603415 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 12:59:50.604962 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 12:59:50.618561 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:59:50.621149 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 12:59:50.623413 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 12:59:50.624668 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 12:59:50.624705 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:59:50.626814 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 12:59:50.637156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 12:59:50.638920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:59:50.640454 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 12:59:50.642942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 12:59:50.644340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:59:50.645708 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 12:59:50.646916 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:59:50.648053 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:59:50.651173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 12:59:50.654468 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 12:59:50.662187 systemd-journald[1211]: Time spent on flushing to /var/log/journal/434dd170d7424d8ab39f2d2e80bf6a7b is 21.307ms for 1067 entries.
May 13 12:59:50.662187 systemd-journald[1211]: System Journal (/var/log/journal/434dd170d7424d8ab39f2d2e80bf6a7b) is 8M, max 195.6M, 187.6M free.
May 13 12:59:50.697513 systemd-journald[1211]: Received client request to flush runtime journal.
May 13 12:59:50.697568 kernel: loop0: detected capacity change from 0 to 113872
May 13 12:59:50.697600 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 12:59:50.657558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:59:50.660233 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 12:59:50.661686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 12:59:50.664573 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 12:59:50.669970 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 12:59:50.674484 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 12:59:50.684516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:59:50.700622 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 12:59:50.714597 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 12:59:50.717899 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:59:50.719691 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 12:59:50.730415 kernel: loop1: detected capacity change from 0 to 146240
May 13 12:59:50.753991 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 13 12:59:50.754018 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
May 13 12:59:50.763767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:59:50.765598 kernel: loop2: detected capacity change from 0 to 205544
May 13 12:59:50.793366 kernel: loop3: detected capacity change from 0 to 113872
May 13 12:59:50.802374 kernel: loop4: detected capacity change from 0 to 146240
May 13 12:59:50.816381 kernel: loop5: detected capacity change from 0 to 205544
May 13 12:59:50.823414 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 12:59:50.824067 (sd-merge)[1277]: Merged extensions into '/usr'.
May 13 12:59:50.828289 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 12:59:50.828304 systemd[1]: Reloading...
May 13 12:59:50.886409 zram_generator::config[1302]: No configuration found.
May 13 12:59:50.989040 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 12:59:50.993514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:59:51.077064 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 12:59:51.077286 systemd[1]: Reloading finished in 248 ms.
May 13 12:59:51.109879 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 12:59:51.111599 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 12:59:51.130768 systemd[1]: Starting ensure-sysext.service...
May 13 12:59:51.132643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:59:51.146581 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)...
May 13 12:59:51.146599 systemd[1]: Reloading...
May 13 12:59:51.156597 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 12:59:51.156644 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 12:59:51.156926 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 12:59:51.157181 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 12:59:51.158075 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 12:59:51.158381 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
May 13 12:59:51.158459 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
May 13 12:59:51.162818 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:59:51.162832 systemd-tmpfiles[1341]: Skipping /boot
May 13 12:59:51.176933 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:59:51.177083 systemd-tmpfiles[1341]: Skipping /boot
May 13 12:59:51.200368 zram_generator::config[1368]: No configuration found.
May 13 12:59:51.302607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:59:51.385071 systemd[1]: Reloading finished in 238 ms.
May 13 12:59:51.407112 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 12:59:51.430316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:59:51.440286 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:59:51.442701 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 12:59:51.472431 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 12:59:51.476103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:59:51.478683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:59:51.481024 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 12:59:51.488791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:59:51.488956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:59:51.490856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:59:51.494141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:59:51.496981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:59:51.498243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:59:51.498414 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:59:51.501703 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 12:59:51.502996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:59:51.504428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:59:51.510505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:59:51.512704 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 12:59:51.514446 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:59:51.514649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:59:51.516239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:59:51.516474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:59:51.523445 systemd-udevd[1414]: Using default interface naming scheme 'v255'.
May 13 12:59:51.532834 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 12:59:51.539080 systemd[1]: Finished ensure-sysext.service.
May 13 12:59:51.539977 augenrules[1441]: No rules
May 13 12:59:51.541248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:59:51.541532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:59:51.543727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:59:51.548459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:59:51.554297 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:59:51.557124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:59:51.558554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:59:51.558610 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:59:51.563447 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 12:59:51.571094 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 12:59:51.572399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:59:51.572824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:59:51.575799 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:59:51.576063 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:59:51.577603 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 12:59:51.579280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:59:51.583765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:59:51.585669 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 12:59:51.587446 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:59:51.587691 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:59:51.590803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:59:51.591366 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:59:51.593203 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:59:51.593429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:59:51.617268 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 12:59:51.627941 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 12:59:51.633047 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:59:51.634431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:59:51.634494 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:59:51.634517 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 12:59:51.681036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:59:51.683571 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 12:59:51.701369 kernel: mousedev: PS/2 mouse device common for all mice
May 13 12:59:51.710176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 12:59:51.711637 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 13 12:59:51.718356 kernel: ACPI: button: Power Button [PWRF]
May 13 12:59:51.743705 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 13 12:59:51.743992 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 12:59:51.744151 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 12:59:51.801671 systemd-networkd[1495]: lo: Link UP
May 13 12:59:51.801683 systemd-networkd[1495]: lo: Gained carrier
May 13 12:59:51.804038 systemd-networkd[1495]: Enumeration completed
May 13 12:59:51.804138 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:59:51.804948 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:59:51.804952 systemd-networkd[1495]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:59:51.806063 systemd-networkd[1495]: eth0: Link UP
May 13 12:59:51.806202 systemd-networkd[1495]: eth0: Gained carrier
May 13 12:59:51.806215 systemd-networkd[1495]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:59:51.808583 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 12:59:51.813545 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 12:59:51.821455 systemd-networkd[1495]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:59:51.825523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:59:51.851377 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 12:59:51.855963 systemd-resolved[1411]: Positive Trust Anchors:
May 13 12:59:51.855981 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:59:51.856016 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:59:51.859731 systemd-resolved[1411]: Defaulting to hostname 'linux'.
May 13 12:59:51.861671 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:59:51.868136 systemd[1]: Reached target network.target - Network.
May 13 12:59:51.869131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:59:53.401867 systemd-resolved[1411]: Clock change detected. Flushing caches.
May 13 12:59:53.401947 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 12:59:53.402000 systemd-timesyncd[1460]: Initial clock synchronization to Tue 2025-05-13 12:59:53.401824 UTC.
May 13 12:59:53.402475 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 12:59:53.405057 systemd[1]: Reached target time-set.target - System Time Set.
May 13 12:59:53.418408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:59:53.418840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:59:53.424734 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 12:59:53.430168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:59:53.460199 kernel: kvm_amd: TSC scaling supported
May 13 12:59:53.460277 kernel: kvm_amd: Nested Virtualization enabled
May 13 12:59:53.460294 kernel: kvm_amd: Nested Paging enabled
May 13 12:59:53.460306 kernel: kvm_amd: LBR virtualization supported
May 13 12:59:53.461381 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 12:59:53.461400 kernel: kvm_amd: Virtual GIF supported
May 13 12:59:53.502006 kernel: EDAC MC: Ver: 3.0.0
May 13 12:59:53.516080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:59:53.518279 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:59:53.519774 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 12:59:53.521359 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 12:59:53.522798 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 13 12:59:53.524436 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 12:59:53.526059 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 12:59:53.527583 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 12:59:53.529101 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 12:59:53.529138 systemd[1]: Reached target paths.target - Path Units.
May 13 12:59:53.530288 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:59:53.532608 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 12:59:53.535840 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 12:59:53.539523 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 12:59:53.541172 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 12:59:53.542698 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 12:59:53.557543 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 12:59:53.559394 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 12:59:53.561377 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 12:59:53.563396 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:59:53.564574 systemd[1]: Reached target basic.target - Basic System.
May 13 12:59:53.565727 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 12:59:53.565754 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 12:59:53.566776 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 12:59:53.569000 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 12:59:53.570976 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 12:59:53.578709 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 12:59:53.580911 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 12:59:53.582091 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 12:59:53.583090 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 13 12:59:53.586113 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 12:59:53.586525 jq[1545]: false
May 13 12:59:53.588993 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 12:59:53.592709 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 12:59:53.596139 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 12:59:53.600123 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache
May 13 12:59:53.600556 oslogin_cache_refresh[1547]: Refreshing passwd entry cache
May 13 12:59:53.601247 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 12:59:53.603370 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 12:59:53.603934 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 12:59:53.604575 systemd[1]: Starting update-engine.service - Update Engine...
May 13 12:59:53.607605 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 12:59:53.611250 extend-filesystems[1546]: Found loop3
May 13 12:59:53.612286 extend-filesystems[1546]: Found loop4
May 13 12:59:53.612286 extend-filesystems[1546]: Found loop5
May 13 12:59:53.612286 extend-filesystems[1546]: Found sr0
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda1
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda2
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda3
May 13 12:59:53.612286 extend-filesystems[1546]: Found usr
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda4
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda6
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda7
May 13 12:59:53.612286 extend-filesystems[1546]: Found vda9
May 13 12:59:53.612286 extend-filesystems[1546]: Checking size of /dev/vda9
May 13 12:59:53.619582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 12:59:53.629987 extend-filesystems[1546]: Resized partition /dev/vda9
May 13 12:59:53.623599 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 12:59:53.623828 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 12:59:53.624145 systemd[1]: motdgen.service: Deactivated successfully.
May 13 12:59:53.624412 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 12:59:53.631336 jq[1560]: true
May 13 12:59:53.626978 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 12:59:53.627262 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 12:59:53.634626 extend-filesystems[1569]: resize2fs 1.47.2 (1-Jan-2025)
May 13 12:59:53.635896 update_engine[1559]: I20250513 12:59:53.635238 1559 main.cc:92] Flatcar Update Engine starting
May 13 12:59:53.647567 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting
May 13 12:59:53.647567 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 12:59:53.647196 oslogin_cache_refresh[1547]: Failure getting users, quitting
May 13 12:59:53.647227 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 12:59:53.648232 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 12:59:53.650036 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache
May 13 12:59:53.650030 oslogin_cache_refresh[1547]: Refreshing group entry cache
May 13 12:59:53.651072 jq[1571]: true
May 13 12:59:53.660574 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 12:59:53.661766 tar[1568]: linux-amd64/helm
May 13 12:59:53.700173 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 12:59:53.700240 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting
May 13 12:59:53.700240 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 12:59:53.688465 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 12:59:53.688295 dbus-daemon[1543]: [system] SELinux support is enabled
May 13 12:59:53.724762 update_engine[1559]: I20250513 12:59:53.695480 1559 update_check_scheduler.cc:74] Next update check in 5m54s
May 13 12:59:53.724800 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 12:59:53.724800 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 12:59:53.724800 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 12:59:53.691835 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 12:59:53.697070 oslogin_cache_refresh[1547]: Failure getting groups, quitting
May 13 12:59:53.733324 extend-filesystems[1546]: Resized filesystem in /dev/vda9
May 13 12:59:53.691859 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 12:59:53.697085 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 12:59:53.693469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 12:59:53.693481 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 12:59:53.697802 systemd[1]: Started update-engine.service - Update Engine.
May 13 12:59:53.701183 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 12:59:53.721194 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 13 12:59:53.721477 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 13 12:59:53.723034 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 12:59:53.723271 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 12:59:53.729248 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Power Button)
May 13 12:59:53.729267 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 12:59:53.729667 systemd-logind[1555]: New seat seat0.
May 13 12:59:53.731639 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 12:59:53.761195 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 12:59:53.764696 bash[1604]: Updated "/home/core/.ssh/authorized_keys"
May 13 12:59:53.766196 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 12:59:53.769425 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 12:59:53.859764 containerd[1580]: time="2025-05-13T12:59:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 12:59:53.862862 containerd[1580]: time="2025-05-13T12:59:53.862819367Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 13 12:59:53.874491 containerd[1580]: time="2025-05-13T12:59:53.874440034Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.202µs"
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874598161Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874621254Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874824986Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874838912Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874864079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874926547Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.874937427Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.875259822Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.875274199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.875284328Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.875291752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 12:59:53.875727 containerd[1580]: time="2025-05-13T12:59:53.875388092Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 12:59:53.876063 containerd[1580]: time="2025-05-13T12:59:53.875618965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 12:59:53.876063 containerd[1580]: time="2025-05-13T12:59:53.875646046Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 12:59:53.876063 containerd[1580]: time="2025-05-13T12:59:53.875656416Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 12:59:53.876063 containerd[1580]: time="2025-05-13T12:59:53.875695018Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 12:59:53.876063 containerd[1580]: time="2025-05-13T12:59:53.875908478Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 12:59:53.876063 containerd[1580]: time="2025-05-13T12:59:53.876001353Z" level=info msg="metadata content store policy set" policy=shared
May 13 12:59:53.882428 containerd[1580]: time="2025-05-13T12:59:53.882376711Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 12:59:53.882520 containerd[1580]: time="2025-05-13T12:59:53.882456090Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 12:59:53.882571 containerd[1580]: time="2025-05-13T12:59:53.882484543Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 12:59:53.882571 containerd[1580]: time="2025-05-13T12:59:53.882563541Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 12:59:53.882610 containerd[1580]: time="2025-05-13T12:59:53.882583369Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 12:59:53.882610 containerd[1580]: time="2025-05-13T12:59:53.882597024Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 12:59:53.882711 containerd[1580]: time="2025-05-13T12:59:53.882674009Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 12:59:53.882739 containerd[1580]: time="2025-05-13T12:59:53.882718231Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 12:59:53.882739 containerd[1580]: time="2025-05-13T12:59:53.882736345Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 12:59:53.882777 containerd[1580]: time="2025-05-13T12:59:53.882750472Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 12:59:53.882777 containerd[1580]: time="2025-05-13T12:59:53.882764218Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 12:59:53.882813 containerd[1580]: time="2025-05-13T12:59:53.882781099Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 12:59:53.882993 containerd[1580]: time="2025-05-13T12:59:53.882967920Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 12:59:53.883017 containerd[1580]: time="2025-05-13T12:59:53.882999329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 12:59:53.883046 containerd[1580]: time="2025-05-13T12:59:53.883019937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 12:59:53.883046 containerd[1580]: time="2025-05-13T12:59:53.883037160Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:59:53.883086 containerd[1580]: time="2025-05-13T12:59:53.883051937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:59:53.883086 containerd[1580]: time="2025-05-13T12:59:53.883067106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:59:53.883126 containerd[1580]: time="2025-05-13T12:59:53.883083406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:59:53.883126 containerd[1580]: time="2025-05-13T12:59:53.883098154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 12:59:53.883126 containerd[1580]: time="2025-05-13T12:59:53.883113773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:59:53.883188 containerd[1580]: time="2025-05-13T12:59:53.883129783Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:59:53.883188 containerd[1580]: time="2025-05-13T12:59:53.883145613Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:59:53.883263 containerd[1580]: time="2025-05-13T12:59:53.883235582Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:59:53.883294 containerd[1580]: time="2025-05-13T12:59:53.883265668Z" level=info msg="Start snapshots syncer" May 13 12:59:53.883314 containerd[1580]: time="2025-05-13T12:59:53.883295394Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:59:53.883639 containerd[1580]: time="2025-05-13T12:59:53.883578705Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:59:53.883747 containerd[1580]: time="2025-05-13T12:59:53.883651221Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:59:53.883769 containerd[1580]: time="2025-05-13T12:59:53.883758122Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:59:53.883904 containerd[1580]: time="2025-05-13T12:59:53.883871545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:59:53.883934 containerd[1580]: time="2025-05-13T12:59:53.883900419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:59:53.883934 containerd[1580]: time="2025-05-13T12:59:53.883920927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:59:53.883985 containerd[1580]: time="2025-05-13T12:59:53.883936416Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:59:53.883985 containerd[1580]: time="2025-05-13T12:59:53.883966292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:59:53.883985 containerd[1580]: time="2025-05-13T12:59:53.883980389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:59:53.884038 containerd[1580]: time="2025-05-13T12:59:53.883994706Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:59:53.884038 containerd[1580]: time="2025-05-13T12:59:53.884023059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:59:53.884038 containerd[1580]: time="2025-05-13T12:59:53.884034921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:59:53.884132 containerd[1580]: time="2025-05-13T12:59:53.884048056Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:59:53.884971 containerd[1580]: time="2025-05-13T12:59:53.884912577Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:59:53.885028 containerd[1580]: time="2025-05-13T12:59:53.884942834Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:59:53.885028 containerd[1580]: time="2025-05-13T12:59:53.885025319Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:59:53.885083 containerd[1580]: time="2025-05-13T12:59:53.885039926Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:59:53.885083 containerd[1580]: time="2025-05-13T12:59:53.885051248Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:59:53.885083 containerd[1580]: time="2025-05-13T12:59:53.885064242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:59:53.885083 containerd[1580]: time="2025-05-13T12:59:53.885077847Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:59:53.885192 containerd[1580]: time="2025-05-13T12:59:53.885097845Z" level=info msg="runtime interface created" May 13 12:59:53.885192 containerd[1580]: time="2025-05-13T12:59:53.885103465Z" level=info msg="created NRI interface" May 13 12:59:53.885192 containerd[1580]: time="2025-05-13T12:59:53.885111010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:59:53.885192 containerd[1580]: time="2025-05-13T12:59:53.885120718Z" level=info msg="Connect containerd service" May 13 12:59:53.885192 containerd[1580]: time="2025-05-13T12:59:53.885141146Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:59:53.886188 
containerd[1580]: time="2025-05-13T12:59:53.886165157Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:59:53.886289 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:59:53.912663 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:59:53.916464 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:59:53.940925 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:59:53.941223 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:59:53.944607 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:59:53.965862 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 12:59:53.969242 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:59:53.972710 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 12:59:53.974179 systemd[1]: Reached target getty.target - Login Prompts. May 13 12:59:53.986354 containerd[1580]: time="2025-05-13T12:59:53.986314839Z" level=info msg="Start subscribing containerd event" May 13 12:59:53.986856 containerd[1580]: time="2025-05-13T12:59:53.986799138Z" level=info msg="Start recovering state" May 13 12:59:53.986888 containerd[1580]: time="2025-05-13T12:59:53.986855032Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:59:53.986948 containerd[1580]: time="2025-05-13T12:59:53.986922268Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 12:59:53.988289 containerd[1580]: time="2025-05-13T12:59:53.988268373Z" level=info msg="Start event monitor" May 13 12:59:53.988336 containerd[1580]: time="2025-05-13T12:59:53.988291146Z" level=info msg="Start cni network conf syncer for default" May 13 12:59:53.988336 containerd[1580]: time="2025-05-13T12:59:53.988299652Z" level=info msg="Start streaming server" May 13 12:59:53.988336 containerd[1580]: time="2025-05-13T12:59:53.988313398Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:59:53.988336 containerd[1580]: time="2025-05-13T12:59:53.988322315Z" level=info msg="runtime interface starting up..." May 13 12:59:53.988336 containerd[1580]: time="2025-05-13T12:59:53.988329228Z" level=info msg="starting plugins..." May 13 12:59:53.988434 containerd[1580]: time="2025-05-13T12:59:53.988350006Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:59:53.988633 containerd[1580]: time="2025-05-13T12:59:53.988617138Z" level=info msg="containerd successfully booted in 0.129415s" May 13 12:59:53.988711 systemd[1]: Started containerd.service - containerd container runtime. May 13 12:59:54.091254 tar[1568]: linux-amd64/LICENSE May 13 12:59:54.091347 tar[1568]: linux-amd64/README.md May 13 12:59:54.120438 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:59:54.657141 systemd-networkd[1495]: eth0: Gained IPv6LL May 13 12:59:54.660449 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:59:54.662278 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:59:54.664856 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 12:59:54.667363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:59:54.688438 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 13 12:59:54.711609 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:59:54.711892 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 12:59:54.713873 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:59:54.716113 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 12:59:55.291898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:59:55.293551 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:59:55.294920 systemd[1]: Startup finished in 2.883s (kernel) + 6.163s (initrd) + 4.003s (userspace) = 13.050s. May 13 12:59:55.295823 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:59:55.685896 kubelet[1674]: E0513 12:59:55.685813 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:59:55.689342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:59:55.689532 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:59:55.689881 systemd[1]: kubelet.service: Consumed 873ms CPU time, 234.4M memory peak. May 13 12:59:58.806017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:59:58.807236 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:40476.service - OpenSSH per-connection server daemon (10.0.0.1:40476). 
May 13 12:59:58.879509 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 40476 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:59:58.881189 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:59:58.887329 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:59:58.888413 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 12:59:58.894275 systemd-logind[1555]: New session 1 of user core. May 13 12:59:58.918474 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:59:58.920838 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:59:58.940178 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:59:58.942504 systemd-logind[1555]: New session c1 of user core. May 13 12:59:59.086990 systemd[1692]: Queued start job for default target default.target. May 13 12:59:59.111156 systemd[1692]: Created slice app.slice - User Application Slice. May 13 12:59:59.111181 systemd[1692]: Reached target paths.target - Paths. May 13 12:59:59.111220 systemd[1692]: Reached target timers.target - Timers. May 13 12:59:59.112701 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:59:59.123127 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:59:59.123246 systemd[1692]: Reached target sockets.target - Sockets. May 13 12:59:59.123284 systemd[1692]: Reached target basic.target - Basic System. May 13 12:59:59.123326 systemd[1692]: Reached target default.target - Main User Target. May 13 12:59:59.123362 systemd[1692]: Startup finished in 174ms. May 13 12:59:59.123623 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:59:59.125135 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 13 12:59:59.193405 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:40490.service - OpenSSH per-connection server daemon (10.0.0.1:40490). May 13 12:59:59.240101 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 40490 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:59:59.241447 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:59:59.245726 systemd-logind[1555]: New session 2 of user core. May 13 12:59:59.256109 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 12:59:59.307826 sshd[1705]: Connection closed by 10.0.0.1 port 40490 May 13 12:59:59.308196 sshd-session[1703]: pam_unix(sshd:session): session closed for user core May 13 12:59:59.320442 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:40490.service: Deactivated successfully. May 13 12:59:59.322174 systemd[1]: session-2.scope: Deactivated successfully. May 13 12:59:59.322912 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit. May 13 12:59:59.325611 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:40502.service - OpenSSH per-connection server daemon (10.0.0.1:40502). May 13 12:59:59.326317 systemd-logind[1555]: Removed session 2. May 13 12:59:59.373830 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 40502 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:59:59.375021 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:59:59.378795 systemd-logind[1555]: New session 3 of user core. May 13 12:59:59.385096 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 12:59:59.433222 sshd[1713]: Connection closed by 10.0.0.1 port 40502 May 13 12:59:59.433556 sshd-session[1711]: pam_unix(sshd:session): session closed for user core May 13 12:59:59.448687 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:40502.service: Deactivated successfully. 
May 13 12:59:59.450459 systemd[1]: session-3.scope: Deactivated successfully. May 13 12:59:59.451252 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit. May 13 12:59:59.454029 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:40506.service - OpenSSH per-connection server daemon (10.0.0.1:40506). May 13 12:59:59.454585 systemd-logind[1555]: Removed session 3. May 13 12:59:59.500022 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 40506 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:59:59.501335 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:59:59.505310 systemd-logind[1555]: New session 4 of user core. May 13 12:59:59.515066 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 12:59:59.566640 sshd[1721]: Connection closed by 10.0.0.1 port 40506 May 13 12:59:59.566983 sshd-session[1719]: pam_unix(sshd:session): session closed for user core May 13 12:59:59.577228 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:40506.service: Deactivated successfully. May 13 12:59:59.578776 systemd[1]: session-4.scope: Deactivated successfully. May 13 12:59:59.579592 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. May 13 12:59:59.582404 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:40514.service - OpenSSH per-connection server daemon (10.0.0.1:40514). May 13 12:59:59.583115 systemd-logind[1555]: Removed session 4. May 13 12:59:59.625961 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 40514 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:59:59.627535 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:59:59.631579 systemd-logind[1555]: New session 5 of user core. May 13 12:59:59.642081 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 12:59:59.697797 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 12:59:59.698118 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:59:59.712808 sudo[1730]: pam_unix(sudo:session): session closed for user root May 13 12:59:59.714462 sshd[1729]: Connection closed by 10.0.0.1 port 40514 May 13 12:59:59.714825 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 13 12:59:59.736283 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:40514.service: Deactivated successfully. May 13 12:59:59.737763 systemd[1]: session-5.scope: Deactivated successfully. May 13 12:59:59.738444 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. May 13 12:59:59.740880 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:40518.service - OpenSSH per-connection server daemon (10.0.0.1:40518). May 13 12:59:59.741488 systemd-logind[1555]: Removed session 5. May 13 12:59:59.785299 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 40518 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 12:59:59.786613 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:59:59.790698 systemd-logind[1555]: New session 6 of user core. May 13 12:59:59.800069 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 12:59:59.852272 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 12:59:59.852568 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 13:00:00.039282 sudo[1741]: pam_unix(sudo:session): session closed for user root May 13 13:00:00.046219 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 13:00:00.046581 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 13:00:00.056232 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 13:00:00.103738 augenrules[1763]: No rules May 13 13:00:00.105531 systemd[1]: audit-rules.service: Deactivated successfully. May 13 13:00:00.105820 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 13:00:00.107208 sudo[1740]: pam_unix(sudo:session): session closed for user root May 13 13:00:00.108673 sshd[1739]: Connection closed by 10.0.0.1 port 40518 May 13 13:00:00.108932 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 13 13:00:00.128450 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:40518.service: Deactivated successfully. May 13 13:00:00.130721 systemd[1]: session-6.scope: Deactivated successfully. May 13 13:00:00.131512 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. May 13 13:00:00.135334 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:40534.service - OpenSSH per-connection server daemon (10.0.0.1:40534). May 13 13:00:00.136044 systemd-logind[1555]: Removed session 6. May 13 13:00:00.206973 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 40534 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 13:00:00.208646 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 13:00:00.213330 systemd-logind[1555]: New session 7 of user core. 
May 13 13:00:00.226099 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 13:00:00.278488 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 13:00:00.278785 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 13:00:00.574415 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 13:00:00.596300 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 13:00:00.811350 dockerd[1795]: time="2025-05-13T13:00:00.811285183Z" level=info msg="Starting up" May 13 13:00:00.813216 dockerd[1795]: time="2025-05-13T13:00:00.813181700Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 13:00:01.178086 dockerd[1795]: time="2025-05-13T13:00:01.177907121Z" level=info msg="Loading containers: start." May 13 13:00:01.188980 kernel: Initializing XFRM netlink socket May 13 13:00:01.425454 systemd-networkd[1495]: docker0: Link UP May 13 13:00:01.432380 dockerd[1795]: time="2025-05-13T13:00:01.432229477Z" level=info msg="Loading containers: done." May 13 13:00:01.445680 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2844721889-merged.mount: Deactivated successfully. 
May 13 13:00:01.447140 dockerd[1795]: time="2025-05-13T13:00:01.447084460Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 13:00:01.447277 dockerd[1795]: time="2025-05-13T13:00:01.447162687Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 13:00:01.447277 dockerd[1795]: time="2025-05-13T13:00:01.447266352Z" level=info msg="Initializing buildkit" May 13 13:00:01.477250 dockerd[1795]: time="2025-05-13T13:00:01.477212221Z" level=info msg="Completed buildkit initialization" May 13 13:00:01.483911 dockerd[1795]: time="2025-05-13T13:00:01.483856765Z" level=info msg="Daemon has completed initialization" May 13 13:00:01.484061 dockerd[1795]: time="2025-05-13T13:00:01.483970799Z" level=info msg="API listen on /run/docker.sock" May 13 13:00:01.484121 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 13:00:02.405864 containerd[1580]: time="2025-05-13T13:00:02.405811168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 13:00:03.027645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773927810.mount: Deactivated successfully. 
May 13 13:00:03.884520 containerd[1580]: time="2025-05-13T13:00:03.884465420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:03.885222 containerd[1580]: time="2025-05-13T13:00:03.885184669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 13 13:00:03.886466 containerd[1580]: time="2025-05-13T13:00:03.886417061Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:03.888787 containerd[1580]: time="2025-05-13T13:00:03.888736091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:03.889679 containerd[1580]: time="2025-05-13T13:00:03.889645236Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.483789655s"
May 13 13:00:03.889734 containerd[1580]: time="2025-05-13T13:00:03.889680843Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 13 13:00:03.891051 containerd[1580]: time="2025-05-13T13:00:03.891026878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 13:00:04.954134 containerd[1580]: time="2025-05-13T13:00:04.954074682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:04.954866 containerd[1580]: time="2025-05-13T13:00:04.954836981Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 13 13:00:04.956071 containerd[1580]: time="2025-05-13T13:00:04.956025711Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:04.958653 containerd[1580]: time="2025-05-13T13:00:04.958620458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:04.959469 containerd[1580]: time="2025-05-13T13:00:04.959410640Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.068358465s"
May 13 13:00:04.959469 containerd[1580]: time="2025-05-13T13:00:04.959454713Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 13 13:00:04.959978 containerd[1580]: time="2025-05-13T13:00:04.959888366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 13:00:05.861574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 13:00:05.867153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 13:00:06.679779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 13:00:06.708278 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 13:00:06.760415 containerd[1580]: time="2025-05-13T13:00:06.760339388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:06.761554 containerd[1580]: time="2025-05-13T13:00:06.761513270Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 13 13:00:06.762723 containerd[1580]: time="2025-05-13T13:00:06.762410914Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:06.766227 containerd[1580]: time="2025-05-13T13:00:06.766179833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:06.767486 containerd[1580]: time="2025-05-13T13:00:06.767444515Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.807520492s"
May 13 13:00:06.767486 containerd[1580]: time="2025-05-13T13:00:06.767482767Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 13 13:00:06.767913 kubelet[2076]: E0513 13:00:06.767869 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 13:00:06.768205 containerd[1580]: time="2025-05-13T13:00:06.767985189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 13:00:06.774395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 13:00:06.774623 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 13:00:06.775026 systemd[1]: kubelet.service: Consumed 213ms CPU time, 96.3M memory peak.
May 13 13:00:07.709618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193356090.mount: Deactivated successfully.
May 13 13:00:07.978770 containerd[1580]: time="2025-05-13T13:00:07.978622048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:07.979345 containerd[1580]: time="2025-05-13T13:00:07.979310399Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 13 13:00:07.980524 containerd[1580]: time="2025-05-13T13:00:07.980478440Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:07.982521 containerd[1580]: time="2025-05-13T13:00:07.982470286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:07.983124 containerd[1580]: time="2025-05-13T13:00:07.983074990Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.21506808s"
May 13 13:00:07.983343 containerd[1580]: time="2025-05-13T13:00:07.983271539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 13 13:00:07.985407 containerd[1580]: time="2025-05-13T13:00:07.985373592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 13:00:08.913921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691221802.mount: Deactivated successfully.
May 13 13:00:11.577193 containerd[1580]: time="2025-05-13T13:00:11.577143696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:11.577931 containerd[1580]: time="2025-05-13T13:00:11.577885968Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 13 13:00:11.579027 containerd[1580]: time="2025-05-13T13:00:11.578989077Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:11.581285 containerd[1580]: time="2025-05-13T13:00:11.581237705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:11.582117 containerd[1580]: time="2025-05-13T13:00:11.582080326Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.5966675s"
May 13 13:00:11.582158 containerd[1580]: time="2025-05-13T13:00:11.582116323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 13:00:11.582639 containerd[1580]: time="2025-05-13T13:00:11.582578811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 13:00:12.109739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129555918.mount: Deactivated successfully.
May 13 13:00:12.116328 containerd[1580]: time="2025-05-13T13:00:12.116283318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 13:00:12.117223 containerd[1580]: time="2025-05-13T13:00:12.117192734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 13 13:00:12.118304 containerd[1580]: time="2025-05-13T13:00:12.118275505Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 13:00:12.120297 containerd[1580]: time="2025-05-13T13:00:12.120265026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 13:00:12.120778 containerd[1580]: time="2025-05-13T13:00:12.120750597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 538.145948ms"
May 13 13:00:12.120778 containerd[1580]: time="2025-05-13T13:00:12.120774602Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 13 13:00:12.121299 containerd[1580]: time="2025-05-13T13:00:12.121260674Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 13 13:00:13.091843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515367034.mount: Deactivated successfully.
May 13 13:00:16.861484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 13:00:16.863170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 13:00:17.919865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 13:00:17.924205 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 13:00:18.282913 kubelet[2201]: E0513 13:00:18.282784 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 13:00:18.286547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 13:00:18.286737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 13:00:18.287115 systemd[1]: kubelet.service: Consumed 182ms CPU time, 96.3M memory peak.
May 13 13:00:18.630816 containerd[1580]: time="2025-05-13T13:00:18.630745888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:18.631562 containerd[1580]: time="2025-05-13T13:00:18.631511494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 13 13:00:18.632864 containerd[1580]: time="2025-05-13T13:00:18.632823014Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:18.635417 containerd[1580]: time="2025-05-13T13:00:18.635384799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 13:00:18.636753 containerd[1580]: time="2025-05-13T13:00:18.636696850Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.515402994s"
May 13 13:00:18.636753 containerd[1580]: time="2025-05-13T13:00:18.636742586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 13 13:00:20.701585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 13:00:20.701784 systemd[1]: kubelet.service: Consumed 182ms CPU time, 96.3M memory peak.
May 13 13:00:20.704332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 13:00:20.728321 systemd[1]: Reload requested from client PID 2241 ('systemctl') (unit session-7.scope)...
May 13 13:00:20.728336 systemd[1]: Reloading...
May 13 13:00:20.809988 zram_generator::config[2286]: No configuration found.
May 13 13:00:21.008577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 13:00:21.122987 systemd[1]: Reloading finished in 394 ms.
May 13 13:00:21.170705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 13:00:21.175041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 13:00:21.180634 systemd[1]: kubelet.service: Deactivated successfully.
May 13 13:00:21.180901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 13:00:21.180949 systemd[1]: kubelet.service: Consumed 131ms CPU time, 83.6M memory peak.
May 13 13:00:21.182561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 13:00:21.345330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 13:00:21.357292 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 13:00:21.393305 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 13:00:21.393305 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 13:00:21.393305 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 13:00:21.393699 kubelet[2333]: I0513 13:00:21.393397 2333 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 13:00:21.698271 kubelet[2333]: I0513 13:00:21.698222 2333 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 13:00:21.698271 kubelet[2333]: I0513 13:00:21.698255 2333 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 13:00:21.698512 kubelet[2333]: I0513 13:00:21.698489 2333 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 13:00:21.718152 kubelet[2333]: I0513 13:00:21.718103 2333 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 13:00:21.718724 kubelet[2333]: E0513 13:00:21.718669 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:21.725988 kubelet[2333]: I0513 13:00:21.725965 2333 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 13:00:21.731592 kubelet[2333]: I0513 13:00:21.731565 2333 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 13:00:21.732391 kubelet[2333]: I0513 13:00:21.732363 2333 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 13:00:21.732555 kubelet[2333]: I0513 13:00:21.732518 2333 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 13:00:21.732722 kubelet[2333]: I0513 13:00:21.732544 2333 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 13:00:21.732820 kubelet[2333]: I0513 13:00:21.732725 2333 topology_manager.go:138] "Creating topology manager with none policy"
May 13 13:00:21.732820 kubelet[2333]: I0513 13:00:21.732733 2333 container_manager_linux.go:300] "Creating device plugin manager"
May 13 13:00:21.732864 kubelet[2333]: I0513 13:00:21.732839 2333 state_mem.go:36] "Initialized new in-memory state store"
May 13 13:00:21.734037 kubelet[2333]: I0513 13:00:21.734011 2333 kubelet.go:408] "Attempting to sync node with API server"
May 13 13:00:21.734037 kubelet[2333]: I0513 13:00:21.734037 2333 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 13:00:21.734096 kubelet[2333]: I0513 13:00:21.734067 2333 kubelet.go:314] "Adding apiserver pod source"
May 13 13:00:21.734096 kubelet[2333]: I0513 13:00:21.734076 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 13:00:21.738404 kubelet[2333]: W0513 13:00:21.738346 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:21.738404 kubelet[2333]: E0513 13:00:21.738401 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:21.739226 kubelet[2333]: W0513 13:00:21.739184 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:21.739283 kubelet[2333]: E0513 13:00:21.739231 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:21.739996 kubelet[2333]: I0513 13:00:21.739977 2333 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 13 13:00:21.741573 kubelet[2333]: I0513 13:00:21.741551 2333 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 13:00:21.742021 kubelet[2333]: W0513 13:00:21.741999 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 13:00:21.742569 kubelet[2333]: I0513 13:00:21.742550 2333 server.go:1269] "Started kubelet"
May 13 13:00:21.742701 kubelet[2333]: I0513 13:00:21.742657 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 13:00:21.743163 kubelet[2333]: I0513 13:00:21.743144 2333 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 13:00:21.743284 kubelet[2333]: I0513 13:00:21.743267 2333 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 13:00:21.743989 kubelet[2333]: I0513 13:00:21.743964 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 13:00:21.744318 kubelet[2333]: I0513 13:00:21.743820 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 13:00:21.744512 kubelet[2333]: I0513 13:00:21.744492 2333 server.go:460] "Adding debug handlers to kubelet server"
May 13 13:00:21.746201 kubelet[2333]: I0513 13:00:21.746057 2333 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 13:00:21.746353 kubelet[2333]: I0513 13:00:21.746325 2333 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 13:00:21.746396 kubelet[2333]: I0513 13:00:21.746385 2333 reconciler.go:26] "Reconciler: start to sync state"
May 13 13:00:21.746713 kubelet[2333]: W0513 13:00:21.746625 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:21.746713 kubelet[2333]: E0513 13:00:21.746671 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:21.746906 kubelet[2333]: E0513 13:00:21.745084 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f17af3667cd35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 13:00:21.742529845 +0000 UTC m=+0.381458188,LastTimestamp:2025-05-13 13:00:21.742529845 +0000 UTC m=+0.381458188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 13:00:21.747208 kubelet[2333]: I0513 13:00:21.747063 2333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 13:00:21.747208 kubelet[2333]: E0513 13:00:21.747080 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:21.747208 kubelet[2333]: E0513 13:00:21.747149 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms"
May 13 13:00:21.748014 kubelet[2333]: E0513 13:00:21.747909 2333 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 13:00:21.748224 kubelet[2333]: I0513 13:00:21.748192 2333 factory.go:221] Registration of the containerd container factory successfully
May 13 13:00:21.748224 kubelet[2333]: I0513 13:00:21.748208 2333 factory.go:221] Registration of the systemd container factory successfully
May 13 13:00:21.762556 kubelet[2333]: I0513 13:00:21.762497 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 13:00:21.762661 kubelet[2333]: I0513 13:00:21.762623 2333 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 13:00:21.762722 kubelet[2333]: I0513 13:00:21.762711 2333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 13:00:21.762774 kubelet[2333]: I0513 13:00:21.762764 2333 state_mem.go:36] "Initialized new in-memory state store"
May 13 13:00:21.763733 kubelet[2333]: I0513 13:00:21.763706 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 13:00:21.763772 kubelet[2333]: I0513 13:00:21.763751 2333 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 13:00:21.763772 kubelet[2333]: I0513 13:00:21.763768 2333 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 13:00:21.763820 kubelet[2333]: E0513 13:00:21.763802 2333 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 13:00:21.764901 kubelet[2333]: W0513 13:00:21.764675 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:21.764901 kubelet[2333]: E0513 13:00:21.764723 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:21.848036 kubelet[2333]: E0513 13:00:21.847996 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:21.864256 kubelet[2333]: E0513 13:00:21.864232 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 13:00:21.947739 kubelet[2333]: E0513 13:00:21.947705 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms"
May 13 13:00:21.948263 kubelet[2333]: E0513 13:00:21.948245 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.048723 kubelet[2333]: E0513 13:00:22.048654 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.064872 kubelet[2333]: E0513 13:00:22.064826 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 13:00:22.149279 kubelet[2333]: E0513 13:00:22.149221 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.250183 kubelet[2333]: E0513 13:00:22.250149 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.348858 kubelet[2333]: E0513 13:00:22.348757 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms"
May 13 13:00:22.350985 kubelet[2333]: E0513 13:00:22.350931 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.451468 kubelet[2333]: E0513 13:00:22.451438 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.465689 kubelet[2333]: E0513 13:00:22.465624 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 13:00:22.552160 kubelet[2333]: E0513 13:00:22.552122 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.652703 kubelet[2333]: E0513 13:00:22.652666 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.753093 kubelet[2333]: E0513 13:00:22.753056 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.820829 kubelet[2333]: W0513 13:00:22.820749 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:22.821000 kubelet[2333]: E0513 13:00:22.820833 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:22.853470 kubelet[2333]: E0513 13:00:22.853433 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:22.954066 kubelet[2333]: E0513 13:00:22.953973 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.034549 kubelet[2333]: W0513 13:00:23.034492 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:23.034611 kubelet[2333]: E0513 13:00:23.034556 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:23.054201 kubelet[2333]: E0513 13:00:23.054172 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.094818 kubelet[2333]: W0513 13:00:23.094789 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:23.094865 kubelet[2333]: E0513 13:00:23.094853 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:23.149692 kubelet[2333]: E0513 13:00:23.149662 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s"
May 13 13:00:23.154943 kubelet[2333]: E0513 13:00:23.154897 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.236657 kubelet[2333]: W0513 13:00:23.236542 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 13 13:00:23.236657 kubelet[2333]: E0513 13:00:23.236586 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:23.255427 kubelet[2333]: E0513 13:00:23.255390 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.266609 kubelet[2333]: E0513 13:00:23.266577 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 13:00:23.356108 kubelet[2333]: E0513 13:00:23.356073 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.456628 kubelet[2333]: E0513 13:00:23.456590 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.557206 kubelet[2333]: E0513 13:00:23.557083 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.657636 kubelet[2333]: E0513 13:00:23.657596 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.758008 kubelet[2333]: E0513 13:00:23.757979 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.827239 kubelet[2333]: I0513 13:00:23.827167 2333 policy_none.go:49] "None policy: Start"
May 13 13:00:23.827873 kubelet[2333]: I0513 13:00:23.827852 2333 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 13:00:23.827926 kubelet[2333]: I0513 13:00:23.827882 2333 state_mem.go:35] "Initializing new in-memory state store"
May 13 13:00:23.847658 kubelet[2333]: E0513 13:00:23.847632 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 13 13:00:23.859100 kubelet[2333]: E0513 13:00:23.859071 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.959662 kubelet[2333]: E0513 13:00:23.959624 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 13:00:23.982734 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 13:00:23.995837 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 13:00:23.999154 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 13:00:24.007819 kubelet[2333]: I0513 13:00:24.007787 2333 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 13:00:24.008025 kubelet[2333]: I0513 13:00:24.008003 2333 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 13:00:24.008058 kubelet[2333]: I0513 13:00:24.008016 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 13:00:24.008248 kubelet[2333]: I0513 13:00:24.008179 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 13:00:24.009277 kubelet[2333]: E0513 13:00:24.009248 2333 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 13:00:24.109535 kubelet[2333]: I0513 13:00:24.109430 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 13:00:24.109830 kubelet[2333]: E0513 13:00:24.109787 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
May 13 13:00:24.311718 kubelet[2333]: I0513 13:00:24.311677 2333 kubelet_node_status.go:72] "Attempting to register node"
node="localhost" May 13 13:00:24.312059 kubelet[2333]: E0513 13:00:24.312026 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 13 13:00:24.713819 kubelet[2333]: I0513 13:00:24.713756 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 13:00:24.714280 kubelet[2333]: E0513 13:00:24.714102 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 13 13:00:24.750756 kubelet[2333]: E0513 13:00:24.750686 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="3.2s" May 13 13:00:24.874541 systemd[1]: Created slice kubepods-burstable-poda1e99c7da5068838e0fb18dc96c416d8.slice - libcontainer container kubepods-burstable-poda1e99c7da5068838e0fb18dc96c416d8.slice. May 13 13:00:24.899467 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 13:00:24.912709 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 13 13:00:24.963638 kubelet[2333]: I0513 13:00:24.963601 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:24.963703 kubelet[2333]: I0513 13:00:24.963659 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:24.963703 kubelet[2333]: I0513 13:00:24.963684 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 13:00:24.963764 kubelet[2333]: I0513 13:00:24.963702 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1e99c7da5068838e0fb18dc96c416d8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1e99c7da5068838e0fb18dc96c416d8\") " pod="kube-system/kube-apiserver-localhost" May 13 13:00:24.963764 kubelet[2333]: I0513 13:00:24.963720 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1e99c7da5068838e0fb18dc96c416d8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1e99c7da5068838e0fb18dc96c416d8\") " 
pod="kube-system/kube-apiserver-localhost" May 13 13:00:24.963764 kubelet[2333]: I0513 13:00:24.963738 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:24.963764 kubelet[2333]: I0513 13:00:24.963757 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:24.963890 kubelet[2333]: I0513 13:00:24.963775 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:24.963890 kubelet[2333]: I0513 13:00:24.963793 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1e99c7da5068838e0fb18dc96c416d8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1e99c7da5068838e0fb18dc96c416d8\") " pod="kube-system/kube-apiserver-localhost" May 13 13:00:25.197769 kubelet[2333]: E0513 13:00:25.197734 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:25.198434 containerd[1580]: time="2025-05-13T13:00:25.198401101Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1e99c7da5068838e0fb18dc96c416d8,Namespace:kube-system,Attempt:0,}" May 13 13:00:25.210678 kubelet[2333]: E0513 13:00:25.210639 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:25.211105 containerd[1580]: time="2025-05-13T13:00:25.211072369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 13:00:25.215408 kubelet[2333]: E0513 13:00:25.215329 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:25.215641 containerd[1580]: time="2025-05-13T13:00:25.215617354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 13:00:25.221393 kubelet[2333]: W0513 13:00:25.221355 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 13 13:00:25.221471 kubelet[2333]: E0513 13:00:25.221404 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 13 13:00:25.226447 containerd[1580]: time="2025-05-13T13:00:25.226373259Z" level=info msg="connecting to shim 6035d04d4c717d67b949b29044af4796074321d8ffacb69fd2e8ec8e104b7883" 
address="unix:///run/containerd/s/4e9fcb99a42b8bd875750111017186c1d9d9e9cf0623ecc3264fa7d2ffb92e5c" namespace=k8s.io protocol=ttrpc version=3 May 13 13:00:25.248676 containerd[1580]: time="2025-05-13T13:00:25.248608316Z" level=info msg="connecting to shim 96ee37e92a2db5255da432ee7af16fab13ce120e8d1d25f44050512b971d360f" address="unix:///run/containerd/s/7b2ba27e43a4dcf767bab225e2673fd695db7883ad56064602c16b43de07faa2" namespace=k8s.io protocol=ttrpc version=3 May 13 13:00:25.255643 containerd[1580]: time="2025-05-13T13:00:25.255590773Z" level=info msg="connecting to shim dd237801662d5ccb536b68414be3308094ecd714f498ff2b617657d3381b7ded" address="unix:///run/containerd/s/b15b7b0a78a3ebc2549711db1e0114339d77aa9d376b7cd3b1c5639afa64fe3a" namespace=k8s.io protocol=ttrpc version=3 May 13 13:00:25.257180 systemd[1]: Started cri-containerd-6035d04d4c717d67b949b29044af4796074321d8ffacb69fd2e8ec8e104b7883.scope - libcontainer container 6035d04d4c717d67b949b29044af4796074321d8ffacb69fd2e8ec8e104b7883. May 13 13:00:25.280101 systemd[1]: Started cri-containerd-96ee37e92a2db5255da432ee7af16fab13ce120e8d1d25f44050512b971d360f.scope - libcontainer container 96ee37e92a2db5255da432ee7af16fab13ce120e8d1d25f44050512b971d360f. 
May 13 13:00:25.283595 kubelet[2333]: W0513 13:00:25.283564 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 13 13:00:25.283809 kubelet[2333]: E0513 13:00:25.283603 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 13 13:00:25.285975 systemd[1]: Started cri-containerd-dd237801662d5ccb536b68414be3308094ecd714f498ff2b617657d3381b7ded.scope - libcontainer container dd237801662d5ccb536b68414be3308094ecd714f498ff2b617657d3381b7ded. May 13 13:00:25.324456 containerd[1580]: time="2025-05-13T13:00:25.324417202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1e99c7da5068838e0fb18dc96c416d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6035d04d4c717d67b949b29044af4796074321d8ffacb69fd2e8ec8e104b7883\"" May 13 13:00:25.326562 kubelet[2333]: E0513 13:00:25.326538 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:25.329169 containerd[1580]: time="2025-05-13T13:00:25.329134550Z" level=info msg="CreateContainer within sandbox \"6035d04d4c717d67b949b29044af4796074321d8ffacb69fd2e8ec8e104b7883\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 13:00:25.331867 containerd[1580]: time="2025-05-13T13:00:25.331826910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"96ee37e92a2db5255da432ee7af16fab13ce120e8d1d25f44050512b971d360f\"" May 13 13:00:25.332592 kubelet[2333]: E0513 13:00:25.332516 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:25.334383 containerd[1580]: time="2025-05-13T13:00:25.334356916Z" level=info msg="CreateContainer within sandbox \"96ee37e92a2db5255da432ee7af16fab13ce120e8d1d25f44050512b971d360f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 13:00:25.352726 containerd[1580]: time="2025-05-13T13:00:25.352690294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd237801662d5ccb536b68414be3308094ecd714f498ff2b617657d3381b7ded\"" May 13 13:00:25.353187 kubelet[2333]: E0513 13:00:25.353160 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:25.354342 containerd[1580]: time="2025-05-13T13:00:25.354303149Z" level=info msg="CreateContainer within sandbox \"dd237801662d5ccb536b68414be3308094ecd714f498ff2b617657d3381b7ded\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 13:00:25.516201 kubelet[2333]: I0513 13:00:25.516100 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 13:00:25.516516 kubelet[2333]: E0513 13:00:25.516440 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 13 13:00:25.571035 containerd[1580]: time="2025-05-13T13:00:25.570994059Z" level=info msg="Container f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4: CDI devices 
from CRI Config.CDIDevices: []" May 13 13:00:25.592303 kubelet[2333]: W0513 13:00:25.592258 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 13 13:00:25.592303 kubelet[2333]: E0513 13:00:25.592297 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 13 13:00:25.673247 containerd[1580]: time="2025-05-13T13:00:25.673207052Z" level=info msg="Container 0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:25.720494 kubelet[2333]: W0513 13:00:25.720453 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 13 13:00:25.720494 kubelet[2333]: E0513 13:00:25.720482 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 13 13:00:25.811257 containerd[1580]: time="2025-05-13T13:00:25.811178286Z" level=info msg="Container 7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:26.193452 containerd[1580]: time="2025-05-13T13:00:26.193414330Z" level=info msg="CreateContainer within sandbox 
\"96ee37e92a2db5255da432ee7af16fab13ce120e8d1d25f44050512b971d360f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4\"" May 13 13:00:26.193940 containerd[1580]: time="2025-05-13T13:00:26.193917397Z" level=info msg="StartContainer for \"0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4\"" May 13 13:00:26.195122 containerd[1580]: time="2025-05-13T13:00:26.195093910Z" level=info msg="connecting to shim 0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4" address="unix:///run/containerd/s/7b2ba27e43a4dcf767bab225e2673fd695db7883ad56064602c16b43de07faa2" protocol=ttrpc version=3 May 13 13:00:26.215084 systemd[1]: Started cri-containerd-0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4.scope - libcontainer container 0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4. May 13 13:00:26.489053 containerd[1580]: time="2025-05-13T13:00:26.488806964Z" level=info msg="CreateContainer within sandbox \"6035d04d4c717d67b949b29044af4796074321d8ffacb69fd2e8ec8e104b7883\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4\"" May 13 13:00:26.490021 containerd[1580]: time="2025-05-13T13:00:26.489531678Z" level=info msg="StartContainer for \"f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4\"" May 13 13:00:26.490021 containerd[1580]: time="2025-05-13T13:00:26.489779775Z" level=info msg="CreateContainer within sandbox \"dd237801662d5ccb536b68414be3308094ecd714f498ff2b617657d3381b7ded\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8\"" May 13 13:00:26.490980 containerd[1580]: time="2025-05-13T13:00:26.490471205Z" level=info msg="StartContainer for \"0e6645b568077ae468c67215c1cda6ff922bbc87d721db6412931a8bdd8870c4\" returns 
successfully" May 13 13:00:26.490980 containerd[1580]: time="2025-05-13T13:00:26.490589072Z" level=info msg="StartContainer for \"7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8\"" May 13 13:00:26.491335 containerd[1580]: time="2025-05-13T13:00:26.491142597Z" level=info msg="connecting to shim f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4" address="unix:///run/containerd/s/4e9fcb99a42b8bd875750111017186c1d9d9e9cf0623ecc3264fa7d2ffb92e5c" protocol=ttrpc version=3 May 13 13:00:26.491707 containerd[1580]: time="2025-05-13T13:00:26.491685321Z" level=info msg="connecting to shim 7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8" address="unix:///run/containerd/s/b15b7b0a78a3ebc2549711db1e0114339d77aa9d376b7cd3b1c5639afa64fe3a" protocol=ttrpc version=3 May 13 13:00:26.514084 systemd[1]: Started cri-containerd-7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8.scope - libcontainer container 7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8. May 13 13:00:26.517802 systemd[1]: Started cri-containerd-f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4.scope - libcontainer container f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4. 
May 13 13:00:26.763295 containerd[1580]: time="2025-05-13T13:00:26.763180128Z" level=info msg="StartContainer for \"7b43d69a94c581a4b5cb40abcf7916b4254cd9f24c826b66dba82311c1ea96c8\" returns successfully" May 13 13:00:26.765447 containerd[1580]: time="2025-05-13T13:00:26.765419706Z" level=info msg="StartContainer for \"f1780b6379fb0f540a8ad608e2919059d5cea28e7964ed7fba03fa8eacb107d4\" returns successfully" May 13 13:00:26.781973 kubelet[2333]: E0513 13:00:26.780599 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:26.781973 kubelet[2333]: E0513 13:00:26.780798 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:26.786569 kubelet[2333]: E0513 13:00:26.786533 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:27.119030 kubelet[2333]: I0513 13:00:27.118986 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 13:00:27.788096 kubelet[2333]: E0513 13:00:27.788054 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:27.788525 kubelet[2333]: E0513 13:00:27.788264 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:27.788833 kubelet[2333]: E0513 13:00:27.788801 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:27.839772 
kubelet[2333]: I0513 13:00:27.839719 2333 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 13:00:27.839772 kubelet[2333]: E0513 13:00:27.839744 2333 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 13:00:28.065143 kubelet[2333]: E0513 13:00:28.064747 2333 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f17af3667cd35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 13:00:21.742529845 +0000 UTC m=+0.381458188,LastTimestamp:2025-05-13 13:00:21.742529845 +0000 UTC m=+0.381458188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 13:00:28.255183 kubelet[2333]: E0513 13:00:28.255129 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 13:00:28.356036 kubelet[2333]: E0513 13:00:28.355894 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 13:00:28.456498 kubelet[2333]: E0513 13:00:28.456461 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 13:00:28.557045 kubelet[2333]: E0513 13:00:28.557008 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 13:00:28.657697 kubelet[2333]: E0513 13:00:28.657643 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 13:00:28.758784 
kubelet[2333]: E0513 13:00:28.758741 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 13:00:28.789389 kubelet[2333]: E0513 13:00:28.789337 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:29.743288 kubelet[2333]: I0513 13:00:29.743236 2333 apiserver.go:52] "Watching apiserver" May 13 13:00:29.746434 kubelet[2333]: I0513 13:00:29.746406 2333 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 13:00:29.786391 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)... May 13 13:00:29.786407 systemd[1]: Reloading... May 13 13:00:29.866996 zram_generator::config[2666]: No configuration found. May 13 13:00:29.950566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 13:00:30.078461 systemd[1]: Reloading finished in 291 ms. May 13 13:00:30.104331 kubelet[2333]: I0513 13:00:30.104282 2333 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 13:00:30.104441 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 13:00:30.128665 systemd[1]: kubelet.service: Deactivated successfully. May 13 13:00:30.129078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 13:00:30.129150 systemd[1]: kubelet.service: Consumed 807ms CPU time, 116.7M memory peak. May 13 13:00:30.131586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 13:00:30.325386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 13:00:30.333370 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 13:00:30.389446 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 13:00:30.389446 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 13:00:30.389446 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 13:00:30.389834 kubelet[2702]: I0513 13:00:30.389525 2702 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 13:00:30.395690 kubelet[2702]: I0513 13:00:30.395656 2702 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 13:00:30.395690 kubelet[2702]: I0513 13:00:30.395681 2702 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 13:00:30.396634 kubelet[2702]: I0513 13:00:30.396610 2702 server.go:929] "Client rotation is on, will bootstrap in background" May 13 13:00:30.398543 kubelet[2702]: I0513 13:00:30.398519 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 13:00:30.401755 kubelet[2702]: I0513 13:00:30.401665 2702 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 13:00:30.406609 kubelet[2702]: I0513 13:00:30.406578 2702 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 13:00:30.410808 kubelet[2702]: I0513 13:00:30.410767 2702 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 13:00:30.410864 kubelet[2702]: I0513 13:00:30.410858 2702 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 13:00:30.411032 kubelet[2702]: I0513 13:00:30.411001 2702 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 13:00:30.411181 kubelet[2702]: I0513 13:00:30.411028 2702 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nod
efs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 13:00:30.411181 kubelet[2702]: I0513 13:00:30.411176 2702 topology_manager.go:138] "Creating topology manager with none policy" May 13 13:00:30.411181 kubelet[2702]: I0513 13:00:30.411184 2702 container_manager_linux.go:300] "Creating device plugin manager" May 13 13:00:30.411318 kubelet[2702]: I0513 13:00:30.411213 2702 state_mem.go:36] "Initialized new in-memory state store" May 13 13:00:30.411318 kubelet[2702]: I0513 13:00:30.411316 2702 kubelet.go:408] "Attempting to sync node with API server" May 13 13:00:30.411373 kubelet[2702]: I0513 13:00:30.411326 2702 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 13:00:30.411373 kubelet[2702]: I0513 13:00:30.411353 2702 kubelet.go:314] "Adding apiserver pod source" May 13 13:00:30.412024 kubelet[2702]: I0513 13:00:30.411972 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 13:00:30.413971 kubelet[2702]: I0513 13:00:30.412721 2702 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 13:00:30.413971 kubelet[2702]: I0513 13:00:30.413311 2702 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 13:00:30.414125 kubelet[2702]: I0513 13:00:30.414101 2702 server.go:1269] "Started kubelet" May 13 13:00:30.417246 kubelet[2702]: I0513 13:00:30.417219 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 
13:00:30.420813 kubelet[2702]: I0513 13:00:30.420398 2702 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 13:00:30.420813 kubelet[2702]: I0513 13:00:30.420747 2702 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 13:00:30.420899 kubelet[2702]: I0513 13:00:30.420883 2702 reconciler.go:26] "Reconciler: start to sync state" May 13 13:00:30.422518 kubelet[2702]: I0513 13:00:30.422491 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 13:00:30.422746 kubelet[2702]: I0513 13:00:30.422712 2702 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 13:00:30.423332 kubelet[2702]: I0513 13:00:30.423262 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 13:00:30.423685 kubelet[2702]: I0513 13:00:30.423579 2702 server.go:460] "Adding debug handlers to kubelet server" May 13 13:00:30.423685 kubelet[2702]: I0513 13:00:30.423591 2702 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 13:00:30.426548 kubelet[2702]: I0513 13:00:30.426156 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 13:00:30.428018 kubelet[2702]: E0513 13:00:30.427827 2702 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 13:00:30.429283 kubelet[2702]: I0513 13:00:30.429152 2702 factory.go:221] Registration of the containerd container factory successfully May 13 13:00:30.429283 kubelet[2702]: I0513 13:00:30.429278 2702 factory.go:221] Registration of the systemd container factory successfully May 13 13:00:30.430916 kubelet[2702]: I0513 13:00:30.430871 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 13:00:30.432022 kubelet[2702]: I0513 13:00:30.431993 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 13:00:30.432074 kubelet[2702]: I0513 13:00:30.432025 2702 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 13:00:30.432074 kubelet[2702]: I0513 13:00:30.432046 2702 kubelet.go:2321] "Starting kubelet main sync loop" May 13 13:00:30.432121 kubelet[2702]: E0513 13:00:30.432086 2702 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 13:00:30.465062 kubelet[2702]: I0513 13:00:30.465037 2702 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 13:00:30.465429 kubelet[2702]: I0513 13:00:30.465113 2702 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 13:00:30.465429 kubelet[2702]: I0513 13:00:30.465129 2702 state_mem.go:36] "Initialized new in-memory state store" May 13 13:00:30.465429 kubelet[2702]: I0513 13:00:30.465255 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 13:00:30.465429 kubelet[2702]: I0513 13:00:30.465264 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 13:00:30.465429 kubelet[2702]: I0513 13:00:30.465288 2702 policy_none.go:49] "None policy: Start" May 13 13:00:30.465784 kubelet[2702]: I0513 13:00:30.465756 2702 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 13:00:30.465839 
kubelet[2702]: I0513 13:00:30.465831 2702 state_mem.go:35] "Initializing new in-memory state store" May 13 13:00:30.466073 kubelet[2702]: I0513 13:00:30.466041 2702 state_mem.go:75] "Updated machine memory state" May 13 13:00:30.470512 kubelet[2702]: I0513 13:00:30.470491 2702 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 13:00:30.470717 kubelet[2702]: I0513 13:00:30.470649 2702 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 13:00:30.470717 kubelet[2702]: I0513 13:00:30.470672 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 13:00:30.470854 kubelet[2702]: I0513 13:00:30.470818 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 13:00:30.577988 kubelet[2702]: I0513 13:00:30.577926 2702 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 13:00:30.582818 kubelet[2702]: I0513 13:00:30.582798 2702 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 13:00:30.582885 kubelet[2702]: I0513 13:00:30.582862 2702 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 13:00:30.722062 kubelet[2702]: I0513 13:00:30.721998 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1e99c7da5068838e0fb18dc96c416d8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1e99c7da5068838e0fb18dc96c416d8\") " pod="kube-system/kube-apiserver-localhost" May 13 13:00:30.722062 kubelet[2702]: I0513 13:00:30.722030 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1e99c7da5068838e0fb18dc96c416d8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1e99c7da5068838e0fb18dc96c416d8\") " 
pod="kube-system/kube-apiserver-localhost" May 13 13:00:30.722332 kubelet[2702]: I0513 13:00:30.722102 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1e99c7da5068838e0fb18dc96c416d8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1e99c7da5068838e0fb18dc96c416d8\") " pod="kube-system/kube-apiserver-localhost" May 13 13:00:30.722332 kubelet[2702]: I0513 13:00:30.722161 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:30.722332 kubelet[2702]: I0513 13:00:30.722199 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:30.722332 kubelet[2702]: I0513 13:00:30.722216 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 13:00:30.722332 kubelet[2702]: I0513 13:00:30.722238 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 13:00:30.722480 kubelet[2702]: I0513 13:00:30.722265 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:30.722480 kubelet[2702]: I0513 13:00:30.722303 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 13:00:30.789254 sudo[2736]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 13:00:30.789583 sudo[2736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 13:00:30.840494 kubelet[2702]: E0513 13:00:30.840452 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:30.841280 kubelet[2702]: E0513 13:00:30.841129 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:30.841280 kubelet[2702]: E0513 13:00:30.841204 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:31.241715 sudo[2736]: pam_unix(sudo:session): session closed for user root May 13 13:00:31.412484 kubelet[2702]: I0513 13:00:31.412436 2702 
apiserver.go:52] "Watching apiserver" May 13 13:00:31.420968 kubelet[2702]: I0513 13:00:31.420915 2702 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 13:00:31.446896 kubelet[2702]: E0513 13:00:31.446862 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:31.447846 kubelet[2702]: E0513 13:00:31.447408 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:31.447846 kubelet[2702]: E0513 13:00:31.447552 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:31.467002 kubelet[2702]: I0513 13:00:31.466907 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.466885567 podStartE2EDuration="1.466885567s" podCreationTimestamp="2025-05-13 13:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:00:31.466654636 +0000 UTC m=+1.120614477" watchObservedRunningTime="2025-05-13 13:00:31.466885567 +0000 UTC m=+1.120845408" May 13 13:00:31.483424 kubelet[2702]: I0513 13:00:31.483374 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.483350811 podStartE2EDuration="1.483350811s" podCreationTimestamp="2025-05-13 13:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:00:31.483312467 +0000 UTC m=+1.137272309" watchObservedRunningTime="2025-05-13 13:00:31.483350811 
+0000 UTC m=+1.137310642" May 13 13:00:31.483556 kubelet[2702]: I0513 13:00:31.483444 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.483441705 podStartE2EDuration="1.483441705s" podCreationTimestamp="2025-05-13 13:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:00:31.476040215 +0000 UTC m=+1.130000056" watchObservedRunningTime="2025-05-13 13:00:31.483441705 +0000 UTC m=+1.137401536" May 13 13:00:32.447766 kubelet[2702]: E0513 13:00:32.447736 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:32.477559 sudo[1775]: pam_unix(sudo:session): session closed for user root May 13 13:00:32.478934 sshd[1774]: Connection closed by 10.0.0.1 port 40534 May 13 13:00:32.479363 sshd-session[1772]: pam_unix(sshd:session): session closed for user core May 13 13:00:32.483739 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:40534.service: Deactivated successfully. May 13 13:00:32.485987 systemd[1]: session-7.scope: Deactivated successfully. May 13 13:00:32.486203 systemd[1]: session-7.scope: Consumed 3.806s CPU time, 265.6M memory peak. May 13 13:00:32.487418 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. May 13 13:00:32.488782 systemd-logind[1555]: Removed session 7. 
May 13 13:00:33.415029 kubelet[2702]: E0513 13:00:33.414947 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:33.449862 kubelet[2702]: E0513 13:00:33.449813 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:36.311383 kubelet[2702]: I0513 13:00:36.311335 2702 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 13:00:36.312143 containerd[1580]: time="2025-05-13T13:00:36.312113946Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 13:00:36.312972 kubelet[2702]: I0513 13:00:36.312549 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 13:00:36.674964 systemd[1]: Created slice kubepods-besteffort-podc542fbc4_e79e_46b7_bf97_ab20b1bf1d20.slice - libcontainer container kubepods-besteffort-podc542fbc4_e79e_46b7_bf97_ab20b1bf1d20.slice. May 13 13:00:36.680128 kubelet[2702]: E0513 13:00:36.679471 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:36.690075 systemd[1]: Created slice kubepods-burstable-pod4be89790_d1c6_4b4d_8215_5932ce70bb39.slice - libcontainer container kubepods-burstable-pod4be89790_d1c6_4b4d_8215_5932ce70bb39.slice. 
May 13 13:00:36.760260 kubelet[2702]: I0513 13:00:36.760211 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c542fbc4-e79e-46b7-bf97-ab20b1bf1d20-xtables-lock\") pod \"kube-proxy-t8qjw\" (UID: \"c542fbc4-e79e-46b7-bf97-ab20b1bf1d20\") " pod="kube-system/kube-proxy-t8qjw" May 13 13:00:36.760260 kubelet[2702]: I0513 13:00:36.760242 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c542fbc4-e79e-46b7-bf97-ab20b1bf1d20-lib-modules\") pod \"kube-proxy-t8qjw\" (UID: \"c542fbc4-e79e-46b7-bf97-ab20b1bf1d20\") " pod="kube-system/kube-proxy-t8qjw" May 13 13:00:36.760260 kubelet[2702]: I0513 13:00:36.760267 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-cgroup\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760449 kubelet[2702]: I0513 13:00:36.760281 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cni-path\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760449 kubelet[2702]: I0513 13:00:36.760295 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-xtables-lock\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760449 kubelet[2702]: I0513 13:00:36.760309 2702 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-hubble-tls\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760449 kubelet[2702]: I0513 13:00:36.760324 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwxq\" (UniqueName: \"kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-kube-api-access-8wwxq\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760449 kubelet[2702]: I0513 13:00:36.760364 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4be89790-d1c6-4b4d-8215-5932ce70bb39-clustermesh-secrets\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760449 kubelet[2702]: I0513 13:00:36.760399 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-hostproc\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760585 kubelet[2702]: I0513 13:00:36.760434 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6kdd\" (UniqueName: \"kubernetes.io/projected/c542fbc4-e79e-46b7-bf97-ab20b1bf1d20-kube-api-access-z6kdd\") pod \"kube-proxy-t8qjw\" (UID: \"c542fbc4-e79e-46b7-bf97-ab20b1bf1d20\") " pod="kube-system/kube-proxy-t8qjw" May 13 13:00:36.760585 kubelet[2702]: I0513 13:00:36.760459 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-net\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760585 kubelet[2702]: I0513 13:00:36.760484 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-bpf-maps\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760585 kubelet[2702]: I0513 13:00:36.760500 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-lib-modules\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760585 kubelet[2702]: I0513 13:00:36.760514 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-config-path\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760695 kubelet[2702]: I0513 13:00:36.760533 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c542fbc4-e79e-46b7-bf97-ab20b1bf1d20-kube-proxy\") pod \"kube-proxy-t8qjw\" (UID: \"c542fbc4-e79e-46b7-bf97-ab20b1bf1d20\") " pod="kube-system/kube-proxy-t8qjw" May 13 13:00:36.760695 kubelet[2702]: I0513 13:00:36.760549 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-run\") pod \"cilium-nrndf\" (UID: 
\"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760695 kubelet[2702]: I0513 13:00:36.760565 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-kernel\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.760695 kubelet[2702]: I0513 13:00:36.760578 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-etc-cni-netd\") pod \"cilium-nrndf\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " pod="kube-system/cilium-nrndf" May 13 13:00:36.890391 kubelet[2702]: E0513 13:00:36.890340 2702 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 13:00:36.890391 kubelet[2702]: E0513 13:00:36.890391 2702 projected.go:194] Error preparing data for projected volume kube-api-access-z6kdd for pod kube-system/kube-proxy-t8qjw: configmap "kube-root-ca.crt" not found May 13 13:00:36.890583 kubelet[2702]: E0513 13:00:36.890450 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c542fbc4-e79e-46b7-bf97-ab20b1bf1d20-kube-api-access-z6kdd podName:c542fbc4-e79e-46b7-bf97-ab20b1bf1d20 nodeName:}" failed. No retries permitted until 2025-05-13 13:00:37.390426244 +0000 UTC m=+7.044386085 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-z6kdd" (UniqueName: "kubernetes.io/projected/c542fbc4-e79e-46b7-bf97-ab20b1bf1d20-kube-api-access-z6kdd") pod "kube-proxy-t8qjw" (UID: "c542fbc4-e79e-46b7-bf97-ab20b1bf1d20") : configmap "kube-root-ca.crt" not found May 13 13:00:36.897473 kubelet[2702]: E0513 13:00:36.896542 2702 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 13:00:36.897473 kubelet[2702]: E0513 13:00:36.896574 2702 projected.go:194] Error preparing data for projected volume kube-api-access-8wwxq for pod kube-system/cilium-nrndf: configmap "kube-root-ca.crt" not found May 13 13:00:36.897473 kubelet[2702]: E0513 13:00:36.896730 2702 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-kube-api-access-8wwxq podName:4be89790-d1c6-4b4d-8215-5932ce70bb39 nodeName:}" failed. No retries permitted until 2025-05-13 13:00:37.39671106 +0000 UTC m=+7.050670901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wwxq" (UniqueName: "kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-kube-api-access-8wwxq") pod "cilium-nrndf" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39") : configmap "kube-root-ca.crt" not found May 13 13:00:37.184198 systemd[1]: Created slice kubepods-besteffort-pod428387a6_827e_418f_ad6e_a40e0add663b.slice - libcontainer container kubepods-besteffort-pod428387a6_827e_418f_ad6e_a40e0add663b.slice. 
May 13 13:00:37.263843 kubelet[2702]: I0513 13:00:37.263808 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpm4k\" (UniqueName: \"kubernetes.io/projected/428387a6-827e-418f-ad6e-a40e0add663b-kube-api-access-hpm4k\") pod \"cilium-operator-5d85765b45-nmfzj\" (UID: \"428387a6-827e-418f-ad6e-a40e0add663b\") " pod="kube-system/cilium-operator-5d85765b45-nmfzj" May 13 13:00:37.263843 kubelet[2702]: I0513 13:00:37.263839 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/428387a6-827e-418f-ad6e-a40e0add663b-cilium-config-path\") pod \"cilium-operator-5d85765b45-nmfzj\" (UID: \"428387a6-827e-418f-ad6e-a40e0add663b\") " pod="kube-system/cilium-operator-5d85765b45-nmfzj" May 13 13:00:37.454864 kubelet[2702]: E0513 13:00:37.454763 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.486490 kubelet[2702]: E0513 13:00:37.486446 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.487084 containerd[1580]: time="2025-05-13T13:00:37.487023968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nmfzj,Uid:428387a6-827e-418f-ad6e-a40e0add663b,Namespace:kube-system,Attempt:0,}" May 13 13:00:37.508509 containerd[1580]: time="2025-05-13T13:00:37.508461763Z" level=info msg="connecting to shim e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642" address="unix:///run/containerd/s/b166270c012773c449be6268bea493e140ec5ab1aff70873c1e3b44958bf814d" namespace=k8s.io protocol=ttrpc version=3 May 13 13:00:37.535081 systemd[1]: Started 
cri-containerd-e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642.scope - libcontainer container e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642. May 13 13:00:37.576649 containerd[1580]: time="2025-05-13T13:00:37.576609628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nmfzj,Uid:428387a6-827e-418f-ad6e-a40e0add663b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\"" May 13 13:00:37.577303 kubelet[2702]: E0513 13:00:37.577280 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.578185 containerd[1580]: time="2025-05-13T13:00:37.578119677Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 13:00:37.587193 kubelet[2702]: E0513 13:00:37.587165 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.587754 containerd[1580]: time="2025-05-13T13:00:37.587547822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8qjw,Uid:c542fbc4-e79e-46b7-bf97-ab20b1bf1d20,Namespace:kube-system,Attempt:0,}" May 13 13:00:37.595702 kubelet[2702]: E0513 13:00:37.595657 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.596134 containerd[1580]: time="2025-05-13T13:00:37.596094123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nrndf,Uid:4be89790-d1c6-4b4d-8215-5932ce70bb39,Namespace:kube-system,Attempt:0,}" May 13 13:00:37.606850 containerd[1580]: time="2025-05-13T13:00:37.606799775Z" level=info 
msg="connecting to shim e171ad440fcb84b1d1cfbc0d371828c646f4fcbf76246e82ab13f9f7ebc6cdd1" address="unix:///run/containerd/s/60834156c8d15d4c73e769298f43f5763f36baa17f696b184af0d619258b5f8a" namespace=k8s.io protocol=ttrpc version=3 May 13 13:00:37.616875 containerd[1580]: time="2025-05-13T13:00:37.616824675Z" level=info msg="connecting to shim 84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c" address="unix:///run/containerd/s/d680342914f7f1bdff750d908a40dd9764eac968afa0f5131f48d51a17891b82" namespace=k8s.io protocol=ttrpc version=3 May 13 13:00:37.635178 systemd[1]: Started cri-containerd-e171ad440fcb84b1d1cfbc0d371828c646f4fcbf76246e82ab13f9f7ebc6cdd1.scope - libcontainer container e171ad440fcb84b1d1cfbc0d371828c646f4fcbf76246e82ab13f9f7ebc6cdd1. May 13 13:00:37.638090 systemd[1]: Started cri-containerd-84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c.scope - libcontainer container 84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c. May 13 13:00:37.664399 containerd[1580]: time="2025-05-13T13:00:37.664345456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8qjw,Uid:c542fbc4-e79e-46b7-bf97-ab20b1bf1d20,Namespace:kube-system,Attempt:0,} returns sandbox id \"e171ad440fcb84b1d1cfbc0d371828c646f4fcbf76246e82ab13f9f7ebc6cdd1\"" May 13 13:00:37.665045 kubelet[2702]: E0513 13:00:37.665025 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.666718 containerd[1580]: time="2025-05-13T13:00:37.666656615Z" level=info msg="CreateContainer within sandbox \"e171ad440fcb84b1d1cfbc0d371828c646f4fcbf76246e82ab13f9f7ebc6cdd1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 13:00:37.667495 containerd[1580]: time="2025-05-13T13:00:37.667469799Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-nrndf,Uid:4be89790-d1c6-4b4d-8215-5932ce70bb39,Namespace:kube-system,Attempt:0,} returns sandbox id \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\"" May 13 13:00:37.668001 kubelet[2702]: E0513 13:00:37.667976 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:37.677402 containerd[1580]: time="2025-05-13T13:00:37.677347178Z" level=info msg="Container 688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:37.688783 containerd[1580]: time="2025-05-13T13:00:37.688739574Z" level=info msg="CreateContainer within sandbox \"e171ad440fcb84b1d1cfbc0d371828c646f4fcbf76246e82ab13f9f7ebc6cdd1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea\"" May 13 13:00:37.689307 containerd[1580]: time="2025-05-13T13:00:37.689279189Z" level=info msg="StartContainer for \"688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea\"" May 13 13:00:37.690783 containerd[1580]: time="2025-05-13T13:00:37.690756435Z" level=info msg="connecting to shim 688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea" address="unix:///run/containerd/s/60834156c8d15d4c73e769298f43f5763f36baa17f696b184af0d619258b5f8a" protocol=ttrpc version=3 May 13 13:00:37.718078 systemd[1]: Started cri-containerd-688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea.scope - libcontainer container 688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea. 
May 13 13:00:37.757210 containerd[1580]: time="2025-05-13T13:00:37.757164586Z" level=info msg="StartContainer for \"688f5d60f584c788e937694fb216c378a9862a5335dbf82d42c6fdfd1bdf32ea\" returns successfully" May 13 13:00:38.457157 kubelet[2702]: E0513 13:00:38.457111 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:38.464398 kubelet[2702]: I0513 13:00:38.464339 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t8qjw" podStartSLOduration=2.464323785 podStartE2EDuration="2.464323785s" podCreationTimestamp="2025-05-13 13:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:00:38.463943734 +0000 UTC m=+8.117903575" watchObservedRunningTime="2025-05-13 13:00:38.464323785 +0000 UTC m=+8.118283626" May 13 13:00:38.958545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797500857.mount: Deactivated successfully. May 13 13:00:39.245683 update_engine[1559]: I20250513 13:00:39.245533 1559 update_attempter.cc:509] Updating boot flags... 
May 13 13:00:40.310223 containerd[1580]: time="2025-05-13T13:00:40.310167231Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 13:00:40.311260 containerd[1580]: time="2025-05-13T13:00:40.311230255Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 13:00:40.312852 containerd[1580]: time="2025-05-13T13:00:40.312822102Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 13:00:40.314263 containerd[1580]: time="2025-05-13T13:00:40.314232344Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.736084334s" May 13 13:00:40.314305 containerd[1580]: time="2025-05-13T13:00:40.314261790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 13:00:40.315292 containerd[1580]: time="2025-05-13T13:00:40.315262687Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 13 13:00:40.316401 containerd[1580]: time="2025-05-13T13:00:40.316360969Z" level=info msg="CreateContainer within sandbox \"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 13:00:40.326587 containerd[1580]: time="2025-05-13T13:00:40.326549225Z" level=info msg="Container e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:40.333095 containerd[1580]: time="2025-05-13T13:00:40.333060605Z" level=info msg="CreateContainer within sandbox \"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\"" May 13 13:00:40.333571 containerd[1580]: time="2025-05-13T13:00:40.333545063Z" level=info msg="StartContainer for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\"" May 13 13:00:40.334336 containerd[1580]: time="2025-05-13T13:00:40.334299583Z" level=info msg="connecting to shim e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793" address="unix:///run/containerd/s/b166270c012773c449be6268bea493e140ec5ab1aff70873c1e3b44958bf814d" protocol=ttrpc version=3 May 13 13:00:40.385174 systemd[1]: Started cri-containerd-e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793.scope - libcontainer container e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793.
May 13 13:00:40.414788 containerd[1580]: time="2025-05-13T13:00:40.414734912Z" level=info msg="StartContainer for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" returns successfully" May 13 13:00:40.464928 kubelet[2702]: E0513 13:00:40.464887 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:41.466318 kubelet[2702]: E0513 13:00:41.466288 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:42.412382 kubelet[2702]: E0513 13:00:42.412338 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:42.468167 kubelet[2702]: I0513 13:00:42.468093 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-nmfzj" podStartSLOduration=2.730771689 podStartE2EDuration="5.468073512s" podCreationTimestamp="2025-05-13 13:00:37 +0000 UTC" firstStartedPulling="2025-05-13 13:00:37.577758861 +0000 UTC m=+7.231718692" lastFinishedPulling="2025-05-13 13:00:40.315060674 +0000 UTC m=+9.969020515" observedRunningTime="2025-05-13 13:00:40.474986844 +0000 UTC m=+10.128946685" watchObservedRunningTime="2025-05-13 13:00:42.468073512 +0000 UTC m=+12.122033353" May 13 13:00:42.468167 kubelet[2702]: E0513 13:00:42.468170 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:43.419290 kubelet[2702]: E0513 13:00:43.419252 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:00:50.859488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610098194.mount: Deactivated successfully. May 13 13:00:54.329249 containerd[1580]: time="2025-05-13T13:00:54.329174177Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 13:00:54.330213 containerd[1580]: time="2025-05-13T13:00:54.330163681Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 13:00:54.331617 containerd[1580]: time="2025-05-13T13:00:54.331565952Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 13:00:54.332736 containerd[1580]: time="2025-05-13T13:00:54.332693265Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.017393508s" May 13 13:00:54.332736 containerd[1580]: time="2025-05-13T13:00:54.332726398Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 13:00:54.335274 containerd[1580]: time="2025-05-13T13:00:54.335243819Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 13:00:54.345402 containerd[1580]: time="2025-05-13T13:00:54.345337793Z" level=info msg="Container cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345: CDI devices from CRI Config.CDIDevices: []"
May 13 13:00:54.352911 containerd[1580]: time="2025-05-13T13:00:54.352860302Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\"" May 13 13:00:54.353416 containerd[1580]: time="2025-05-13T13:00:54.353381784Z" level=info msg="StartContainer for \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\"" May 13 13:00:54.354533 containerd[1580]: time="2025-05-13T13:00:54.354498807Z" level=info msg="connecting to shim cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345" address="unix:///run/containerd/s/d680342914f7f1bdff750d908a40dd9764eac968afa0f5131f48d51a17891b82" protocol=ttrpc version=3 May 13 13:00:54.376088 systemd[1]: Started cri-containerd-cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345.scope - libcontainer container cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345. May 13 13:00:54.406764 containerd[1580]: time="2025-05-13T13:00:54.406710981Z" level=info msg="StartContainer for \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" returns successfully" May 13 13:00:54.416910 systemd[1]: cri-containerd-cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345.scope: Deactivated successfully.
May 13 13:00:54.419240 containerd[1580]: time="2025-05-13T13:00:54.419189956Z" level=info msg="received exit event container_id:\"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" id:\"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" pid:3186 exited_at:{seconds:1747141254 nanos:418776027}" May 13 13:00:54.419340 containerd[1580]: time="2025-05-13T13:00:54.419266861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" id:\"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" pid:3186 exited_at:{seconds:1747141254 nanos:418776027}" May 13 13:00:54.440835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345-rootfs.mount: Deactivated successfully. May 13 13:00:54.486040 kubelet[2702]: E0513 13:00:54.485693 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:55.488638 kubelet[2702]: E0513 13:00:55.488593 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:55.490745 containerd[1580]: time="2025-05-13T13:00:55.490615771Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 13:00:55.504386 containerd[1580]: time="2025-05-13T13:00:55.504332441Z" level=info msg="Container 8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:55.508006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494895642.mount: Deactivated successfully. 
May 13 13:00:55.510468 containerd[1580]: time="2025-05-13T13:00:55.510434111Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\"" May 13 13:00:55.510964 containerd[1580]: time="2025-05-13T13:00:55.510866174Z" level=info msg="StartContainer for \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\"" May 13 13:00:55.511563 containerd[1580]: time="2025-05-13T13:00:55.511536456Z" level=info msg="connecting to shim 8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4" address="unix:///run/containerd/s/d680342914f7f1bdff750d908a40dd9764eac968afa0f5131f48d51a17891b82" protocol=ttrpc version=3 May 13 13:00:55.533079 systemd[1]: Started cri-containerd-8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4.scope - libcontainer container 8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4. May 13 13:00:55.559810 containerd[1580]: time="2025-05-13T13:00:55.559768540Z" level=info msg="StartContainer for \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" returns successfully" May 13 13:00:55.572970 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 13:00:55.573568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 13:00:55.573750 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 13:00:55.575371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 13 13:00:55.577234 containerd[1580]: time="2025-05-13T13:00:55.577186188Z" level=info msg="received exit event container_id:\"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" id:\"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" pid:3233 exited_at:{seconds:1747141255 nanos:576899449}" May 13 13:00:55.577545 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 13:00:55.577743 containerd[1580]: time="2025-05-13T13:00:55.577679347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" id:\"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" pid:3233 exited_at:{seconds:1747141255 nanos:576899449}" May 13 13:00:55.578410 systemd[1]: cri-containerd-8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4.scope: Deactivated successfully. May 13 13:00:55.608799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 13:00:56.494216 kubelet[2702]: E0513 13:00:56.494147 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:56.496898 containerd[1580]: time="2025-05-13T13:00:56.496859050Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 13:00:56.505652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4-rootfs.mount: Deactivated successfully. 
May 13 13:00:56.599626 containerd[1580]: time="2025-05-13T13:00:56.599581255Z" level=info msg="Container 598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:56.603251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822374650.mount: Deactivated successfully. May 13 13:00:56.820859 containerd[1580]: time="2025-05-13T13:00:56.820762551Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\"" May 13 13:00:56.821192 containerd[1580]: time="2025-05-13T13:00:56.821167594Z" level=info msg="StartContainer for \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\"" May 13 13:00:56.822442 containerd[1580]: time="2025-05-13T13:00:56.822412468Z" level=info msg="connecting to shim 598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce" address="unix:///run/containerd/s/d680342914f7f1bdff750d908a40dd9764eac968afa0f5131f48d51a17891b82" protocol=ttrpc version=3 May 13 13:00:56.846105 systemd[1]: Started cri-containerd-598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce.scope - libcontainer container 598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce. May 13 13:00:56.883847 systemd[1]: cri-containerd-598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce.scope: Deactivated successfully. 
May 13 13:00:56.885452 containerd[1580]: time="2025-05-13T13:00:56.885415057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" id:\"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" pid:3282 exited_at:{seconds:1747141256 nanos:885185955}" May 13 13:00:56.895048 containerd[1580]: time="2025-05-13T13:00:56.895019478Z" level=info msg="received exit event container_id:\"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" id:\"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" pid:3282 exited_at:{seconds:1747141256 nanos:885185955}" May 13 13:00:56.896993 containerd[1580]: time="2025-05-13T13:00:56.896945042Z" level=info msg="StartContainer for \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" returns successfully" May 13 13:00:56.916640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce-rootfs.mount: Deactivated successfully. May 13 13:00:57.358217 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:52954.service - OpenSSH per-connection server daemon (10.0.0.1:52954). May 13 13:00:57.413905 sshd[3309]: Accepted publickey for core from 10.0.0.1 port 52954 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 13:00:57.415445 sshd-session[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 13:00:57.419422 systemd-logind[1555]: New session 8 of user core. May 13 13:00:57.429088 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 13 13:00:57.503620 kubelet[2702]: E0513 13:00:57.503582 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:57.508277 containerd[1580]: time="2025-05-13T13:00:57.508231893Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 13:00:57.526582 containerd[1580]: time="2025-05-13T13:00:57.526535094Z" level=info msg="Container eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:57.530584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610965374.mount: Deactivated successfully. May 13 13:00:57.535899 containerd[1580]: time="2025-05-13T13:00:57.535786887Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\"" May 13 13:00:57.536463 containerd[1580]: time="2025-05-13T13:00:57.536441108Z" level=info msg="StartContainer for \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\"" May 13 13:00:57.537391 containerd[1580]: time="2025-05-13T13:00:57.537368053Z" level=info msg="connecting to shim eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a" address="unix:///run/containerd/s/d680342914f7f1bdff750d908a40dd9764eac968afa0f5131f48d51a17891b82" protocol=ttrpc version=3 May 13 13:00:57.563232 systemd[1]: Started cri-containerd-eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a.scope - libcontainer container eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a. 
May 13 13:00:57.586004 sshd[3311]: Connection closed by 10.0.0.1 port 52954 May 13 13:00:57.586430 sshd-session[3309]: pam_unix(sshd:session): session closed for user core May 13 13:00:57.592114 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:52954.service: Deactivated successfully. May 13 13:00:57.595636 systemd[1]: session-8.scope: Deactivated successfully. May 13 13:00:57.598081 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. May 13 13:00:57.598355 systemd[1]: cri-containerd-eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a.scope: Deactivated successfully. May 13 13:00:57.600023 containerd[1580]: time="2025-05-13T13:00:57.599989096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" id:\"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" pid:3333 exited_at:{seconds:1747141257 nanos:599570168}" May 13 13:00:57.601745 systemd-logind[1555]: Removed session 8. May 13 13:00:57.602367 containerd[1580]: time="2025-05-13T13:00:57.602329602Z" level=info msg="received exit event container_id:\"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" id:\"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" pid:3333 exited_at:{seconds:1747141257 nanos:599570168}" May 13 13:00:57.609752 containerd[1580]: time="2025-05-13T13:00:57.609627287Z" level=info msg="StartContainer for \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" returns successfully" May 13 13:00:57.622369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a-rootfs.mount: Deactivated successfully. 
May 13 13:00:58.506490 kubelet[2702]: E0513 13:00:58.506456 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:58.508071 containerd[1580]: time="2025-05-13T13:00:58.508009361Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 13:00:58.519191 containerd[1580]: time="2025-05-13T13:00:58.519146549Z" level=info msg="Container e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc: CDI devices from CRI Config.CDIDevices: []" May 13 13:00:58.525561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543523079.mount: Deactivated successfully. May 13 13:00:58.528218 containerd[1580]: time="2025-05-13T13:00:58.528187923Z" level=info msg="CreateContainer within sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\"" May 13 13:00:58.528673 containerd[1580]: time="2025-05-13T13:00:58.528650723Z" level=info msg="StartContainer for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\"" May 13 13:00:58.529505 containerd[1580]: time="2025-05-13T13:00:58.529435350Z" level=info msg="connecting to shim e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc" address="unix:///run/containerd/s/d680342914f7f1bdff750d908a40dd9764eac968afa0f5131f48d51a17891b82" protocol=ttrpc version=3 May 13 13:00:58.553114 systemd[1]: Started cri-containerd-e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc.scope - libcontainer container e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc. 
May 13 13:00:58.587474 containerd[1580]: time="2025-05-13T13:00:58.587431736Z" level=info msg="StartContainer for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" returns successfully" May 13 13:00:58.651182 containerd[1580]: time="2025-05-13T13:00:58.651139449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" id:\"a2662b4b1852c220953ef24dacedae1fe2377b5e845d58be16803119d0ada1b8\" pid:3406 exited_at:{seconds:1747141258 nanos:650718698}" May 13 13:00:58.680349 kubelet[2702]: I0513 13:00:58.679877 2702 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 13:00:58.706211 kubelet[2702]: I0513 13:00:58.705939 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd8982c-3231-434e-9e26-4b4cf0e387ab-config-volume\") pod \"coredns-6f6b679f8f-52qft\" (UID: \"0fd8982c-3231-434e-9e26-4b4cf0e387ab\") " pod="kube-system/coredns-6f6b679f8f-52qft" May 13 13:00:58.706211 kubelet[2702]: I0513 13:00:58.706004 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rv8f\" (UniqueName: \"kubernetes.io/projected/0fd8982c-3231-434e-9e26-4b4cf0e387ab-kube-api-access-6rv8f\") pod \"coredns-6f6b679f8f-52qft\" (UID: \"0fd8982c-3231-434e-9e26-4b4cf0e387ab\") " pod="kube-system/coredns-6f6b679f8f-52qft" May 13 13:00:58.706211 kubelet[2702]: I0513 13:00:58.706053 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvtr\" (UniqueName: \"kubernetes.io/projected/65f908b7-010b-47ad-b950-aed618c3fb8d-kube-api-access-kxvtr\") pod \"coredns-6f6b679f8f-qlxzg\" (UID: \"65f908b7-010b-47ad-b950-aed618c3fb8d\") " pod="kube-system/coredns-6f6b679f8f-qlxzg" May 13 13:00:58.706211 kubelet[2702]: I0513 13:00:58.706086 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65f908b7-010b-47ad-b950-aed618c3fb8d-config-volume\") pod \"coredns-6f6b679f8f-qlxzg\" (UID: \"65f908b7-010b-47ad-b950-aed618c3fb8d\") " pod="kube-system/coredns-6f6b679f8f-qlxzg"
May 13 13:00:58.721863 systemd[1]: Created slice kubepods-burstable-pod65f908b7_010b_47ad_b950_aed618c3fb8d.slice - libcontainer container kubepods-burstable-pod65f908b7_010b_47ad_b950_aed618c3fb8d.slice. May 13 13:00:58.727098 systemd[1]: Created slice kubepods-burstable-pod0fd8982c_3231_434e_9e26_4b4cf0e387ab.slice - libcontainer container kubepods-burstable-pod0fd8982c_3231_434e_9e26_4b4cf0e387ab.slice. May 13 13:00:59.025809 kubelet[2702]: E0513 13:00:59.025764 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:59.026568 containerd[1580]: time="2025-05-13T13:00:59.026527582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qlxzg,Uid:65f908b7-010b-47ad-b950-aed618c3fb8d,Namespace:kube-system,Attempt:0,}" May 13 13:00:59.029634 kubelet[2702]: E0513 13:00:59.029586 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:59.031984 containerd[1580]: time="2025-05-13T13:00:59.030656431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-52qft,Uid:0fd8982c-3231-434e-9e26-4b4cf0e387ab,Namespace:kube-system,Attempt:0,}" May 13 13:00:59.511686 kubelet[2702]: E0513 13:00:59.511648 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:00:59.527802 kubelet[2702]: I0513 13:00:59.527255 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nrndf" podStartSLOduration=6.861895346 podStartE2EDuration="23.527236401s" podCreationTimestamp="2025-05-13 13:00:36 +0000 UTC" firstStartedPulling="2025-05-13 13:00:37.668326466 +0000 UTC m=+7.322286307" lastFinishedPulling="2025-05-13 13:00:54.333667521 +0000 UTC m=+23.987627362" observedRunningTime="2025-05-13 13:00:59.52654493 +0000 UTC m=+29.180504771" watchObservedRunningTime="2025-05-13 13:00:59.527236401 +0000 UTC m=+29.181196242"
May 13 13:01:00.513614 kubelet[2702]: E0513 13:01:00.513577 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:01:00.780506 systemd-networkd[1495]: cilium_host: Link UP May 13 13:01:00.781205 systemd-networkd[1495]: cilium_net: Link UP May 13 13:01:00.781656 systemd-networkd[1495]: cilium_net: Gained carrier May 13 13:01:00.782146 systemd-networkd[1495]: cilium_host: Gained carrier May 13 13:01:00.874751 systemd-networkd[1495]: cilium_vxlan: Link UP May 13 13:01:00.874758 systemd-networkd[1495]: cilium_vxlan: Gained carrier May 13 13:01:01.077068 kernel: NET: Registered PF_ALG protocol family May 13 13:01:01.515671 kubelet[2702]: E0513 13:01:01.515638 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:01:01.665160 systemd-networkd[1495]: cilium_net: Gained IPv6LL May 13 13:01:01.679134 systemd-networkd[1495]: lxc_health: Link UP May 13 13:01:01.680099 systemd-networkd[1495]: lxc_health: Gained carrier May 13 13:01:01.729107 systemd-networkd[1495]: cilium_host: Gained IPv6LL May 13 13:01:01.985152 systemd-networkd[1495]: cilium_vxlan: Gained IPv6LL May 13 13:01:02.067977 kernel: eth0: renamed from tmpbf2ca May 13 13:01:02.067933 systemd-networkd[1495]: lxc90471402ea68: Link UP
May 13 13:01:02.069162 systemd-networkd[1495]: lxc90471402ea68: Gained carrier May 13 13:01:02.083997 kernel: eth0: renamed from tmp2ebc3 May 13 13:01:02.084912 systemd-networkd[1495]: lxc462aa8ccb820: Link UP May 13 13:01:02.086442 systemd-networkd[1495]: lxc462aa8ccb820: Gained carrier May 13 13:01:02.603268 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:50470.service - OpenSSH per-connection server daemon (10.0.0.1:50470). May 13 13:01:02.657821 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 50470 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo May 13 13:01:02.659628 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 13:01:02.664977 systemd-logind[1555]: New session 9 of user core. May 13 13:01:02.675136 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 13:01:02.799994 sshd[3876]: Connection closed by 10.0.0.1 port 50470 May 13 13:01:02.800360 sshd-session[3874]: pam_unix(sshd:session): session closed for user core May 13 13:01:02.804391 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:50470.service: Deactivated successfully. May 13 13:01:02.806408 systemd[1]: session-9.scope: Deactivated successfully. May 13 13:01:02.807401 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. May 13 13:01:02.808754 systemd-logind[1555]: Removed session 9.
May 13 13:01:03.137225 systemd-networkd[1495]: lxc90471402ea68: Gained IPv6LL May 13 13:01:03.457142 systemd-networkd[1495]: lxc462aa8ccb820: Gained IPv6LL May 13 13:01:03.585133 systemd-networkd[1495]: lxc_health: Gained IPv6LL May 13 13:01:03.597094 kubelet[2702]: E0513 13:01:03.597035 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:01:04.520005 kubelet[2702]: E0513 13:01:04.519975 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:01:05.502509 containerd[1580]: time="2025-05-13T13:01:05.502449049Z" level=info msg="connecting to shim 2ebc31e48af9528fcfc0419f9bc7f3294ae684654b0ba165378834edb5b2a2cd" address="unix:///run/containerd/s/3e2ee50959ef252d93ad4fb6201b98094cb3d10dc4cbccfc77a46622dbffedba" namespace=k8s.io protocol=ttrpc version=3 May 13 13:01:05.504501 containerd[1580]: time="2025-05-13T13:01:05.504445983Z" level=info msg="connecting to shim bf2ca94d17e251783c1d7d3992f97e0bdd2b4598853c3eefed0048b3c083e220" address="unix:///run/containerd/s/2d1109a3ecb95b3545d5355d5c02587e7e1cf6e30b7440d97a77d5de8e99bd56" namespace=k8s.io protocol=ttrpc version=3 May 13 13:01:05.524976 kubelet[2702]: E0513 13:01:05.524927 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 13:01:05.530112 systemd[1]: Started cri-containerd-2ebc31e48af9528fcfc0419f9bc7f3294ae684654b0ba165378834edb5b2a2cd.scope - libcontainer container 2ebc31e48af9528fcfc0419f9bc7f3294ae684654b0ba165378834edb5b2a2cd. 
May 13 13:01:05.533800 systemd[1]: Started cri-containerd-bf2ca94d17e251783c1d7d3992f97e0bdd2b4598853c3eefed0048b3c083e220.scope - libcontainer container bf2ca94d17e251783c1d7d3992f97e0bdd2b4598853c3eefed0048b3c083e220.
May 13 13:01:05.541578 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 13:01:05.545712 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 13:01:05.580250 containerd[1580]: time="2025-05-13T13:01:05.580198595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-52qft,Uid:0fd8982c-3231-434e-9e26-4b4cf0e387ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ebc31e48af9528fcfc0419f9bc7f3294ae684654b0ba165378834edb5b2a2cd\""
May 13 13:01:05.581195 kubelet[2702]: E0513 13:01:05.581116 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:05.582312 containerd[1580]: time="2025-05-13T13:01:05.582279737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qlxzg,Uid:65f908b7-010b-47ad-b950-aed618c3fb8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf2ca94d17e251783c1d7d3992f97e0bdd2b4598853c3eefed0048b3c083e220\""
May 13 13:01:05.583060 kubelet[2702]: E0513 13:01:05.582783 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:05.583911 containerd[1580]: time="2025-05-13T13:01:05.583855128Z" level=info msg="CreateContainer within sandbox \"2ebc31e48af9528fcfc0419f9bc7f3294ae684654b0ba165378834edb5b2a2cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 13:01:05.584466 containerd[1580]: time="2025-05-13T13:01:05.584406304Z" level=info msg="CreateContainer within sandbox \"bf2ca94d17e251783c1d7d3992f97e0bdd2b4598853c3eefed0048b3c083e220\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 13:01:05.659375 containerd[1580]: time="2025-05-13T13:01:05.659325088Z" level=info msg="Container b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:06.059343 containerd[1580]: time="2025-05-13T13:01:06.059300225Z" level=info msg="CreateContainer within sandbox \"bf2ca94d17e251783c1d7d3992f97e0bdd2b4598853c3eefed0048b3c083e220\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1\""
May 13 13:01:06.059778 containerd[1580]: time="2025-05-13T13:01:06.059754628Z" level=info msg="StartContainer for \"b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1\""
May 13 13:01:06.070275 containerd[1580]: time="2025-05-13T13:01:06.070230055Z" level=info msg="connecting to shim b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1" address="unix:///run/containerd/s/2d1109a3ecb95b3545d5355d5c02587e7e1cf6e30b7440d97a77d5de8e99bd56" protocol=ttrpc version=3
May 13 13:01:06.100082 systemd[1]: Started cri-containerd-b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1.scope - libcontainer container b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1.
May 13 13:01:06.492199 containerd[1580]: time="2025-05-13T13:01:06.492105627Z" level=info msg="Container e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:06.573780 containerd[1580]: time="2025-05-13T13:01:06.573721921Z" level=info msg="StartContainer for \"b8ed3c1bd1f9f37d80a3535122f73baeabf83167835108db82847fb354c94ff1\" returns successfully"
May 13 13:01:06.731395 containerd[1580]: time="2025-05-13T13:01:06.731342672Z" level=info msg="CreateContainer within sandbox \"2ebc31e48af9528fcfc0419f9bc7f3294ae684654b0ba165378834edb5b2a2cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5\""
May 13 13:01:06.731980 containerd[1580]: time="2025-05-13T13:01:06.731785935Z" level=info msg="StartContainer for \"e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5\""
May 13 13:01:06.732674 containerd[1580]: time="2025-05-13T13:01:06.732653255Z" level=info msg="connecting to shim e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5" address="unix:///run/containerd/s/3e2ee50959ef252d93ad4fb6201b98094cb3d10dc4cbccfc77a46622dbffedba" protocol=ttrpc version=3
May 13 13:01:06.757104 systemd[1]: Started cri-containerd-e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5.scope - libcontainer container e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5.
May 13 13:01:06.792831 containerd[1580]: time="2025-05-13T13:01:06.792773580Z" level=info msg="StartContainer for \"e2d69a431e52d7052f73424deae1e217089865f4fcdf9422428c6a17e0a9c8a5\" returns successfully"
May 13 13:01:07.580002 kubelet[2702]: E0513 13:01:07.579448 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:07.580002 kubelet[2702]: E0513 13:01:07.579758 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:07.589479 kubelet[2702]: I0513 13:01:07.589360 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-52qft" podStartSLOduration=30.589343112 podStartE2EDuration="30.589343112s" podCreationTimestamp="2025-05-13 13:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:01:07.589249316 +0000 UTC m=+37.243209157" watchObservedRunningTime="2025-05-13 13:01:07.589343112 +0000 UTC m=+37.243302953"
May 13 13:01:07.601897 kubelet[2702]: I0513 13:01:07.601791 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qlxzg" podStartSLOduration=30.601773539 podStartE2EDuration="30.601773539s" podCreationTimestamp="2025-05-13 13:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:01:07.600838032 +0000 UTC m=+37.254797873" watchObservedRunningTime="2025-05-13 13:01:07.601773539 +0000 UTC m=+37.255733380"
May 13 13:01:07.816497 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:50480.service - OpenSSH per-connection server daemon (10.0.0.1:50480).
May 13 13:01:07.875105 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 50480 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:07.876637 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:07.880830 systemd-logind[1555]: New session 10 of user core.
May 13 13:01:07.886081 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 13:01:08.000342 sshd[4070]: Connection closed by 10.0.0.1 port 50480
May 13 13:01:08.000630 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
May 13 13:01:08.004774 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:50480.service: Deactivated successfully.
May 13 13:01:08.006632 systemd[1]: session-10.scope: Deactivated successfully.
May 13 13:01:08.007478 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit.
May 13 13:01:08.008474 systemd-logind[1555]: Removed session 10.
May 13 13:01:08.581716 kubelet[2702]: E0513 13:01:08.581631 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:08.582199 kubelet[2702]: E0513 13:01:08.581791 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:09.583437 kubelet[2702]: E0513 13:01:09.583407 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:09.583437 kubelet[2702]: E0513 13:01:09.583407 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:13.016324 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:43270.service - OpenSSH per-connection server daemon (10.0.0.1:43270).
May 13 13:01:13.069147 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 43270 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:13.070553 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:13.074827 systemd-logind[1555]: New session 11 of user core.
May 13 13:01:13.083089 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 13:01:13.197188 sshd[4092]: Connection closed by 10.0.0.1 port 43270
May 13 13:01:13.197592 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
May 13 13:01:13.210460 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:43270.service: Deactivated successfully.
May 13 13:01:13.212252 systemd[1]: session-11.scope: Deactivated successfully.
May 13 13:01:13.213055 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit.
May 13 13:01:13.216008 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:43284.service - OpenSSH per-connection server daemon (10.0.0.1:43284).
May 13 13:01:13.216548 systemd-logind[1555]: Removed session 11.
May 13 13:01:13.275685 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 43284 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:13.276994 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:13.281435 systemd-logind[1555]: New session 12 of user core.
May 13 13:01:13.288069 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 13:01:13.428740 sshd[4108]: Connection closed by 10.0.0.1 port 43284
May 13 13:01:13.429354 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
May 13 13:01:13.439346 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:43284.service: Deactivated successfully.
May 13 13:01:13.441862 systemd[1]: session-12.scope: Deactivated successfully.
May 13 13:01:13.443390 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit.
May 13 13:01:13.448147 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:43294.service - OpenSSH per-connection server daemon (10.0.0.1:43294).
May 13 13:01:13.449940 systemd-logind[1555]: Removed session 12.
May 13 13:01:13.496527 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 43294 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:13.497970 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:13.502247 systemd-logind[1555]: New session 13 of user core.
May 13 13:01:13.509085 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 13:01:13.614688 sshd[4121]: Connection closed by 10.0.0.1 port 43294
May 13 13:01:13.614989 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
May 13 13:01:13.619476 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:43294.service: Deactivated successfully.
May 13 13:01:13.621474 systemd[1]: session-13.scope: Deactivated successfully.
May 13 13:01:13.622685 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit.
May 13 13:01:13.623838 systemd-logind[1555]: Removed session 13.
May 13 13:01:18.627119 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:58266.service - OpenSSH per-connection server daemon (10.0.0.1:58266).
May 13 13:01:18.682867 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 58266 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:18.684173 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:18.688694 systemd-logind[1555]: New session 14 of user core.
May 13 13:01:18.701107 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 13:01:18.810180 sshd[4138]: Connection closed by 10.0.0.1 port 58266
May 13 13:01:18.810493 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
May 13 13:01:18.814696 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:58266.service: Deactivated successfully.
May 13 13:01:18.816817 systemd[1]: session-14.scope: Deactivated successfully.
May 13 13:01:18.817648 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit.
May 13 13:01:18.818725 systemd-logind[1555]: Removed session 14.
May 13 13:01:23.835744 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:58272.service - OpenSSH per-connection server daemon (10.0.0.1:58272).
May 13 13:01:23.894354 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 58272 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:23.895910 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:23.900221 systemd-logind[1555]: New session 15 of user core.
May 13 13:01:23.910072 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 13:01:24.017918 sshd[4154]: Connection closed by 10.0.0.1 port 58272
May 13 13:01:24.018348 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
May 13 13:01:24.034489 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:58272.service: Deactivated successfully.
May 13 13:01:24.036475 systemd[1]: session-15.scope: Deactivated successfully.
May 13 13:01:24.037332 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit.
May 13 13:01:24.040637 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:58288.service - OpenSSH per-connection server daemon (10.0.0.1:58288).
May 13 13:01:24.041241 systemd-logind[1555]: Removed session 15.
May 13 13:01:24.089229 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 58288 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:24.090905 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:24.095530 systemd-logind[1555]: New session 16 of user core.
May 13 13:01:24.102090 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 13:01:24.273722 sshd[4170]: Connection closed by 10.0.0.1 port 58288
May 13 13:01:24.274188 sshd-session[4168]: pam_unix(sshd:session): session closed for user core
May 13 13:01:24.287455 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:58288.service: Deactivated successfully.
May 13 13:01:24.289146 systemd[1]: session-16.scope: Deactivated successfully.
May 13 13:01:24.289929 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit.
May 13 13:01:24.292573 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:58304.service - OpenSSH per-connection server daemon (10.0.0.1:58304).
May 13 13:01:24.293352 systemd-logind[1555]: Removed session 16.
May 13 13:01:24.347327 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 58304 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:24.348720 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:24.353387 systemd-logind[1555]: New session 17 of user core.
May 13 13:01:24.368081 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 13:01:25.631698 sshd[4185]: Connection closed by 10.0.0.1 port 58304
May 13 13:01:25.632026 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
May 13 13:01:25.641849 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:58304.service: Deactivated successfully.
May 13 13:01:25.644853 systemd[1]: session-17.scope: Deactivated successfully.
May 13 13:01:25.646621 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit.
May 13 13:01:25.649372 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:58320.service - OpenSSH per-connection server daemon (10.0.0.1:58320).
May 13 13:01:25.651117 systemd-logind[1555]: Removed session 17.
May 13 13:01:25.691887 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 58320 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:25.693100 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:25.697372 systemd-logind[1555]: New session 18 of user core.
May 13 13:01:25.706075 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 13:01:25.901547 sshd[4207]: Connection closed by 10.0.0.1 port 58320
May 13 13:01:25.901750 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
May 13 13:01:25.911783 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:58320.service: Deactivated successfully.
May 13 13:01:25.913531 systemd[1]: session-18.scope: Deactivated successfully.
May 13 13:01:25.914384 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit.
May 13 13:01:25.916865 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:58334.service - OpenSSH per-connection server daemon (10.0.0.1:58334).
May 13 13:01:25.917750 systemd-logind[1555]: Removed session 18.
May 13 13:01:25.971381 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 58334 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:25.972759 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:25.977099 systemd-logind[1555]: New session 19 of user core.
May 13 13:01:25.991206 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 13:01:26.099281 sshd[4221]: Connection closed by 10.0.0.1 port 58334
May 13 13:01:26.099568 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
May 13 13:01:26.103742 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:58334.service: Deactivated successfully.
May 13 13:01:26.105838 systemd[1]: session-19.scope: Deactivated successfully.
May 13 13:01:26.106553 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit.
May 13 13:01:26.107769 systemd-logind[1555]: Removed session 19.
May 13 13:01:31.116463 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:41362.service - OpenSSH per-connection server daemon (10.0.0.1:41362).
May 13 13:01:31.178761 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:31.180141 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:31.184512 systemd-logind[1555]: New session 20 of user core.
May 13 13:01:31.195080 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 13:01:31.300533 sshd[4243]: Connection closed by 10.0.0.1 port 41362
May 13 13:01:31.300845 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
May 13 13:01:31.305105 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:41362.service: Deactivated successfully.
May 13 13:01:31.307060 systemd[1]: session-20.scope: Deactivated successfully.
May 13 13:01:31.307981 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit.
May 13 13:01:31.309309 systemd-logind[1555]: Removed session 20.
May 13 13:01:36.316389 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:41364.service - OpenSSH per-connection server daemon (10.0.0.1:41364).
May 13 13:01:36.365727 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 41364 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:36.366915 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:36.370760 systemd-logind[1555]: New session 21 of user core.
May 13 13:01:36.380064 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 13:01:36.479365 sshd[4258]: Connection closed by 10.0.0.1 port 41364
May 13 13:01:36.479649 sshd-session[4256]: pam_unix(sshd:session): session closed for user core
May 13 13:01:36.482469 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:41364.service: Deactivated successfully.
May 13 13:01:36.484425 systemd[1]: session-21.scope: Deactivated successfully.
May 13 13:01:36.485841 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit.
May 13 13:01:36.487180 systemd-logind[1555]: Removed session 21.
May 13 13:01:41.495379 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:42376.service - OpenSSH per-connection server daemon (10.0.0.1:42376).
May 13 13:01:41.533028 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 42376 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:41.534232 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:41.538526 systemd-logind[1555]: New session 22 of user core.
May 13 13:01:41.546188 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 13:01:41.648683 sshd[4276]: Connection closed by 10.0.0.1 port 42376
May 13 13:01:41.649004 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
May 13 13:01:41.653479 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:42376.service: Deactivated successfully.
May 13 13:01:41.655439 systemd[1]: session-22.scope: Deactivated successfully.
May 13 13:01:41.656406 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit.
May 13 13:01:41.657497 systemd-logind[1555]: Removed session 22.
May 13 13:01:46.661227 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:42386.service - OpenSSH per-connection server daemon (10.0.0.1:42386).
May 13 13:01:46.700693 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 42386 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:46.702349 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:46.706653 systemd-logind[1555]: New session 23 of user core.
May 13 13:01:46.724204 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 13:01:46.828467 sshd[4291]: Connection closed by 10.0.0.1 port 42386
May 13 13:01:46.828994 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
May 13 13:01:46.837544 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:42386.service: Deactivated successfully.
May 13 13:01:46.839251 systemd[1]: session-23.scope: Deactivated successfully.
May 13 13:01:46.840123 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit.
May 13 13:01:46.843395 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:42390.service - OpenSSH per-connection server daemon (10.0.0.1:42390).
May 13 13:01:46.843982 systemd-logind[1555]: Removed session 23.
May 13 13:01:46.896261 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 42390 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:46.898018 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:46.902816 systemd-logind[1555]: New session 24 of user core.
May 13 13:01:46.911141 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 13:01:48.245691 containerd[1580]: time="2025-05-13T13:01:48.245518339Z" level=info msg="StopContainer for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" with timeout 30 (s)"
May 13 13:01:48.251735 containerd[1580]: time="2025-05-13T13:01:48.251706774Z" level=info msg="Stop container \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" with signal terminated"
May 13 13:01:48.264661 systemd[1]: cri-containerd-e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793.scope: Deactivated successfully.
May 13 13:01:48.266718 containerd[1580]: time="2025-05-13T13:01:48.266681573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" id:\"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" pid:3125 exited_at:{seconds:1747141308 nanos:266355368}"
May 13 13:01:48.266797 containerd[1580]: time="2025-05-13T13:01:48.266749934Z" level=info msg="received exit event container_id:\"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" id:\"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" pid:3125 exited_at:{seconds:1747141308 nanos:266355368}"
May 13 13:01:48.272401 containerd[1580]: time="2025-05-13T13:01:48.272369329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" id:\"6c496e0cb9ccaf64833d410e845625e29b858b9ae8fb6e53d3f115660aa443b9\" pid:4328 exited_at:{seconds:1747141308 nanos:272135491}"
May 13 13:01:48.274657 containerd[1580]: time="2025-05-13T13:01:48.274607096Z" level=info msg="StopContainer for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" with timeout 2 (s)"
May 13 13:01:48.275027 containerd[1580]: time="2025-05-13T13:01:48.275001812Z" level=info msg="Stop container \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" with signal terminated"
May 13 13:01:48.281687 systemd-networkd[1495]: lxc_health: Link DOWN
May 13 13:01:48.281696 systemd-networkd[1495]: lxc_health: Lost carrier
May 13 13:01:48.284057 containerd[1580]: time="2025-05-13T13:01:48.283745503Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 13:01:48.288937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793-rootfs.mount: Deactivated successfully.
May 13 13:01:48.298548 systemd[1]: cri-containerd-e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc.scope: Deactivated successfully.
May 13 13:01:48.298899 systemd[1]: cri-containerd-e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc.scope: Consumed 6.253s CPU time, 123.8M memory peak, 156K read from disk, 13.3M written to disk.
May 13 13:01:48.299911 containerd[1580]: time="2025-05-13T13:01:48.299864663Z" level=info msg="received exit event container_id:\"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" id:\"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" pid:3374 exited_at:{seconds:1747141308 nanos:299602972}"
May 13 13:01:48.300079 containerd[1580]: time="2025-05-13T13:01:48.299892326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" id:\"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" pid:3374 exited_at:{seconds:1747141308 nanos:299602972}"
May 13 13:01:48.308087 containerd[1580]: time="2025-05-13T13:01:48.308054643Z" level=info msg="StopContainer for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" returns successfully"
May 13 13:01:48.311580 containerd[1580]: time="2025-05-13T13:01:48.311536915Z" level=info msg="StopPodSandbox for \"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\""
May 13 13:01:48.319823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc-rootfs.mount: Deactivated successfully.
May 13 13:01:48.321372 containerd[1580]: time="2025-05-13T13:01:48.321334825Z" level=info msg="Container to stop \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 13:01:48.329271 systemd[1]: cri-containerd-e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642.scope: Deactivated successfully.
May 13 13:01:48.330152 containerd[1580]: time="2025-05-13T13:01:48.330118893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" id:\"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" pid:2816 exit_status:137 exited_at:{seconds:1747141308 nanos:329171969}"
May 13 13:01:48.355703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642-rootfs.mount: Deactivated successfully.
May 13 13:01:48.364547 containerd[1580]: time="2025-05-13T13:01:48.364504567Z" level=info msg="shim disconnected" id=e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642 namespace=k8s.io
May 13 13:01:48.364547 containerd[1580]: time="2025-05-13T13:01:48.364532572Z" level=warning msg="cleaning up after shim disconnected" id=e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642 namespace=k8s.io
May 13 13:01:48.379162 containerd[1580]: time="2025-05-13T13:01:48.364540026Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 13:01:48.379207 containerd[1580]: time="2025-05-13T13:01:48.365481620Z" level=info msg="StopContainer for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" returns successfully"
May 13 13:01:48.379647 containerd[1580]: time="2025-05-13T13:01:48.379621077Z" level=info msg="StopPodSandbox for \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\""
May 13 13:01:48.379731 containerd[1580]: time="2025-05-13T13:01:48.379713495Z" level=info msg="Container to stop \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 13:01:48.379731 containerd[1580]: time="2025-05-13T13:01:48.379727892Z" level=info msg="Container to stop \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 13:01:48.379781 containerd[1580]: time="2025-05-13T13:01:48.379736007Z" level=info msg="Container to stop \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 13:01:48.379781 containerd[1580]: time="2025-05-13T13:01:48.379756937Z" level=info msg="Container to stop \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 13:01:48.379781 containerd[1580]: time="2025-05-13T13:01:48.379765123Z" level=info msg="Container to stop \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 13:01:48.385822 systemd[1]: cri-containerd-84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c.scope: Deactivated successfully.
May 13 13:01:48.400247 containerd[1580]: time="2025-05-13T13:01:48.400196805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" id:\"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" pid:2896 exit_status:137 exited_at:{seconds:1747141308 nanos:386437505}"
May 13 13:01:48.402285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642-shm.mount: Deactivated successfully.
May 13 13:01:48.406557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c-rootfs.mount: Deactivated successfully.
May 13 13:01:48.410615 containerd[1580]: time="2025-05-13T13:01:48.410577903Z" level=info msg="received exit event sandbox_id:\"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" exit_status:137 exited_at:{seconds:1747141308 nanos:329171969}"
May 13 13:01:48.412090 containerd[1580]: time="2025-05-13T13:01:48.412063468Z" level=info msg="TearDown network for sandbox \"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" successfully"
May 13 13:01:48.412090 containerd[1580]: time="2025-05-13T13:01:48.412086162Z" level=info msg="StopPodSandbox for \"e94de648b8ed4414811db31698637b5c77e186c85e86c7119b1c567f4460d642\" returns successfully"
May 13 13:01:48.414429 containerd[1580]: time="2025-05-13T13:01:48.414409373Z" level=info msg="received exit event sandbox_id:\"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" exit_status:137 exited_at:{seconds:1747141308 nanos:386437505}"
May 13 13:01:48.414969 containerd[1580]: time="2025-05-13T13:01:48.414647569Z" level=info msg="shim disconnected" id=84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c namespace=k8s.io
May 13 13:01:48.414969 containerd[1580]: time="2025-05-13T13:01:48.414673098Z" level=warning msg="cleaning up after shim disconnected" id=84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c namespace=k8s.io
May 13 13:01:48.414969 containerd[1580]: time="2025-05-13T13:01:48.414680672Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 13:01:48.416077 containerd[1580]: time="2025-05-13T13:01:48.416055206Z" level=info msg="TearDown network for sandbox \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" successfully"
May 13 13:01:48.416077 containerd[1580]: time="2025-05-13T13:01:48.416076156Z" level=info msg="StopPodSandbox for \"84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c\" returns successfully"
May 13 13:01:48.572622 kubelet[2702]: I0513 13:01:48.571809 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-hubble-tls\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") "
May 13 13:01:48.572622 kubelet[2702]: I0513 13:01:48.571856 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4be89790-d1c6-4b4d-8215-5932ce70bb39-clustermesh-secrets\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") "
May 13 13:01:48.572622 kubelet[2702]: I0513 13:01:48.571871 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-lib-modules\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") "
May 13 13:01:48.572622 kubelet[2702]: I0513 13:01:48.571884 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-kernel\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") "
May 13 13:01:48.572622 kubelet[2702]: I0513 13:01:48.571901 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cni-path\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") "
May 13 13:01:48.572622 kubelet[2702]: I0513 13:01:48.571913 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-bpf-maps\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") "
May
13 13:01:48.573171 kubelet[2702]: I0513 13:01:48.571926 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-etc-cni-netd\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573171 kubelet[2702]: I0513 13:01:48.571940 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-cgroup\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573171 kubelet[2702]: I0513 13:01:48.571966 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-run\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573171 kubelet[2702]: I0513 13:01:48.571981 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwxq\" (UniqueName: \"kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-kube-api-access-8wwxq\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573171 kubelet[2702]: I0513 13:01:48.571994 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-hostproc\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573171 kubelet[2702]: I0513 13:01:48.572012 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/428387a6-827e-418f-ad6e-a40e0add663b-cilium-config-path\") pod \"428387a6-827e-418f-ad6e-a40e0add663b\" (UID: \"428387a6-827e-418f-ad6e-a40e0add663b\") " May 13 13:01:48.573380 kubelet[2702]: I0513 13:01:48.572028 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-xtables-lock\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573380 kubelet[2702]: I0513 13:01:48.572042 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-config-path\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573380 kubelet[2702]: I0513 13:01:48.572059 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpm4k\" (UniqueName: \"kubernetes.io/projected/428387a6-827e-418f-ad6e-a40e0add663b-kube-api-access-hpm4k\") pod \"428387a6-827e-418f-ad6e-a40e0add663b\" (UID: \"428387a6-827e-418f-ad6e-a40e0add663b\") " May 13 13:01:48.573380 kubelet[2702]: I0513 13:01:48.572074 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-net\") pod \"4be89790-d1c6-4b4d-8215-5932ce70bb39\" (UID: \"4be89790-d1c6-4b4d-8215-5932ce70bb39\") " May 13 13:01:48.573380 kubelet[2702]: I0513 13:01:48.572122 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573517 kubelet[2702]: I0513 13:01:48.572156 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573517 kubelet[2702]: I0513 13:01:48.572167 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573517 kubelet[2702]: I0513 13:01:48.572183 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-hostproc" (OuterVolumeSpecName: "hostproc") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573517 kubelet[2702]: I0513 13:01:48.572238 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573517 kubelet[2702]: I0513 13:01:48.572328 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cni-path" (OuterVolumeSpecName: "cni-path") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573640 kubelet[2702]: I0513 13:01:48.572344 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573640 kubelet[2702]: I0513 13:01:48.572628 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573640 kubelet[2702]: I0513 13:01:48.572672 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.573640 kubelet[2702]: I0513 13:01:48.573451 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 13:01:48.575204 kubelet[2702]: I0513 13:01:48.575182 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/428387a6-827e-418f-ad6e-a40e0add663b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "428387a6-827e-418f-ad6e-a40e0add663b" (UID: "428387a6-827e-418f-ad6e-a40e0add663b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 13:01:48.576706 kubelet[2702]: I0513 13:01:48.576671 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 13:01:48.577509 kubelet[2702]: I0513 13:01:48.577453 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 13:01:48.577768 kubelet[2702]: I0513 13:01:48.577729 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/428387a6-827e-418f-ad6e-a40e0add663b-kube-api-access-hpm4k" (OuterVolumeSpecName: "kube-api-access-hpm4k") pod "428387a6-827e-418f-ad6e-a40e0add663b" (UID: "428387a6-827e-418f-ad6e-a40e0add663b"). InnerVolumeSpecName "kube-api-access-hpm4k". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 13:01:48.577941 kubelet[2702]: I0513 13:01:48.577917 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be89790-d1c6-4b4d-8215-5932ce70bb39-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 13:01:48.578339 kubelet[2702]: I0513 13:01:48.578306 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-kube-api-access-8wwxq" (OuterVolumeSpecName: "kube-api-access-8wwxq") pod "4be89790-d1c6-4b4d-8215-5932ce70bb39" (UID: "4be89790-d1c6-4b4d-8215-5932ce70bb39"). InnerVolumeSpecName "kube-api-access-8wwxq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 13:01:48.654261 kubelet[2702]: I0513 13:01:48.654230 2702 scope.go:117] "RemoveContainer" containerID="e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc" May 13 13:01:48.656413 containerd[1580]: time="2025-05-13T13:01:48.656249099Z" level=info msg="RemoveContainer for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\"" May 13 13:01:48.660017 systemd[1]: Removed slice kubepods-burstable-pod4be89790_d1c6_4b4d_8215_5932ce70bb39.slice - libcontainer container kubepods-burstable-pod4be89790_d1c6_4b4d_8215_5932ce70bb39.slice. 
May 13 13:01:48.660154 systemd[1]: kubepods-burstable-pod4be89790_d1c6_4b4d_8215_5932ce70bb39.slice: Consumed 6.354s CPU time, 124.2M memory peak, 164K read from disk, 13.3M written to disk. May 13 13:01:48.663997 systemd[1]: Removed slice kubepods-besteffort-pod428387a6_827e_418f_ad6e_a40e0add663b.slice - libcontainer container kubepods-besteffort-pod428387a6_827e_418f_ad6e_a40e0add663b.slice. May 13 13:01:48.673123 kubelet[2702]: I0513 13:01:48.673081 2702 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hpm4k\" (UniqueName: \"kubernetes.io/projected/428387a6-827e-418f-ad6e-a40e0add663b-kube-api-access-hpm4k\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673123 kubelet[2702]: I0513 13:01:48.673114 2702 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673123 kubelet[2702]: I0513 13:01:48.673123 2702 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673133 2702 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4be89790-d1c6-4b4d-8215-5932ce70bb39-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673142 2702 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673149 2702 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673157 2702 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673165 2702 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673173 2702 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673180 2702 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673240 kubelet[2702]: I0513 13:01:48.673190 2702 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673424 kubelet[2702]: I0513 13:01:48.673197 2702 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8wwxq\" (UniqueName: \"kubernetes.io/projected/4be89790-d1c6-4b4d-8215-5932ce70bb39-kube-api-access-8wwxq\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673424 kubelet[2702]: I0513 13:01:48.673205 2702 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 
13:01:48.673424 kubelet[2702]: I0513 13:01:48.673213 2702 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/428387a6-827e-418f-ad6e-a40e0add663b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673424 kubelet[2702]: I0513 13:01:48.673220 2702 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4be89790-d1c6-4b4d-8215-5932ce70bb39-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.673424 kubelet[2702]: I0513 13:01:48.673226 2702 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4be89790-d1c6-4b4d-8215-5932ce70bb39-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 13:01:48.743584 containerd[1580]: time="2025-05-13T13:01:48.743537170Z" level=info msg="RemoveContainer for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" returns successfully" May 13 13:01:48.744022 kubelet[2702]: I0513 13:01:48.743979 2702 scope.go:117] "RemoveContainer" containerID="eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a" May 13 13:01:48.745666 containerd[1580]: time="2025-05-13T13:01:48.745632986Z" level=info msg="RemoveContainer for \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\"" May 13 13:01:48.751387 containerd[1580]: time="2025-05-13T13:01:48.751356120Z" level=info msg="RemoveContainer for \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" returns successfully" May 13 13:01:48.751584 kubelet[2702]: I0513 13:01:48.751552 2702 scope.go:117] "RemoveContainer" containerID="598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce" May 13 13:01:48.753489 containerd[1580]: time="2025-05-13T13:01:48.753447797Z" level=info msg="RemoveContainer for \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\"" May 13 13:01:48.758294 containerd[1580]: 
time="2025-05-13T13:01:48.758256229Z" level=info msg="RemoveContainer for \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" returns successfully" May 13 13:01:48.758424 kubelet[2702]: I0513 13:01:48.758401 2702 scope.go:117] "RemoveContainer" containerID="8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4" May 13 13:01:48.759609 containerd[1580]: time="2025-05-13T13:01:48.759585806Z" level=info msg="RemoveContainer for \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\"" May 13 13:01:48.763307 containerd[1580]: time="2025-05-13T13:01:48.763273942Z" level=info msg="RemoveContainer for \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" returns successfully" May 13 13:01:48.763446 kubelet[2702]: I0513 13:01:48.763409 2702 scope.go:117] "RemoveContainer" containerID="cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345" May 13 13:01:48.764543 containerd[1580]: time="2025-05-13T13:01:48.764521101Z" level=info msg="RemoveContainer for \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\"" May 13 13:01:48.767960 containerd[1580]: time="2025-05-13T13:01:48.767922037Z" level=info msg="RemoveContainer for \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" returns successfully" May 13 13:01:48.768086 kubelet[2702]: I0513 13:01:48.768064 2702 scope.go:117] "RemoveContainer" containerID="e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc" May 13 13:01:48.768292 containerd[1580]: time="2025-05-13T13:01:48.768229034Z" level=error msg="ContainerStatus for \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\": not found" May 13 13:01:48.771821 kubelet[2702]: E0513 13:01:48.771799 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\": not found" containerID="e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc" May 13 13:01:48.771893 kubelet[2702]: I0513 13:01:48.771828 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc"} err="failed to get container status \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3bfac208ff0e777d1ebeb17cf0143bc85c252d3e380edc31c29eb51ca4d8bfc\": not found" May 13 13:01:48.771927 kubelet[2702]: I0513 13:01:48.771892 2702 scope.go:117] "RemoveContainer" containerID="eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a" May 13 13:01:48.772102 containerd[1580]: time="2025-05-13T13:01:48.772064883Z" level=error msg="ContainerStatus for \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\": not found" May 13 13:01:48.772188 kubelet[2702]: E0513 13:01:48.772156 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\": not found" containerID="eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a" May 13 13:01:48.772188 kubelet[2702]: I0513 13:01:48.772180 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a"} err="failed to get container status \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"eb967633787d6b0c34cf3bb62356ba94b7c5028602c603edd90b3036e573240a\": not found" May 13 13:01:48.772265 kubelet[2702]: I0513 13:01:48.772195 2702 scope.go:117] "RemoveContainer" containerID="598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce" May 13 13:01:48.772345 containerd[1580]: time="2025-05-13T13:01:48.772318529Z" level=error msg="ContainerStatus for \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\": not found" May 13 13:01:48.772527 kubelet[2702]: E0513 13:01:48.772505 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\": not found" containerID="598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce" May 13 13:01:48.772574 kubelet[2702]: I0513 13:01:48.772531 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce"} err="failed to get container status \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\": rpc error: code = NotFound desc = an error occurred when try to find container \"598cdf9a032ce6f42072af6209730857ed1d54fdc3cd222465c6a44213cb6bce\": not found" May 13 13:01:48.772574 kubelet[2702]: I0513 13:01:48.772547 2702 scope.go:117] "RemoveContainer" containerID="8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4" May 13 13:01:48.772736 containerd[1580]: time="2025-05-13T13:01:48.772706142Z" level=error msg="ContainerStatus for \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\": not found" May 13 13:01:48.772835 kubelet[2702]: E0513 13:01:48.772817 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\": not found" containerID="8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4" May 13 13:01:48.772859 kubelet[2702]: I0513 13:01:48.772839 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4"} err="failed to get container status \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8aec253f253290d81f25bc1109bc39c29132227a0e60db838c26a6ef6e336cb4\": not found" May 13 13:01:48.772859 kubelet[2702]: I0513 13:01:48.772853 2702 scope.go:117] "RemoveContainer" containerID="cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345" May 13 13:01:48.773072 containerd[1580]: time="2025-05-13T13:01:48.773035984Z" level=error msg="ContainerStatus for \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\": not found" May 13 13:01:48.773172 kubelet[2702]: E0513 13:01:48.773147 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\": not found" containerID="cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345" May 13 13:01:48.773235 kubelet[2702]: I0513 13:01:48.773170 2702 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345"} err="failed to get container status \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf598f33c8db9a0ff4c66ea8611a2dd4599bb40b2fa3c30ff0b11dff9a849345\": not found" May 13 13:01:48.773235 kubelet[2702]: I0513 13:01:48.773186 2702 scope.go:117] "RemoveContainer" containerID="e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793" May 13 13:01:48.774365 containerd[1580]: time="2025-05-13T13:01:48.774330884Z" level=info msg="RemoveContainer for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\"" May 13 13:01:48.778073 containerd[1580]: time="2025-05-13T13:01:48.778047735Z" level=info msg="RemoveContainer for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" returns successfully" May 13 13:01:48.779443 kubelet[2702]: I0513 13:01:48.779023 2702 scope.go:117] "RemoveContainer" containerID="e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793" May 13 13:01:48.779443 kubelet[2702]: E0513 13:01:48.779387 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\": not found" containerID="e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793" May 13 13:01:48.779443 kubelet[2702]: I0513 13:01:48.779439 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793"} err="failed to get container status \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\": rpc error: code = NotFound desc = an error occurred when try to find container \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\": not found" May 13 
13:01:48.779649 containerd[1580]: time="2025-05-13T13:01:48.779270257Z" level=error msg="ContainerStatus for \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e66714acd00977c20075a77d49085d789ccbc87285da6e62ff18793e9ce58793\": not found" May 13 13:01:49.288493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84f558a7b2d51cef7bffc5f5ba4c701ccf85dcd17249c715a6a422a6b7f8b01c-shm.mount: Deactivated successfully. May 13 13:01:49.288595 systemd[1]: var-lib-kubelet-pods-4be89790\x2dd1c6\x2d4b4d\x2d8215\x2d5932ce70bb39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8wwxq.mount: Deactivated successfully. May 13 13:01:49.288680 systemd[1]: var-lib-kubelet-pods-428387a6\x2d827e\x2d418f\x2dad6e\x2da40e0add663b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpm4k.mount: Deactivated successfully. May 13 13:01:49.288754 systemd[1]: var-lib-kubelet-pods-4be89790\x2dd1c6\x2d4b4d\x2d8215\x2d5932ce70bb39-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 13:01:49.288828 systemd[1]: var-lib-kubelet-pods-4be89790\x2dd1c6\x2d4b4d\x2d8215\x2d5932ce70bb39-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 13:01:50.206362 sshd[4307]: Connection closed by 10.0.0.1 port 42390 May 13 13:01:50.206857 sshd-session[4305]: pam_unix(sshd:session): session closed for user core May 13 13:01:50.222568 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:42390.service: Deactivated successfully. May 13 13:01:50.224418 systemd[1]: session-24.scope: Deactivated successfully. May 13 13:01:50.225284 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit. May 13 13:01:50.228356 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:49364.service - OpenSSH per-connection server daemon (10.0.0.1:49364). May 13 13:01:50.229186 systemd-logind[1555]: Removed session 24. 
May 13 13:01:50.278697 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 49364 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:50.279991 sshd-session[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:50.284197 systemd-logind[1555]: New session 25 of user core.
May 13 13:01:50.294073 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 13:01:50.435968 kubelet[2702]: I0513 13:01:50.435901 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="428387a6-827e-418f-ad6e-a40e0add663b" path="/var/lib/kubelet/pods/428387a6-827e-418f-ad6e-a40e0add663b/volumes"
May 13 13:01:50.436476 kubelet[2702]: I0513 13:01:50.436461 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" path="/var/lib/kubelet/pods/4be89790-d1c6-4b4d-8215-5932ce70bb39/volumes"
May 13 13:01:50.490770 kubelet[2702]: E0513 13:01:50.490687 2702 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 13:01:50.634003 sshd[4457]: Connection closed by 10.0.0.1 port 49364
May 13 13:01:50.634260 sshd-session[4455]: pam_unix(sshd:session): session closed for user core
May 13 13:01:50.648824 kubelet[2702]: E0513 13:01:50.648783 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="428387a6-827e-418f-ad6e-a40e0add663b" containerName="cilium-operator"
May 13 13:01:50.648824 kubelet[2702]: E0513 13:01:50.648818 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" containerName="mount-cgroup"
May 13 13:01:50.648824 kubelet[2702]: E0513 13:01:50.648827 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" containerName="mount-bpf-fs"
May 13 13:01:50.648824 kubelet[2702]: E0513 13:01:50.648834 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" containerName="clean-cilium-state"
May 13 13:01:50.649544 kubelet[2702]: E0513 13:01:50.648841 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" containerName="apply-sysctl-overwrites"
May 13 13:01:50.649544 kubelet[2702]: E0513 13:01:50.648849 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" containerName="cilium-agent"
May 13 13:01:50.649544 kubelet[2702]: I0513 13:01:50.648877 2702 memory_manager.go:354] "RemoveStaleState removing state" podUID="428387a6-827e-418f-ad6e-a40e0add663b" containerName="cilium-operator"
May 13 13:01:50.649544 kubelet[2702]: I0513 13:01:50.648884 2702 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be89790-d1c6-4b4d-8215-5932ce70bb39" containerName="cilium-agent"
May 13 13:01:50.650711 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:49364.service: Deactivated successfully.
May 13 13:01:50.652489 systemd[1]: session-25.scope: Deactivated successfully.
May 13 13:01:50.659867 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit.
May 13 13:01:50.665191 systemd[1]: Started sshd@25-10.0.0.133:22-10.0.0.1:49374.service - OpenSSH per-connection server daemon (10.0.0.1:49374).
May 13 13:01:50.669399 systemd-logind[1555]: Removed session 25.
May 13 13:01:50.675102 systemd[1]: Created slice kubepods-burstable-pod637c40e5_51af_4258_91e9_1a1ddb91c87a.slice - libcontainer container kubepods-burstable-pod637c40e5_51af_4258_91e9_1a1ddb91c87a.slice.
May 13 13:01:50.717494 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 49374 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:50.719142 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:50.723050 systemd-logind[1555]: New session 26 of user core.
May 13 13:01:50.729080 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 13:01:50.779556 sshd[4471]: Connection closed by 10.0.0.1 port 49374
May 13 13:01:50.779843 sshd-session[4469]: pam_unix(sshd:session): session closed for user core
May 13 13:01:50.782948 kubelet[2702]: I0513 13:01:50.782875 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-bpf-maps\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783038 kubelet[2702]: I0513 13:01:50.782976 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-cilium-cgroup\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783038 kubelet[2702]: I0513 13:01:50.783001 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-lib-modules\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783038 kubelet[2702]: I0513 13:01:50.783017 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/637c40e5-51af-4258-91e9-1a1ddb91c87a-clustermesh-secrets\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783110 kubelet[2702]: I0513 13:01:50.783040 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-cilium-run\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783110 kubelet[2702]: I0513 13:01:50.783059 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77xgv\" (UniqueName: \"kubernetes.io/projected/637c40e5-51af-4258-91e9-1a1ddb91c87a-kube-api-access-77xgv\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783110 kubelet[2702]: I0513 13:01:50.783079 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/637c40e5-51af-4258-91e9-1a1ddb91c87a-cilium-ipsec-secrets\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783110 kubelet[2702]: I0513 13:01:50.783099 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-host-proc-sys-net\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783196 kubelet[2702]: I0513 13:01:50.783121 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-hostproc\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783196 kubelet[2702]: I0513 13:01:50.783173 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/637c40e5-51af-4258-91e9-1a1ddb91c87a-cilium-config-path\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783236 kubelet[2702]: I0513 13:01:50.783217 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-cni-path\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783260 kubelet[2702]: I0513 13:01:50.783240 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/637c40e5-51af-4258-91e9-1a1ddb91c87a-hubble-tls\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783260 kubelet[2702]: I0513 13:01:50.783253 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-etc-cni-netd\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783310 kubelet[2702]: I0513 13:01:50.783272 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-xtables-lock\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.783310 kubelet[2702]: I0513 13:01:50.783285 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/637c40e5-51af-4258-91e9-1a1ddb91c87a-host-proc-sys-kernel\") pod \"cilium-wc6wx\" (UID: \"637c40e5-51af-4258-91e9-1a1ddb91c87a\") " pod="kube-system/cilium-wc6wx"
May 13 13:01:50.790347 systemd[1]: sshd@25-10.0.0.133:22-10.0.0.1:49374.service: Deactivated successfully.
May 13 13:01:50.792411 systemd[1]: session-26.scope: Deactivated successfully.
May 13 13:01:50.793148 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit.
May 13 13:01:50.795658 systemd[1]: Started sshd@26-10.0.0.133:22-10.0.0.1:49390.service - OpenSSH per-connection server daemon (10.0.0.1:49390).
May 13 13:01:50.796263 systemd-logind[1555]: Removed session 26.
May 13 13:01:50.842817 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 49390 ssh2: RSA SHA256:KkL3F8epEKDzqF4GUDsi0vRmecGudNCTOWUWlTFD3Yo
May 13 13:01:50.844051 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 13:01:50.848253 systemd-logind[1555]: New session 27 of user core.
May 13 13:01:50.858069 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 13:01:50.985183 kubelet[2702]: E0513 13:01:50.985146 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:50.985796 containerd[1580]: time="2025-05-13T13:01:50.985756913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wc6wx,Uid:637c40e5-51af-4258-91e9-1a1ddb91c87a,Namespace:kube-system,Attempt:0,}"
May 13 13:01:51.003838 containerd[1580]: time="2025-05-13T13:01:51.003783137Z" level=info msg="connecting to shim dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea" address="unix:///run/containerd/s/dd42570b97870a5ac4dd09aa7d200cfb2dfea9fdfae97af0e8774d5093a185a1" namespace=k8s.io protocol=ttrpc version=3
May 13 13:01:51.024079 systemd[1]: Started cri-containerd-dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea.scope - libcontainer container dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea.
May 13 13:01:51.046564 containerd[1580]: time="2025-05-13T13:01:51.046437686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wc6wx,Uid:637c40e5-51af-4258-91e9-1a1ddb91c87a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\""
May 13 13:01:51.047568 kubelet[2702]: E0513 13:01:51.047516 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:51.049165 containerd[1580]: time="2025-05-13T13:01:51.049130277Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 13:01:51.057447 containerd[1580]: time="2025-05-13T13:01:51.057396318Z" level=info msg="Container 9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:51.066181 containerd[1580]: time="2025-05-13T13:01:51.066141155Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\""
May 13 13:01:51.066550 containerd[1580]: time="2025-05-13T13:01:51.066529938Z" level=info msg="StartContainer for \"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\""
May 13 13:01:51.067546 containerd[1580]: time="2025-05-13T13:01:51.067512427Z" level=info msg="connecting to shim 9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb" address="unix:///run/containerd/s/dd42570b97870a5ac4dd09aa7d200cfb2dfea9fdfae97af0e8774d5093a185a1" protocol=ttrpc version=3
May 13 13:01:51.087198 systemd[1]: Started cri-containerd-9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb.scope - libcontainer container 9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb.
May 13 13:01:51.113508 containerd[1580]: time="2025-05-13T13:01:51.113471026Z" level=info msg="StartContainer for \"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\" returns successfully"
May 13 13:01:51.121137 systemd[1]: cri-containerd-9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb.scope: Deactivated successfully.
May 13 13:01:51.122325 containerd[1580]: time="2025-05-13T13:01:51.122290275Z" level=info msg="received exit event container_id:\"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\" id:\"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\" pid:4552 exited_at:{seconds:1747141311 nanos:122014077}"
May 13 13:01:51.122399 containerd[1580]: time="2025-05-13T13:01:51.122341042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\" id:\"9b3727e5744d930611451015eda54f6eae3fce63ddfa3764656eb3204a95d9bb\" pid:4552 exited_at:{seconds:1747141311 nanos:122014077}"
May 13 13:01:51.673550 kubelet[2702]: E0513 13:01:51.673504 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:51.675148 containerd[1580]: time="2025-05-13T13:01:51.675101549Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 13:01:51.682601 containerd[1580]: time="2025-05-13T13:01:51.682559204Z" level=info msg="Container 17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:51.690468 containerd[1580]: time="2025-05-13T13:01:51.690416133Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\""
May 13 13:01:51.690902 containerd[1580]: time="2025-05-13T13:01:51.690875121Z" level=info msg="StartContainer for \"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\""
May 13 13:01:51.691835 containerd[1580]: time="2025-05-13T13:01:51.691810189Z" level=info msg="connecting to shim 17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572" address="unix:///run/containerd/s/dd42570b97870a5ac4dd09aa7d200cfb2dfea9fdfae97af0e8774d5093a185a1" protocol=ttrpc version=3
May 13 13:01:51.723083 systemd[1]: Started cri-containerd-17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572.scope - libcontainer container 17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572.
May 13 13:01:51.750472 containerd[1580]: time="2025-05-13T13:01:51.750434394Z" level=info msg="StartContainer for \"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\" returns successfully"
May 13 13:01:51.756650 systemd[1]: cri-containerd-17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572.scope: Deactivated successfully.
May 13 13:01:51.757132 containerd[1580]: time="2025-05-13T13:01:51.757031272Z" level=info msg="received exit event container_id:\"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\" id:\"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\" pid:4597 exited_at:{seconds:1747141311 nanos:756788137}"
May 13 13:01:51.757436 containerd[1580]: time="2025-05-13T13:01:51.757391482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\" id:\"17be5018f7a24980f1d8a4a32f7ec00897dc3991d54e2ae6b55d9f12ff06e572\" pid:4597 exited_at:{seconds:1747141311 nanos:756788137}"
May 13 13:01:52.308533 kubelet[2702]: I0513 13:01:52.308489 2702 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T13:01:52Z","lastTransitionTime":"2025-05-13T13:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 13:01:52.677192 kubelet[2702]: E0513 13:01:52.677157 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:52.679481 containerd[1580]: time="2025-05-13T13:01:52.679429221Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 13:01:52.689602 containerd[1580]: time="2025-05-13T13:01:52.689557195Z" level=info msg="Container 7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:52.703495 containerd[1580]: time="2025-05-13T13:01:52.703447520Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\""
May 13 13:01:52.704183 containerd[1580]: time="2025-05-13T13:01:52.703894225Z" level=info msg="StartContainer for \"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\""
May 13 13:01:52.705719 containerd[1580]: time="2025-05-13T13:01:52.705681190Z" level=info msg="connecting to shim 7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124" address="unix:///run/containerd/s/dd42570b97870a5ac4dd09aa7d200cfb2dfea9fdfae97af0e8774d5093a185a1" protocol=ttrpc version=3
May 13 13:01:52.727096 systemd[1]: Started cri-containerd-7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124.scope - libcontainer container 7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124.
May 13 13:01:52.766938 systemd[1]: cri-containerd-7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124.scope: Deactivated successfully.
May 13 13:01:52.767368 containerd[1580]: time="2025-05-13T13:01:52.767338229Z" level=info msg="StartContainer for \"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\" returns successfully"
May 13 13:01:52.768102 containerd[1580]: time="2025-05-13T13:01:52.768031535Z" level=info msg="received exit event container_id:\"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\" id:\"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\" pid:4641 exited_at:{seconds:1747141312 nanos:767604178}"
May 13 13:01:52.768433 containerd[1580]: time="2025-05-13T13:01:52.768196841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\" id:\"7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124\" pid:4641 exited_at:{seconds:1747141312 nanos:767604178}"
May 13 13:01:52.789264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c8346a75e2d39748c9f5bac78c16e14e2e46ca7612030c54d1cbb24d7205124-rootfs.mount: Deactivated successfully.
May 13 13:01:53.681320 kubelet[2702]: E0513 13:01:53.681290 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:53.682866 containerd[1580]: time="2025-05-13T13:01:53.682817629Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 13:01:53.691303 containerd[1580]: time="2025-05-13T13:01:53.691226577Z" level=info msg="Container 6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:53.699009 containerd[1580]: time="2025-05-13T13:01:53.698968432Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\""
May 13 13:01:53.699467 containerd[1580]: time="2025-05-13T13:01:53.699441996Z" level=info msg="StartContainer for \"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\""
May 13 13:01:53.709159 containerd[1580]: time="2025-05-13T13:01:53.709126683Z" level=info msg="connecting to shim 6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6" address="unix:///run/containerd/s/dd42570b97870a5ac4dd09aa7d200cfb2dfea9fdfae97af0e8774d5093a185a1" protocol=ttrpc version=3
May 13 13:01:53.728080 systemd[1]: Started cri-containerd-6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6.scope - libcontainer container 6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6.
May 13 13:01:53.754550 systemd[1]: cri-containerd-6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6.scope: Deactivated successfully.
May 13 13:01:53.755218 containerd[1580]: time="2025-05-13T13:01:53.755180730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\" id:\"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\" pid:4680 exited_at:{seconds:1747141313 nanos:754605160}"
May 13 13:01:53.756373 containerd[1580]: time="2025-05-13T13:01:53.756345415Z" level=info msg="received exit event container_id:\"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\" id:\"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\" pid:4680 exited_at:{seconds:1747141313 nanos:754605160}"
May 13 13:01:53.763300 containerd[1580]: time="2025-05-13T13:01:53.763268385Z" level=info msg="StartContainer for \"6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6\" returns successfully"
May 13 13:01:53.775068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cd8da20faa85d9bfe138c563007e8ea44ed1e32636c7117d781ca0a3ac229b6-rootfs.mount: Deactivated successfully.
May 13 13:01:54.686637 kubelet[2702]: E0513 13:01:54.686600 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:54.696222 containerd[1580]: time="2025-05-13T13:01:54.696183986Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 13:01:54.742899 containerd[1580]: time="2025-05-13T13:01:54.742856624Z" level=info msg="Container 13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81: CDI devices from CRI Config.CDIDevices: []"
May 13 13:01:54.750406 containerd[1580]: time="2025-05-13T13:01:54.750371998Z" level=info msg="CreateContainer within sandbox \"dc2f812786feec89f6a098ab3eccd34ad075d2263d4974f883934e91a050e5ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\""
May 13 13:01:54.750912 containerd[1580]: time="2025-05-13T13:01:54.750862095Z" level=info msg="StartContainer for \"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\""
May 13 13:01:54.753152 containerd[1580]: time="2025-05-13T13:01:54.752928922Z" level=info msg="connecting to shim 13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81" address="unix:///run/containerd/s/dd42570b97870a5ac4dd09aa7d200cfb2dfea9fdfae97af0e8774d5093a185a1" protocol=ttrpc version=3
May 13 13:01:54.787065 systemd[1]: Started cri-containerd-13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81.scope - libcontainer container 13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81.
May 13 13:01:54.821637 containerd[1580]: time="2025-05-13T13:01:54.821553540Z" level=info msg="StartContainer for \"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" returns successfully"
May 13 13:01:54.881966 containerd[1580]: time="2025-05-13T13:01:54.881718209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" id:\"07c28c81d4fdb2550aaeb0696eba7d10f579fa29d1c4824266e8cd29f0b17c5f\" pid:4751 exited_at:{seconds:1747141314 nanos:881398248}"
May 13 13:01:55.230994 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 13 13:01:55.432828 kubelet[2702]: E0513 13:01:55.432785 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:55.691872 kubelet[2702]: E0513 13:01:55.691845 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:56.433262 kubelet[2702]: E0513 13:01:56.433218 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:56.985878 kubelet[2702]: E0513 13:01:56.985841 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:57.227219 containerd[1580]: time="2025-05-13T13:01:57.227119295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" id:\"6cf7ea9b2455a284ff4d2ba0a545f80aeb9de47ea73d7fc3617bae4e6047505a\" pid:4976 exit_status:1 exited_at:{seconds:1747141317 nanos:226405944}"
May 13 13:01:58.174433 systemd-networkd[1495]: lxc_health: Link UP
May 13 13:01:58.176614 systemd-networkd[1495]: lxc_health: Gained carrier
May 13 13:01:58.989833 kubelet[2702]: E0513 13:01:58.989660 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:01:59.009631 kubelet[2702]: I0513 13:01:59.009569 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wc6wx" podStartSLOduration=9.008588446 podStartE2EDuration="9.008588446s" podCreationTimestamp="2025-05-13 13:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 13:01:55.704206954 +0000 UTC m=+85.358166795" watchObservedRunningTime="2025-05-13 13:01:59.008588446 +0000 UTC m=+88.662548287"
May 13 13:01:59.329240 containerd[1580]: time="2025-05-13T13:01:59.329119829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" id:\"132f0bf6bbd452ef0d05a4e37418e74c7d567e6b228f6172abd7b357a9798ee4\" pid:5284 exited_at:{seconds:1747141319 nanos:328411198}"
May 13 13:01:59.699430 kubelet[2702]: E0513 13:01:59.699400 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:02:00.033141 systemd-networkd[1495]: lxc_health: Gained IPv6LL
May 13 13:02:00.701280 kubelet[2702]: E0513 13:02:00.701247 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:02:01.437541 containerd[1580]: time="2025-05-13T13:02:01.437497946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" id:\"e43b7198b576813982c6aca4ad727ff72e59fbaaccbcb6b0313cd064bb009b18\" pid:5317 exited_at:{seconds:1747141321 nanos:437219045}"
May 13 13:02:03.529503 containerd[1580]: time="2025-05-13T13:02:03.529450808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" id:\"037eaa5282a89e3ea0d669fcbce3210bf27ceb73bd3b0da6c93592bf5b5323aa\" pid:5341 exited_at:{seconds:1747141323 nanos:528522332}"
May 13 13:02:04.433536 kubelet[2702]: E0513 13:02:04.433498 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 13:02:05.620314 containerd[1580]: time="2025-05-13T13:02:05.620260885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13f072b50275dedd8b019627c50e5274ce410f7f43c3dd00c7022e3800a08a81\" id:\"566d68e371b6d44a5ae949256e2f1be04f2575d9117f5dd1d8f3dbdc435fb42c\" pid:5365 exited_at:{seconds:1747141325 nanos:619835817}"
May 13 13:02:05.628742 sshd[4480]: Connection closed by 10.0.0.1 port 49390
May 13 13:02:05.629129 sshd-session[4478]: pam_unix(sshd:session): session closed for user core
May 13 13:02:05.633854 systemd[1]: sshd@26-10.0.0.133:22-10.0.0.1:49390.service: Deactivated successfully.
May 13 13:02:05.635902 systemd[1]: session-27.scope: Deactivated successfully.
May 13 13:02:05.636802 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit.
May 13 13:02:05.638070 systemd-logind[1555]: Removed session 27.