May 17 10:21:28.823609 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat May 17 08:43:17 -00 2025
May 17 10:21:28.823632 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7c133b701f4651573a60c1236067561af59c5f220e6e069d5bcb75ac157263bd
May 17 10:21:28.823643 kernel: BIOS-provided physical RAM map:
May 17 10:21:28.823650 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 10:21:28.823656 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 17 10:21:28.823663 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 17 10:21:28.823670 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 17 10:21:28.823677 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 17 10:21:28.823690 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 17 10:21:28.823696 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 17 10:21:28.823703 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 17 10:21:28.823709 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 17 10:21:28.823716 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 17 10:21:28.823723 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 17 10:21:28.823733 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 17 10:21:28.823740 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 17 10:21:28.823750 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 17 10:21:28.823757 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 17 10:21:28.823764 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 17 10:21:28.823771 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 17 10:21:28.823778 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 17 10:21:28.823785 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 17 10:21:28.823792 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 17 10:21:28.823799 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 10:21:28.823806 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 17 10:21:28.823815 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 10:21:28.823822 kernel: NX (Execute Disable) protection: active
May 17 10:21:28.823829 kernel: APIC: Static calls initialized
May 17 10:21:28.823837 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 17 10:21:28.823844 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 17 10:21:28.823851 kernel: extended physical RAM map:
May 17 10:21:28.823858 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 10:21:28.823865 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 17 10:21:28.823872 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 17 10:21:28.823879 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 17 10:21:28.823886 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 17 10:21:28.823896 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 17 10:21:28.823903 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 17 10:21:28.823910 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 17 10:21:28.823917 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 17 10:21:28.824009 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 17 10:21:28.824017 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 17 10:21:28.824026 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 17 10:21:28.824034 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 17 10:21:28.824041 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 17 10:21:28.824049 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 17 10:21:28.824056 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 17 10:21:28.824068 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 17 10:21:28.824075 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 17 10:21:28.824082 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 17 10:21:28.824090 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 17 10:21:28.824099 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 17 10:21:28.824106 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 17 10:21:28.824114 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 17 10:21:28.824121 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 17 10:21:28.824128 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 10:21:28.824136 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 17 10:21:28.824143 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 10:21:28.824153 kernel: efi: EFI v2.7 by EDK II
May 17 10:21:28.824161 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 17 10:21:28.824168 kernel: random: crng init done
May 17 10:21:28.824178 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 17 10:21:28.824185 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 17 10:21:28.824197 kernel: secureboot: Secure boot disabled
May 17 10:21:28.824204 kernel: SMBIOS 2.8 present.
May 17 10:21:28.824211 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 17 10:21:28.824218 kernel: DMI: Memory slots populated: 1/1
May 17 10:21:28.824226 kernel: Hypervisor detected: KVM
May 17 10:21:28.824233 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 10:21:28.824240 kernel: kvm-clock: using sched offset of 4688006005 cycles
May 17 10:21:28.824248 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 10:21:28.824256 kernel: tsc: Detected 2794.746 MHz processor
May 17 10:21:28.824264 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 10:21:28.824274 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 10:21:28.824281 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 17 10:21:28.824289 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 17 10:21:28.824296 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 10:21:28.824304 kernel: Using GB pages for direct mapping
May 17 10:21:28.824311 kernel: ACPI: Early table checksum verification disabled
May 17 10:21:28.824319 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 17 10:21:28.824327 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 17 10:21:28.824334 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 10:21:28.824344 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 10:21:28.824351 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 17 10:21:28.824359 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 10:21:28.824366 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 10:21:28.824374 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 10:21:28.824381 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 10:21:28.824389 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 10:21:28.824403 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 17 10:21:28.824410 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 17 10:21:28.824420 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 17 10:21:28.824428 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 17 10:21:28.824435 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 17 10:21:28.824442 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 17 10:21:28.824450 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 17 10:21:28.824457 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 17 10:21:28.824465 kernel: No NUMA configuration found
May 17 10:21:28.824472 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 17 10:21:28.824480 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 17 10:21:28.824489 kernel: Zone ranges:
May 17 10:21:28.824497 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 10:21:28.824505 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 17 10:21:28.824512 kernel: Normal empty
May 17 10:21:28.824519 kernel: Device empty
May 17 10:21:28.824527 kernel: Movable zone start for each node
May 17 10:21:28.824538 kernel: Early memory node ranges
May 17 10:21:28.824546 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 10:21:28.824553 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 17 10:21:28.824563 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 17 10:21:28.824573 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 17 10:21:28.824580 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 17 10:21:28.824587 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 17 10:21:28.824595 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 17 10:21:28.824602 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 17 10:21:28.824610 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 17 10:21:28.824619 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 10:21:28.824627 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 10:21:28.824643 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 17 10:21:28.824651 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 10:21:28.824658 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 17 10:21:28.824666 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 17 10:21:28.824676 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 17 10:21:28.824684 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 17 10:21:28.824692 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 17 10:21:28.824700 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 10:21:28.824707 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 10:21:28.824717 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 10:21:28.824725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 10:21:28.824733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 10:21:28.824741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 10:21:28.824748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 10:21:28.824756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 10:21:28.824764 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 10:21:28.824772 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 10:21:28.824779 kernel: TSC deadline timer available
May 17 10:21:28.824789 kernel: CPU topo: Max. logical packages: 1
May 17 10:21:28.824797 kernel: CPU topo: Max. logical dies: 1
May 17 10:21:28.824805 kernel: CPU topo: Max. dies per package: 1
May 17 10:21:28.824812 kernel: CPU topo: Max. threads per core: 1
May 17 10:21:28.824820 kernel: CPU topo: Num. cores per package: 4
May 17 10:21:28.824828 kernel: CPU topo: Num. threads per package: 4
May 17 10:21:28.824835 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 17 10:21:28.824843 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 10:21:28.824851 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 10:21:28.824860 kernel: kvm-guest: setup PV sched yield
May 17 10:21:28.824868 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 17 10:21:28.824876 kernel: Booting paravirtualized kernel on KVM
May 17 10:21:28.824884 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 10:21:28.824892 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 17 10:21:28.824899 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 17 10:21:28.824907 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 17 10:21:28.824915 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 10:21:28.824937 kernel: kvm-guest: PV spinlocks enabled
May 17 10:21:28.824948 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 10:21:28.824956 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7c133b701f4651573a60c1236067561af59c5f220e6e069d5bcb75ac157263bd
May 17 10:21:28.824967 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 10:21:28.824975 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 10:21:28.824983 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 10:21:28.824990 kernel: Fallback order for Node 0: 0
May 17 10:21:28.824998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 17 10:21:28.825006 kernel: Policy zone: DMA32
May 17 10:21:28.825016 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 10:21:28.825024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 10:21:28.825032 kernel: ftrace: allocating 40065 entries in 157 pages
May 17 10:21:28.825039 kernel: ftrace: allocated 157 pages with 5 groups
May 17 10:21:28.825047 kernel: Dynamic Preempt: voluntary
May 17 10:21:28.825055 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 10:21:28.825063 kernel: rcu: RCU event tracing is enabled.
May 17 10:21:28.825071 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 10:21:28.825079 kernel: Trampoline variant of Tasks RCU enabled.
May 17 10:21:28.825087 kernel: Rude variant of Tasks RCU enabled.
May 17 10:21:28.825097 kernel: Tracing variant of Tasks RCU enabled.
May 17 10:21:28.825105 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 10:21:28.825115 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 10:21:28.825123 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 10:21:28.825131 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 10:21:28.825139 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 10:21:28.825147 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 10:21:28.825155 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 10:21:28.825162 kernel: Console: colour dummy device 80x25
May 17 10:21:28.825173 kernel: printk: legacy console [ttyS0] enabled
May 17 10:21:28.825180 kernel: ACPI: Core revision 20240827
May 17 10:21:28.825188 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 10:21:28.825196 kernel: APIC: Switch to symmetric I/O mode setup
May 17 10:21:28.825204 kernel: x2apic enabled
May 17 10:21:28.825211 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 10:21:28.825219 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 10:21:28.825227 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 10:21:28.825235 kernel: kvm-guest: setup PV IPIs
May 17 10:21:28.825245 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 10:21:28.825253 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 17 10:21:28.825261 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 17 10:21:28.825269 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 10:21:28.825276 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 10:21:28.825284 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 10:21:28.825292 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 10:21:28.825300 kernel: Spectre V2 : Mitigation: Retpolines
May 17 10:21:28.825307 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 17 10:21:28.825317 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 17 10:21:28.825325 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 10:21:28.825333 kernel: RETBleed: Mitigation: untrained return thunk
May 17 10:21:28.825343 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 10:21:28.825351 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 10:21:28.825359 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 17 10:21:28.825367 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 10:21:28.825375 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 10:21:28.825385 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 10:21:28.825400 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 10:21:28.825407 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 10:21:28.825415 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 10:21:28.825423 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 10:21:28.825432 kernel: Freeing SMP alternatives memory: 32K
May 17 10:21:28.825439 kernel: pid_max: default: 32768 minimum: 301
May 17 10:21:28.825447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 17 10:21:28.825455 kernel: landlock: Up and running.
May 17 10:21:28.825465 kernel: SELinux: Initializing.
May 17 10:21:28.825472 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 10:21:28.825480 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 10:21:28.825488 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 10:21:28.825496 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 10:21:28.825504 kernel: ... version: 0
May 17 10:21:28.825511 kernel: ... bit width: 48
May 17 10:21:28.825519 kernel: ... generic registers: 6
May 17 10:21:28.825527 kernel: ... value mask: 0000ffffffffffff
May 17 10:21:28.825537 kernel: ... max period: 00007fffffffffff
May 17 10:21:28.825545 kernel: ... fixed-purpose events: 0
May 17 10:21:28.825553 kernel: ... event mask: 000000000000003f
May 17 10:21:28.825562 kernel: signal: max sigframe size: 1776
May 17 10:21:28.825571 kernel: rcu: Hierarchical SRCU implementation.
May 17 10:21:28.825580 kernel: rcu: Max phase no-delay instances is 400.
May 17 10:21:28.825591 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 17 10:21:28.825599 kernel: smp: Bringing up secondary CPUs ...
May 17 10:21:28.825606 kernel: smpboot: x86: Booting SMP configuration:
May 17 10:21:28.825616 kernel: .... node #0, CPUs: #1 #2 #3
May 17 10:21:28.825624 kernel: smp: Brought up 1 node, 4 CPUs
May 17 10:21:28.825632 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 17 10:21:28.825640 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54428K init, 2532K bss, 137196K reserved, 0K cma-reserved)
May 17 10:21:28.825648 kernel: devtmpfs: initialized
May 17 10:21:28.825655 kernel: x86/mm: Memory block size: 128MB
May 17 10:21:28.825663 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 17 10:21:28.825671 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 17 10:21:28.825679 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 17 10:21:28.825689 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 17 10:21:28.825697 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 17 10:21:28.825705 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 17 10:21:28.825712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 10:21:28.825720 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 10:21:28.825728 kernel: pinctrl core: initialized pinctrl subsystem
May 17 10:21:28.825736 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 10:21:28.825743 kernel: audit: initializing netlink subsys (disabled)
May 17 10:21:28.825751 kernel: audit: type=2000 audit(1747477285.972:1): state=initialized audit_enabled=0 res=1
May 17 10:21:28.825761 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 10:21:28.825769 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 10:21:28.825776 kernel: cpuidle: using governor menu
May 17 10:21:28.825784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 10:21:28.825792 kernel: dca service started, version 1.12.1
May 17 10:21:28.825800 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 17 10:21:28.825807 kernel: PCI: Using configuration type 1 for base access
May 17 10:21:28.825815 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 10:21:28.825823 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 10:21:28.825833 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 10:21:28.825840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 10:21:28.825848 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 10:21:28.825856 kernel: ACPI: Added _OSI(Module Device)
May 17 10:21:28.825863 kernel: ACPI: Added _OSI(Processor Device)
May 17 10:21:28.825871 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 10:21:28.825879 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 10:21:28.825887 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 10:21:28.825894 kernel: ACPI: Interpreter enabled
May 17 10:21:28.825904 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 10:21:28.825911 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 10:21:28.825919 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 10:21:28.825943 kernel: PCI: Using E820 reservations for host bridge windows
May 17 10:21:28.825951 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 10:21:28.825958 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 10:21:28.826157 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 10:21:28.826283 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 10:21:28.826418 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 10:21:28.826428 kernel: PCI host bridge to bus 0000:00
May 17 10:21:28.826565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 10:21:28.826681 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 10:21:28.826796 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 10:21:28.826905 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 17 10:21:28.827037 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 17 10:21:28.827147 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 17 10:21:28.827256 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 10:21:28.827416 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 17 10:21:28.827551 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 17 10:21:28.827677 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 17 10:21:28.827797 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 17 10:21:28.827920 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 17 10:21:28.828382 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 10:21:28.828573 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 17 10:21:28.828699 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 17 10:21:28.828821 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 17 10:21:28.828960 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 17 10:21:28.829101 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 17 10:21:28.829230 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 17 10:21:28.829352 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 17 10:21:28.829483 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 17 10:21:28.829672 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 17 10:21:28.829898 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 17 10:21:28.830126 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 17 10:21:28.830255 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 17 10:21:28.830376 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 17 10:21:28.830520 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 17 10:21:28.830666 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 10:21:28.830803 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 17 10:21:28.830939 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 17 10:21:28.831063 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 17 10:21:28.831206 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 17 10:21:28.831327 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 17 10:21:28.831338 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 10:21:28.831346 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 10:21:28.831354 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 10:21:28.831362 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 10:21:28.831370 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 10:21:28.831377 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 10:21:28.831389 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 10:21:28.831406 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 10:21:28.831414 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 10:21:28.831422 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 10:21:28.831430 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 10:21:28.831437 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 10:21:28.831445 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 10:21:28.831453 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 10:21:28.831461 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 10:21:28.831470 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 10:21:28.831478 kernel: iommu: Default domain type: Translated
May 17 10:21:28.831486 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 10:21:28.831494 kernel: efivars: Registered efivars operations
May 17 10:21:28.831502 kernel: PCI: Using ACPI for IRQ routing
May 17 10:21:28.831510 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 10:21:28.831517 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 17 10:21:28.831525 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 17 10:21:28.831533 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 17 10:21:28.831543 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 17 10:21:28.831551 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 17 10:21:28.831558 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 17 10:21:28.831566 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 17 10:21:28.831574 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 17 10:21:28.831697 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 10:21:28.831837 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 10:21:28.832005 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 10:21:28.832021 kernel: vgaarb: loaded
May 17 10:21:28.832029 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 10:21:28.832037 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 10:21:28.832045 kernel: clocksource: Switched to clocksource kvm-clock
May 17 10:21:28.832053 kernel: VFS: Disk quotas dquot_6.6.0
May 17 10:21:28.832061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 10:21:28.832069 kernel: pnp: PnP ACPI init
May 17 10:21:28.832278 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 17 10:21:28.832298 kernel: pnp: PnP ACPI: found 6 devices
May 17 10:21:28.832306 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 10:21:28.832314 kernel: NET: Registered PF_INET protocol family
May 17 10:21:28.832322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 10:21:28.832333 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 10:21:28.832341 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 10:21:28.832349 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 10:21:28.832357 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 10:21:28.832365 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 10:21:28.832375 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 10:21:28.832384 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 10:21:28.832402 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 10:21:28.832410 kernel: NET: Registered PF_XDP protocol family
May 17 10:21:28.832537 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 17 10:21:28.832659 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 17 10:21:28.832769 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 10:21:28.832878 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 10:21:28.833016 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 10:21:28.833142 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 17 10:21:28.833258 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 17 10:21:28.833368 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 17 10:21:28.833379 kernel: PCI: CLS 0 bytes, default 64
May 17 10:21:28.833387 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 17 10:21:28.833406 kernel: Initialise system trusted keyrings
May 17 10:21:28.833419 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 10:21:28.833427 kernel: Key type asymmetric registered
May 17 10:21:28.833436 kernel: Asymmetric key parser 'x509' registered
May 17 10:21:28.833444 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 17 10:21:28.833452 kernel: io scheduler mq-deadline registered
May 17 10:21:28.833461 kernel: io scheduler kyber registered
May 17 10:21:28.833469 kernel: io scheduler bfq registered
May 17 10:21:28.833479 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 10:21:28.833488 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 10:21:28.833496 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 10:21:28.833504 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 10:21:28.833513 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 10:21:28.833521 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 10:21:28.833529 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 10:21:28.833537 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 10:21:28.833546 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 10:21:28.833714 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 10:21:28.833727 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 10:21:28.833844 kernel: rtc_cmos 00:04: registered as rtc0
May 17 10:21:28.833986 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T10:21:28 UTC (1747477288)
May 17 10:21:28.834100 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 17 10:21:28.834110 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 10:21:28.834118 kernel: efifb: probing for efifb
May 17 10:21:28.834127 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 17 10:21:28.834139 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 17 10:21:28.834147 kernel: efifb: scrolling: redraw
May 17 10:21:28.834155 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 10:21:28.834163 kernel: Console: switching to colour frame buffer device 160x50
May 17 10:21:28.834172 kernel: fb0: EFI VGA frame buffer device
May 17 10:21:28.834180 kernel: pstore: Using crash dump compression: deflate
May 17 10:21:28.834188 kernel: pstore: Registered efi_pstore as persistent store backend
May 17 10:21:28.834196 kernel: NET: Registered PF_INET6 protocol family
May 17 10:21:28.834204 kernel: Segment Routing with IPv6
May 17 10:21:28.834214 kernel: In-situ OAM (IOAM) with IPv6
May 17 10:21:28.834223 kernel: NET: Registered PF_PACKET protocol family
May 17 10:21:28.834231 kernel: Key type dns_resolver registered
May 17 10:21:28.834250 kernel: IPI shorthand broadcast: enabled
May 17 10:21:28.834260 kernel: sched_clock: Marking stable (3364003371, 193356504)->(3606088667, -48728792)
May 17 10:21:28.834285 kernel: registered taskstats version 1
May 17 10:21:28.834294 kernel: Loading compiled-in X.509 certificates
May 17 10:21:28.834302 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 2cc2dc7fee657ed10992c84454f152cb3c880646'
May 17 10:21:28.834310 kernel: Demotion targets for Node 0: null
May 17 10:21:28.834321 kernel: Key
type .fscrypt registered May 17 10:21:28.834329 kernel: Key type fscrypt-provisioning registered May 17 10:21:28.834338 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 10:21:28.834346 kernel: ima: Allocated hash algorithm: sha1 May 17 10:21:28.834359 kernel: ima: No architecture policies found May 17 10:21:28.834367 kernel: clk: Disabling unused clocks May 17 10:21:28.834375 kernel: Warning: unable to open an initial console. May 17 10:21:28.834384 kernel: Freeing unused kernel image (initmem) memory: 54428K May 17 10:21:28.834400 kernel: Write protecting the kernel read-only data: 24576k May 17 10:21:28.834412 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 17 10:21:28.834420 kernel: Run /init as init process May 17 10:21:28.834428 kernel: with arguments: May 17 10:21:28.834436 kernel: /init May 17 10:21:28.834444 kernel: with environment: May 17 10:21:28.834451 kernel: HOME=/ May 17 10:21:28.834459 kernel: TERM=linux May 17 10:21:28.834467 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 10:21:28.834477 systemd[1]: Successfully made /usr/ read-only. May 17 10:21:28.834490 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 17 10:21:28.834500 systemd[1]: Detected virtualization kvm. May 17 10:21:28.834508 systemd[1]: Detected architecture x86-64. May 17 10:21:28.834517 systemd[1]: Running in initrd. May 17 10:21:28.834525 systemd[1]: No hostname configured, using default hostname. May 17 10:21:28.834534 systemd[1]: Hostname set to . May 17 10:21:28.834543 systemd[1]: Initializing machine ID from VM UUID. May 17 10:21:28.834554 systemd[1]: Queued start job for default target initrd.target. 
May 17 10:21:28.834563 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 10:21:28.834574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 10:21:28.834584 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 10:21:28.834595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 10:21:28.834604 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 10:21:28.834613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 10:21:28.834625 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 10:21:28.834634 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 10:21:28.834643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 10:21:28.834652 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 10:21:28.834661 systemd[1]: Reached target paths.target - Path Units. May 17 10:21:28.834669 systemd[1]: Reached target slices.target - Slice Units. May 17 10:21:28.834678 systemd[1]: Reached target swap.target - Swaps. May 17 10:21:28.834687 systemd[1]: Reached target timers.target - Timer Units. May 17 10:21:28.834697 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 10:21:28.834706 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 10:21:28.834715 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 10:21:28.834724 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 17 10:21:28.834733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 10:21:28.834742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 10:21:28.834750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 10:21:28.834759 systemd[1]: Reached target sockets.target - Socket Units. May 17 10:21:28.834768 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 10:21:28.834779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 10:21:28.834787 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 10:21:28.834797 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 17 10:21:28.834806 systemd[1]: Starting systemd-fsck-usr.service... May 17 10:21:28.834815 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 10:21:28.834823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 10:21:28.834832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 10:21:28.834841 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 10:21:28.834854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 10:21:28.834863 systemd[1]: Finished systemd-fsck-usr.service. May 17 10:21:28.834872 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 10:21:28.834904 systemd-journald[220]: Collecting audit messages is disabled. May 17 10:21:28.834946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 10:21:28.834956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 17 10:21:28.834965 systemd-journald[220]: Journal started May 17 10:21:28.834988 systemd-journald[220]: Runtime Journal (/run/log/journal/f252d4e802684473a41ef6d8e7700265) is 6M, max 48.5M, 42.4M free. May 17 10:21:28.833088 systemd-modules-load[221]: Inserted module 'overlay' May 17 10:21:28.837963 systemd[1]: Started systemd-journald.service - Journal Service. May 17 10:21:28.847070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 10:21:28.850809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 10:21:28.863950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 10:21:28.865857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 10:21:28.869986 kernel: Bridge firewalling registered May 17 10:21:28.866961 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 10:21:28.869848 systemd-modules-load[221]: Inserted module 'br_netfilter' May 17 10:21:28.872670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 10:21:28.874414 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 17 10:21:28.877083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 10:21:28.879527 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 10:21:28.892482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 10:21:28.896189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 10:21:28.910066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 10:21:28.912892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 10:21:28.926505 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7c133b701f4651573a60c1236067561af59c5f220e6e069d5bcb75ac157263bd May 17 10:21:28.965789 systemd-resolved[263]: Positive Trust Anchors: May 17 10:21:28.965803 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 10:21:28.965839 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 10:21:28.968337 systemd-resolved[263]: Defaulting to hostname 'linux'. May 17 10:21:28.969702 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 10:21:28.975799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 10:21:29.039957 kernel: SCSI subsystem initialized May 17 10:21:29.049953 kernel: Loading iSCSI transport class v2.0-870. May 17 10:21:29.060958 kernel: iscsi: registered transport (tcp) May 17 10:21:29.094953 kernel: iscsi: registered transport (qla4xxx) May 17 10:21:29.094980 kernel: QLogic iSCSI HBA Driver May 17 10:21:29.116544 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 17 10:21:29.133444 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 10:21:29.137170 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 10:21:29.214036 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 10:21:29.217562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 10:21:29.272957 kernel: raid6: avx2x4 gen() 29563 MB/s May 17 10:21:29.289946 kernel: raid6: avx2x2 gen() 30769 MB/s May 17 10:21:29.307061 kernel: raid6: avx2x1 gen() 25760 MB/s May 17 10:21:29.307083 kernel: raid6: using algorithm avx2x2 gen() 30769 MB/s May 17 10:21:29.325049 kernel: raid6: .... xor() 19239 MB/s, rmw enabled May 17 10:21:29.325067 kernel: raid6: using avx2x2 recovery algorithm May 17 10:21:29.344949 kernel: xor: automatically using best checksumming function avx May 17 10:21:29.512959 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 10:21:29.523136 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 10:21:29.527026 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 10:21:29.567628 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 17 10:21:29.573035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 10:21:29.576386 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 10:21:29.601856 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation May 17 10:21:29.634783 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 10:21:29.638711 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 10:21:29.713063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 10:21:29.716762 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 17 10:21:29.748953 kernel: cryptd: max_cpu_qlen set to 1000 May 17 10:21:29.758951 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 17 10:21:29.772658 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 10:21:29.772815 kernel: AES CTR mode by8 optimization enabled May 17 10:21:29.772827 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 10:21:29.772837 kernel: GPT:9289727 != 19775487 May 17 10:21:29.772847 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 10:21:29.772857 kernel: GPT:9289727 != 19775487 May 17 10:21:29.772867 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 10:21:29.772877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 10:21:29.783962 kernel: libata version 3.00 loaded. May 17 10:21:29.787097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 10:21:29.787384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 10:21:29.793019 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 10:21:29.799690 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 17 10:21:29.795251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 17 10:21:29.808972 kernel: ahci 0000:00:1f.2: version 3.0 May 17 10:21:29.830790 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 10:21:29.830809 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 17 10:21:29.830997 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 17 10:21:29.831140 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 10:21:29.831283 kernel: scsi host0: ahci May 17 10:21:29.831444 kernel: scsi host1: ahci May 17 10:21:29.831593 kernel: scsi host2: ahci May 17 10:21:29.831763 kernel: scsi host3: ahci May 17 10:21:29.831903 kernel: scsi host4: ahci May 17 10:21:29.832069 kernel: scsi host5: ahci May 17 10:21:29.832215 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 17 10:21:29.832228 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 17 10:21:29.832239 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 17 10:21:29.832250 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 17 10:21:29.832260 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 17 10:21:29.832271 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 17 10:21:29.812448 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 17 10:21:29.814454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 10:21:29.814670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 10:21:29.846977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 17 10:21:29.856361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 10:21:29.866065 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
May 17 10:21:29.872846 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 17 10:21:29.874066 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 17 10:21:29.876660 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 10:21:29.880442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 10:21:29.906728 disk-uuid[634]: Primary Header is updated. May 17 10:21:29.906728 disk-uuid[634]: Secondary Entries is updated. May 17 10:21:29.906728 disk-uuid[634]: Secondary Header is updated. May 17 10:21:29.910957 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 10:21:29.914307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 10:21:29.917988 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 10:21:30.142026 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 10:21:30.142111 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 10:21:30.142123 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 10:21:30.143959 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 10:21:30.143989 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 10:21:30.144959 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 10:21:30.145959 kernel: ata3.00: applying bridge limits May 17 10:21:30.145984 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 10:21:30.146962 kernel: ata3.00: configured for UDMA/100 May 17 10:21:30.148968 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 10:21:30.187975 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 10:21:30.213691 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 10:21:30.213705 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 10:21:30.627102 
systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 10:21:30.627806 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 10:21:30.630457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 10:21:30.630664 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 10:21:30.632246 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 10:21:30.665018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 10:21:30.916973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 10:21:30.917785 disk-uuid[637]: The operation has completed successfully. May 17 10:21:30.956580 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 10:21:30.956715 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 10:21:30.985644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 10:21:31.011905 sh[668]: Success May 17 10:21:31.029952 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 10:21:31.029980 kernel: device-mapper: uevent: version 1.0.3 May 17 10:21:31.031958 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 17 10:21:31.039945 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 17 10:21:31.074122 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 10:21:31.077248 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 10:21:31.103542 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 17 10:21:31.110834 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 17 10:21:31.110892 kernel: BTRFS: device fsid 68d67fdc-db1a-4cd3-9490-455e627e302b devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (680) May 17 10:21:31.113076 kernel: BTRFS info (device dm-0): first mount of filesystem 68d67fdc-db1a-4cd3-9490-455e627e302b May 17 10:21:31.113100 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 10:21:31.113112 kernel: BTRFS info (device dm-0): using free-space-tree May 17 10:21:31.117713 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 10:21:31.119539 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 17 10:21:31.121917 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 10:21:31.123047 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 10:21:31.125578 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 10:21:31.153960 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (711) May 17 10:21:31.154017 kernel: BTRFS info (device vda6): first mount of filesystem dfcb18e1-4b20-4f52-aac0-10c7829dc173 May 17 10:21:31.155473 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 10:21:31.155549 kernel: BTRFS info (device vda6): using free-space-tree May 17 10:21:31.162989 kernel: BTRFS info (device vda6): last unmount of filesystem dfcb18e1-4b20-4f52-aac0-10c7829dc173 May 17 10:21:31.164488 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 10:21:31.165996 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 10:21:31.322489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 17 10:21:31.325242 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 10:21:31.376981 ignition[754]: Ignition 2.21.0 May 17 10:21:31.376996 ignition[754]: Stage: fetch-offline May 17 10:21:31.377046 ignition[754]: no configs at "/usr/lib/ignition/base.d" May 17 10:21:31.377058 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 10:21:31.377141 ignition[754]: parsed url from cmdline: "" May 17 10:21:31.377145 ignition[754]: no config URL provided May 17 10:21:31.377150 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" May 17 10:21:31.377159 ignition[754]: no config at "/usr/lib/ignition/user.ign" May 17 10:21:31.377185 ignition[754]: op(1): [started] loading QEMU firmware config module May 17 10:21:31.377190 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 10:21:31.385969 ignition[754]: op(1): [finished] loading QEMU firmware config module May 17 10:21:31.412110 systemd-networkd[855]: lo: Link UP May 17 10:21:31.412123 systemd-networkd[855]: lo: Gained carrier May 17 10:21:31.414203 systemd-networkd[855]: Enumeration completed May 17 10:21:31.414400 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 10:21:31.414639 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 10:21:31.414644 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 10:21:31.415673 systemd-networkd[855]: eth0: Link UP May 17 10:21:31.415677 systemd-networkd[855]: eth0: Gained carrier May 17 10:21:31.415685 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 10:21:31.418317 systemd[1]: Reached target network.target - Network. 
May 17 10:21:31.438209 ignition[754]: parsing config with SHA512: b19aba94fe159af5923429a5b68732daf852b56ce41abb28fc9a5911abc1e2cdaecc01ea752363b0871a9a2785eac3b1e03a08d1f7522149eb6c8c526eae597d May 17 10:21:31.440983 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 10:21:31.444075 unknown[754]: fetched base config from "system" May 17 10:21:31.444259 unknown[754]: fetched user config from "qemu" May 17 10:21:31.444638 ignition[754]: fetch-offline: fetch-offline passed May 17 10:21:31.444691 ignition[754]: Ignition finished successfully May 17 10:21:31.448524 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 10:21:31.448793 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 10:21:31.449659 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 10:21:31.491915 ignition[862]: Ignition 2.21.0 May 17 10:21:31.491943 ignition[862]: Stage: kargs May 17 10:21:31.492101 ignition[862]: no configs at "/usr/lib/ignition/base.d" May 17 10:21:31.492113 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 10:21:31.493001 ignition[862]: kargs: kargs passed May 17 10:21:31.493049 ignition[862]: Ignition finished successfully May 17 10:21:31.498446 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 10:21:31.501405 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 17 10:21:31.544208 ignition[870]: Ignition 2.21.0
May 17 10:21:31.544223 ignition[870]: Stage: disks
May 17 10:21:31.544385 ignition[870]: no configs at "/usr/lib/ignition/base.d"
May 17 10:21:31.544397 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 10:21:31.546647 ignition[870]: disks: disks passed
May 17 10:21:31.546696 ignition[870]: Ignition finished successfully
May 17 10:21:31.552001 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 10:21:31.552401 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 10:21:31.556219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 10:21:31.558631 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 10:21:31.560747 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 10:21:31.561798 systemd[1]: Reached target basic.target - Basic System.
May 17 10:21:31.564685 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 10:21:31.590696 systemd-resolved[263]: Detected conflict on linux IN A 10.0.0.13
May 17 10:21:31.590712 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
May 17 10:21:31.592215 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 17 10:21:31.600089 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 10:21:31.601151 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 10:21:31.705949 kernel: EXT4-fs (vda9): mounted filesystem 44b0ba68-13ba-4c53-8432-268eaab48ec0 r/w with ordered data mode. Quota mode: none.
May 17 10:21:31.706487 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 10:21:31.707205 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 10:21:31.710409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 10:21:31.711583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 10:21:31.714132 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 10:21:31.714243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 10:21:31.714266 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 10:21:31.747572 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 10:21:31.752949 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (888)
May 17 10:21:31.775864 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 10:21:31.780373 kernel: BTRFS info (device vda6): first mount of filesystem dfcb18e1-4b20-4f52-aac0-10c7829dc173
May 17 10:21:31.780396 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 10:21:31.780407 kernel: BTRFS info (device vda6): using free-space-tree
May 17 10:21:31.783760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 10:21:31.826145 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
May 17 10:21:31.831049 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
May 17 10:21:31.835220 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
May 17 10:21:31.839780 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 10:21:31.951306 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 10:21:31.954590 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 10:21:31.956219 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 10:21:32.061953 kernel: BTRFS info (device vda6): last unmount of filesystem dfcb18e1-4b20-4f52-aac0-10c7829dc173
May 17 10:21:32.076612 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 10:21:32.102271 ignition[1003]: INFO : Ignition 2.21.0
May 17 10:21:32.102271 ignition[1003]: INFO : Stage: mount
May 17 10:21:32.104011 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 10:21:32.104011 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 10:21:32.106787 ignition[1003]: INFO : mount: mount passed
May 17 10:21:32.107581 ignition[1003]: INFO : Ignition finished successfully
May 17 10:21:32.110084 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 10:21:32.111917 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 10:21:32.114015 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 10:21:32.147233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 10:21:32.159874 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1015)
May 17 10:21:32.159902 kernel: BTRFS info (device vda6): first mount of filesystem dfcb18e1-4b20-4f52-aac0-10c7829dc173
May 17 10:21:32.159914 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 10:21:32.160743 kernel: BTRFS info (device vda6): using free-space-tree
May 17 10:21:32.164675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 10:21:32.200563 ignition[1032]: INFO : Ignition 2.21.0
May 17 10:21:32.200563 ignition[1032]: INFO : Stage: files
May 17 10:21:32.202454 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 10:21:32.202454 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 10:21:32.205209 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
May 17 10:21:32.207084 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 10:21:32.207084 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 10:21:32.210037 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 10:21:32.210037 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 10:21:32.213122 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 10:21:32.213122 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 10:21:32.213122 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 10:21:32.210128 unknown[1032]: wrote ssh authorized keys file for user: core
May 17 10:21:32.288351 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 10:21:32.501337 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 10:21:32.501337 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 10:21:32.505356 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 10:21:32.583189 systemd-networkd[855]: eth0: Gained IPv6LL
May 17 10:21:33.066435 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 10:21:33.464836 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 10:21:33.464836 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 10:21:33.469125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 10:21:33.481733 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 10:21:33.481733 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 10:21:33.481733 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 10:21:33.481733 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 10:21:33.481733 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 10:21:33.481733 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 10:21:34.179663 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 10:21:34.545748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 10:21:34.545748 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 17 10:21:34.549910 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 17 10:21:34.551895 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 17 10:21:34.570502 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 10:21:34.575203 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 10:21:34.576798 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 10:21:34.576798 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 17 10:21:34.576798 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 17 10:21:34.576798 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 10:21:34.576798 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 10:21:34.576798 ignition[1032]: INFO : files: files passed
May 17 10:21:34.576798 ignition[1032]: INFO : Ignition finished successfully
May 17 10:21:34.582357 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 10:21:34.583995 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 10:21:34.585211 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 10:21:34.603957 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 10:21:34.604256 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 10:21:34.607273 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
May 17 10:21:34.611350 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 10:21:34.611350 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 10:21:34.614868 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 10:21:34.614519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 10:21:34.616424 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 10:21:34.618800 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 10:21:34.693122 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 10:21:34.693307 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 10:21:34.696258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 10:21:34.698546 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 10:21:34.700764 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 10:21:34.701841 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 10:21:34.738221 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 10:21:34.742844 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 10:21:34.775636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 10:21:34.778051 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 10:21:34.778218 systemd[1]: Stopped target timers.target - Timer Units.
May 17 10:21:34.780532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 10:21:34.780708 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 10:21:34.785539 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 10:21:34.785680 systemd[1]: Stopped target basic.target - Basic System.
May 17 10:21:34.787612 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 10:21:34.787952 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 10:21:34.788467 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 10:21:34.794305 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 17 10:21:34.794633 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 10:21:34.794982 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 10:21:34.795490 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 10:21:34.795816 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 10:21:34.796321 systemd[1]: Stopped target swap.target - Swaps.
May 17 10:21:34.796647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 10:21:34.796816 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 10:21:34.811191 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 10:21:34.811406 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 10:21:34.813732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 10:21:34.815966 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 10:21:34.816992 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 10:21:34.817122 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 10:21:34.821320 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 10:21:34.821435 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 10:21:34.822635 systemd[1]: Stopped target paths.target - Path Units.
May 17 10:21:34.824867 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 10:21:34.827017 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 10:21:34.828151 systemd[1]: Stopped target slices.target - Slice Units.
May 17 10:21:34.830733 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 10:21:34.833410 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 10:21:34.833554 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 10:21:34.835697 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 10:21:34.835787 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 10:21:34.837402 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 10:21:34.837562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 10:21:34.839209 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 10:21:34.839332 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 10:21:34.847168 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 10:21:34.850290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 10:21:34.851372 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 10:21:34.851503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 10:21:34.854064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 10:21:34.854213 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 10:21:34.859975 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 10:21:34.860114 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 10:21:34.873625 ignition[1087]: INFO : Ignition 2.21.0
May 17 10:21:34.874760 ignition[1087]: INFO : Stage: umount
May 17 10:21:34.874760 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 10:21:34.874760 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 10:21:34.879104 ignition[1087]: INFO : umount: umount passed
May 17 10:21:34.879104 ignition[1087]: INFO : Ignition finished successfully
May 17 10:21:34.878885 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 10:21:34.879610 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 10:21:34.879737 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 10:21:34.880902 systemd[1]: Stopped target network.target - Network.
May 17 10:21:34.883452 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 10:21:34.883521 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 10:21:34.884578 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 10:21:34.884627 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 10:21:34.886901 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 10:21:34.886976 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 10:21:34.890022 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 10:21:34.890076 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 10:21:34.891497 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 10:21:34.894766 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 10:21:34.902383 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 10:21:34.902531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 10:21:34.907979 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 17 10:21:34.908655 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 10:21:34.908722 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 10:21:34.914315 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 17 10:21:34.914648 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 10:21:34.914799 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 10:21:34.918790 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 17 10:21:34.919412 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 17 10:21:34.920551 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 10:21:34.920608 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 10:21:34.923403 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 10:21:34.925219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 10:21:34.925291 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 10:21:34.926450 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 10:21:34.926506 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 10:21:34.933635 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 10:21:34.933744 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 10:21:34.935803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 10:21:34.940297 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 10:21:34.965138 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 10:21:34.965368 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 10:21:34.969446 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 10:21:34.969584 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 10:21:34.971278 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 10:21:34.971365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 10:21:34.972454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 10:21:34.972502 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 10:21:34.972755 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 10:21:34.972820 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 10:21:34.973674 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 10:21:34.973729 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 10:21:34.979328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 10:21:34.979391 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 10:21:34.987163 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 10:21:34.987315 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 17 10:21:34.987396 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 17 10:21:34.992173 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 10:21:34.992247 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 10:21:34.995784 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 17 10:21:34.995834 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 10:21:34.999317 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 10:21:34.999368 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 10:21:35.001988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 10:21:35.002037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 10:21:35.020404 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 10:21:35.020572 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 10:21:35.045152 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 10:21:35.045362 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 10:21:35.047674 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 10:21:35.049305 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 10:21:35.049379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 10:21:35.053502 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 10:21:35.083081 systemd[1]: Switching root.
May 17 10:21:35.122792 systemd-journald[220]: Journal stopped
May 17 10:21:36.398564 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 17 10:21:36.398659 kernel: SELinux: policy capability network_peer_controls=1
May 17 10:21:36.398674 kernel: SELinux: policy capability open_perms=1
May 17 10:21:36.398691 kernel: SELinux: policy capability extended_socket_class=1
May 17 10:21:36.398705 kernel: SELinux: policy capability always_check_network=0
May 17 10:21:36.398721 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 10:21:36.398737 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 10:21:36.398748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 10:21:36.398760 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 10:21:36.398771 kernel: SELinux: policy capability userspace_initial_context=0
May 17 10:21:36.398782 kernel: audit: type=1403 audit(1747477295.534:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 10:21:36.398795 systemd[1]: Successfully loaded SELinux policy in 49.279ms.
May 17 10:21:36.398815 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.128ms.
May 17 10:21:36.398831 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 17 10:21:36.398848 systemd[1]: Detected virtualization kvm.
May 17 10:21:36.398860 systemd[1]: Detected architecture x86-64.
May 17 10:21:36.398872 systemd[1]: Detected first boot.
May 17 10:21:36.398884 systemd[1]: Initializing machine ID from VM UUID.
May 17 10:21:36.398897 zram_generator::config[1134]: No configuration found.
May 17 10:21:36.398914 kernel: Guest personality initialized and is inactive
May 17 10:21:36.398938 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 17 10:21:36.398952 kernel: Initialized host personality
May 17 10:21:36.398963 kernel: NET: Registered PF_VSOCK protocol family
May 17 10:21:36.398975 systemd[1]: Populated /etc with preset unit settings.
May 17 10:21:36.398988 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 17 10:21:36.399000 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 10:21:36.399012 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 10:21:36.399025 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 10:21:36.399038 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 10:21:36.399052 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 10:21:36.399067 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 10:21:36.399079 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 10:21:36.399092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 10:21:36.399104 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 10:21:36.399116 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 10:21:36.399129 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 10:21:36.399141 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 10:21:36.399153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 10:21:36.399165 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 10:21:36.399180 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 10:21:36.399193 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 10:21:36.399205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 10:21:36.399217 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 10:21:36.399238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 10:21:36.399251 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 10:21:36.399276 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 10:21:36.399295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 10:21:36.399308 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 10:21:36.399325 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 10:21:36.399339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 10:21:36.399356 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 10:21:36.399376 systemd[1]: Reached target slices.target - Slice Units.
May 17 10:21:36.399388 systemd[1]: Reached target swap.target - Swaps.
May 17 10:21:36.399400 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 10:21:36.399413 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 10:21:36.399425 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 17 10:21:36.399441 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 10:21:36.399465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 10:21:36.399483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 10:21:36.399495 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 10:21:36.399508 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 10:21:36.399520 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 10:21:36.399549 systemd[1]: Mounting media.mount - External Media Directory...
May 17 10:21:36.399572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 10:21:36.399591 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 10:21:36.399609 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 10:21:36.399639 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 10:21:36.399667 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 10:21:36.399697 systemd[1]: Reached target machines.target - Containers.
May 17 10:21:36.399713 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 10:21:36.399726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 10:21:36.399738 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 10:21:36.399769 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 10:21:36.399796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 10:21:36.399811 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 10:21:36.399823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 10:21:36.399835 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 10:21:36.399847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 10:21:36.399862 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 10:21:36.399876 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 10:21:36.399889 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 10:21:36.399904 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 10:21:36.399937 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 10:21:36.399968 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 17 10:21:36.399984 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 10:21:36.399997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 10:21:36.400010 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 10:21:36.400022 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 10:21:36.400037 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 17 10:21:36.400059 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 10:21:36.400073 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 10:21:36.400088 systemd[1]: Stopped verity-setup.service.
May 17 10:21:36.400108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 10:21:36.400120 kernel: fuse: init (API version 7.41)
May 17 10:21:36.400135 kernel: ACPI: bus type drm_connector registered
May 17 10:21:36.400147 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 10:21:36.400159 kernel: loop: module loaded
May 17 10:21:36.400170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 10:21:36.400182 systemd[1]: Mounted media.mount - External Media Directory.
May 17 10:21:36.400194 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 10:21:36.400209 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 10:21:36.400228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 10:21:36.400243 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 10:21:36.400255 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 10:21:36.400290 systemd-journald[1216]: Collecting audit messages is disabled.
May 17 10:21:36.400317 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 10:21:36.400330 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 10:21:36.400343 systemd-journald[1216]: Journal started
May 17 10:21:36.400367 systemd-journald[1216]: Runtime Journal (/run/log/journal/f252d4e802684473a41ef6d8e7700265) is 6M, max 48.5M, 42.4M free.
May 17 10:21:36.098512 systemd[1]: Queued start job for default target multi-user.target.
May 17 10:21:36.125262 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 10:21:36.125782 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 10:21:36.401970 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 10:21:36.404160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 10:21:36.404402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 10:21:36.405839 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 10:21:36.406072 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 10:21:36.408166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 10:21:36.408467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 10:21:36.410024 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 10:21:36.410248 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 10:21:36.411653 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 10:21:36.411873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 10:21:36.413464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 10:21:36.414948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 10:21:36.416865 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 10:21:36.418545 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 17 10:21:36.435009 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 10:21:36.437915 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 10:21:36.440353 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 10:21:36.441827 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 10:21:36.444121 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 10:21:36.444121 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 17 10:21:36.449695 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 10:21:36.451274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 10:21:36.453404 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 10:21:36.456082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 10:21:36.458525 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 10:21:36.459859 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 10:21:36.461325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 10:21:36.469396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 10:21:36.472419 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 10:21:36.475461 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 10:21:36.478675 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 10:21:36.480233 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 10:21:36.485162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 10:21:36.492653 systemd-journald[1216]: Time spent on flushing to /var/log/journal/f252d4e802684473a41ef6d8e7700265 is 22.376ms for 1074 entries. May 17 10:21:36.492653 systemd-journald[1216]: System Journal (/var/log/journal/f252d4e802684473a41ef6d8e7700265) is 8M, max 195.6M, 187.6M free. May 17 10:21:36.525387 systemd-journald[1216]: Received client request to flush runtime journal. 
May 17 10:21:36.525435 kernel: loop0: detected capacity change from 0 to 113872 May 17 10:21:36.494392 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 10:21:36.497174 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 10:21:36.502263 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 17 10:21:36.514080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 10:21:36.523557 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 17 10:21:36.523571 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. May 17 10:21:36.527966 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 10:21:36.535235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 10:21:36.539283 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 10:21:36.573584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 10:21:36.580687 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 17 10:21:36.598091 kernel: loop1: detected capacity change from 0 to 146240 May 17 10:21:36.618946 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 10:21:36.622989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 10:21:36.630429 kernel: loop2: detected capacity change from 0 to 224512 May 17 10:21:36.680249 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. May 17 10:21:36.680849 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. May 17 10:21:36.680949 kernel: loop3: detected capacity change from 0 to 113872 May 17 10:21:36.687953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 17 10:21:36.692963 kernel: loop4: detected capacity change from 0 to 146240 May 17 10:21:36.711981 kernel: loop5: detected capacity change from 0 to 224512 May 17 10:21:36.720850 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 17 10:21:36.721547 (sd-merge)[1279]: Merged extensions into '/usr'. May 17 10:21:36.726583 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... May 17 10:21:36.726607 systemd[1]: Reloading... May 17 10:21:36.829967 zram_generator::config[1304]: No configuration found. May 17 10:21:36.997059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 10:21:37.020586 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 10:21:37.087128 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 10:21:37.087593 systemd[1]: Reloading finished in 360 ms. May 17 10:21:37.117575 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 10:21:37.121752 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 10:21:37.135879 systemd[1]: Starting ensure-sysext.service... May 17 10:21:37.138273 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 10:21:37.161143 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... May 17 10:21:37.161159 systemd[1]: Reloading... May 17 10:21:37.181379 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 17 10:21:37.181438 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
May 17 10:21:37.181860 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 10:21:37.182365 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 10:21:37.183623 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 10:21:37.184031 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. May 17 10:21:37.184137 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. May 17 10:21:37.189359 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. May 17 10:21:37.189376 systemd-tmpfiles[1344]: Skipping /boot May 17 10:21:37.213415 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. May 17 10:21:37.213434 systemd-tmpfiles[1344]: Skipping /boot May 17 10:21:37.222958 zram_generator::config[1371]: No configuration found. May 17 10:21:37.342407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 10:21:37.431057 systemd[1]: Reloading finished in 269 ms. May 17 10:21:37.448283 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 10:21:37.475634 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 10:21:37.484602 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 17 10:21:37.487341 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 10:21:37.489713 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 10:21:37.498029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 10:21:37.501159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 10:21:37.506769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 10:21:37.512274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 10:21:37.512447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 10:21:37.514497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 10:21:37.517214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 10:21:37.520417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 10:21:37.521997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 10:21:37.522154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 17 10:21:37.524647 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 10:21:37.525870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 10:21:37.534520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 10:21:37.535169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 10:21:37.537496 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 10:21:37.539709 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 17 10:21:37.544398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 10:21:37.544889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 10:21:37.547773 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 10:21:37.548095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 10:21:37.556810 systemd-udevd[1414]: Using default interface naming scheme 'v255'. May 17 10:21:37.559797 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 10:21:37.560073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 10:21:37.562112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 10:21:37.565010 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 10:21:37.568266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 10:21:37.573606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 10:21:37.575162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 10:21:37.575328 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 17 10:21:37.583379 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 10:21:37.584526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 10:21:37.586688 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 10:21:37.587734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 10:21:37.589514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 10:21:37.591007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 10:21:37.593173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 10:21:37.593460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 10:21:37.595817 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 10:21:37.597028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 10:21:37.598807 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 10:21:37.599114 augenrules[1449]: No rules May 17 10:21:37.600630 systemd[1]: audit-rules.service: Deactivated successfully. May 17 10:21:37.601055 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 17 10:21:37.602700 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 10:21:37.613345 systemd[1]: Finished ensure-sysext.service. May 17 10:21:37.614657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 10:21:37.616540 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 10:21:37.643805 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 10:21:37.644989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 10:21:37.645089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 10:21:37.649106 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 17 10:21:37.650476 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 10:21:37.727895 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 10:21:37.749093 systemd-resolved[1413]: Positive Trust Anchors: May 17 10:21:37.749114 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 10:21:37.749156 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 10:21:37.762229 systemd-resolved[1413]: Defaulting to hostname 'linux'. May 17 10:21:37.764270 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 10:21:37.765751 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 17 10:21:37.865951 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 10:21:37.867943 kernel: mousedev: PS/2 mouse device common for all mice May 17 10:21:37.900951 kernel: ACPI: button: Power Button [PWRF] May 17 10:21:37.909000 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 17 10:21:37.909370 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 10:21:37.909555 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 10:21:37.998700 systemd-networkd[1492]: lo: Link UP May 17 10:21:37.998715 systemd-networkd[1492]: lo: Gained carrier May 17 10:21:38.002875 systemd-networkd[1492]: Enumeration completed May 17 10:21:38.003409 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 10:21:38.003415 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 10:21:38.004307 systemd-networkd[1492]: eth0: Link UP May 17 10:21:38.004467 systemd-networkd[1492]: eth0: Gained carrier May 17 10:21:38.004482 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 10:21:38.005622 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 10:21:38.007076 systemd[1]: Reached target network.target - Network. May 17 10:21:38.010876 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 17 10:21:38.014134 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 10:21:38.026695 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 10:21:38.039544 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
May 17 10:21:38.041198 systemd[1]: Reached target sysinit.target - System Initialization. May 17 10:21:38.042519 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 10:21:38.043919 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 10:21:38.045369 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 17 10:21:38.046973 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 10:21:38.048727 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 10:21:38.048758 systemd[1]: Reached target paths.target - Path Units. May 17 10:21:38.050741 systemd[1]: Reached target time-set.target - System Time Set. May 17 10:21:38.052323 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 10:21:38.053783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 10:21:38.055348 systemd[1]: Reached target timers.target - Timer Units. May 17 10:21:38.057564 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 10:21:38.844989 systemd-timesyncd[1494]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 10:21:38.845057 systemd-timesyncd[1494]: Initial clock synchronization to Sat 2025-05-17 10:21:38.844896 UTC. May 17 10:21:38.847452 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 10:21:38.849723 systemd-resolved[1413]: Clock change detected. Flushing caches. May 17 10:21:38.855430 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 17 10:21:38.859006 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
May 17 10:21:38.860549 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 17 10:21:38.870553 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 10:21:38.872321 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 17 10:21:38.874525 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 10:21:38.879770 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 10:21:38.883309 systemd[1]: Reached target sockets.target - Socket Units. May 17 10:21:38.884429 systemd[1]: Reached target basic.target - Basic System. May 17 10:21:38.885902 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 10:21:38.885942 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 10:21:38.887123 systemd[1]: Starting containerd.service - containerd container runtime... May 17 10:21:38.893011 kernel: kvm_amd: TSC scaling supported May 17 10:21:38.893058 kernel: kvm_amd: Nested Virtualization enabled May 17 10:21:38.893072 kernel: kvm_amd: Nested Paging enabled May 17 10:21:38.893084 kernel: kvm_amd: LBR virtualization supported May 17 10:21:38.893724 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 17 10:21:38.893747 kernel: kvm_amd: Virtual GIF supported May 17 10:21:38.893174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 10:21:38.898066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 10:21:38.900610 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 10:21:38.904651 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 10:21:38.916768 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 17 10:21:38.920689 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 17 10:21:38.923322 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 10:21:38.927240 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 10:21:38.929152 jq[1531]: false May 17 10:21:38.931383 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 10:21:38.937850 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 10:21:38.941790 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 10:21:38.957051 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 10:21:38.957806 extend-filesystems[1532]: Found loop3 May 17 10:21:38.966683 extend-filesystems[1532]: Found loop4 May 17 10:21:38.966683 extend-filesystems[1532]: Found loop5 May 17 10:21:38.966683 extend-filesystems[1532]: Found sr0 May 17 10:21:38.966683 extend-filesystems[1532]: Found vda May 17 10:21:38.966683 extend-filesystems[1532]: Found vda1 May 17 10:21:38.966683 extend-filesystems[1532]: Found vda2 May 17 10:21:38.966683 extend-filesystems[1532]: Found vda3 May 17 10:21:38.966683 extend-filesystems[1532]: Found usr May 17 10:21:38.966683 extend-filesystems[1532]: Found vda4 May 17 10:21:38.966683 extend-filesystems[1532]: Found vda6 May 17 10:21:38.966683 extend-filesystems[1532]: Found vda7 May 17 10:21:38.966683 extend-filesystems[1532]: Found vda9 May 17 10:21:38.966683 extend-filesystems[1532]: Checking size of /dev/vda9 May 17 10:21:38.959231 oslogin_cache_refresh[1533]: Refreshing passwd entry cache May 17 10:21:38.979671 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache May 17 10:21:38.979671 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting May 17 10:21:38.979671 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 17 10:21:38.979671 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache May 17 10:21:38.967085 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 10:21:38.972586 oslogin_cache_refresh[1533]: Failure getting users, quitting May 17 10:21:38.968574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 10:21:38.972606 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 17 10:21:38.970091 systemd[1]: Starting update-engine.service - Update Engine... May 17 10:21:38.972653 oslogin_cache_refresh[1533]: Refreshing group entry cache May 17 10:21:38.975616 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 10:21:38.977538 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 17 10:21:38.982150 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 10:21:38.984394 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting May 17 10:21:38.984394 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 17 10:21:38.983296 oslogin_cache_refresh[1533]: Failure getting groups, quitting May 17 10:21:38.983309 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 17 10:21:38.984601 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 10:21:38.985052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 10:21:38.986003 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 17 10:21:38.989820 extend-filesystems[1532]: Resized partition /dev/vda9 May 17 10:21:38.992281 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 17 10:21:38.994191 systemd[1]: motdgen.service: Deactivated successfully. May 17 10:21:38.994825 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 10:21:38.995837 jq[1552]: true May 17 10:21:38.998181 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 10:21:38.999716 extend-filesystems[1557]: resize2fs 1.47.2 (1-Jan-2025) May 17 10:21:38.999740 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 10:21:39.002380 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 10:21:39.010441 update_engine[1549]: I20250517 10:21:39.010357 1549 main.cc:92] Flatcar Update Engine starting May 17 10:21:39.015303 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 10:21:39.021628 jq[1559]: true May 17 10:21:39.031550 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 10:21:39.064511 tar[1558]: linux-amd64/LICENSE May 17 10:21:39.065692 tar[1558]: linux-amd64/helm May 17 10:21:39.074016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 17 10:21:39.210102 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 10:21:39.154268 systemd-logind[1545]: Watching system buttons on /dev/input/event2 (Power Button) May 17 10:21:39.241373 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 10:21:39.241645 update_engine[1549]: I20250517 10:21:39.230407 1549 update_check_scheduler.cc:74] Next update check in 6m32s May 17 10:21:39.220105 dbus-daemon[1529]: [system] SELinux support is enabled May 17 10:21:39.154295 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 10:21:39.201554 systemd-logind[1545]: New seat seat0. May 17 10:21:39.203441 systemd[1]: Started systemd-logind.service - User Login Management. May 17 10:21:39.216761 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 10:21:39.225647 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 10:21:39.227767 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 10:21:39.231172 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 10:21:39.231195 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 10:21:39.234278 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 10:21:39.234297 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 17 10:21:39.257468 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 10:21:39.257468 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 10:21:39.257468 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 10:21:39.262080 extend-filesystems[1532]: Resized filesystem in /dev/vda9 May 17 10:21:39.262193 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 10:21:39.269190 bash[1598]: Updated "/home/core/.ssh/authorized_keys" May 17 10:21:39.272516 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 10:21:39.273114 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 10:21:39.275219 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 10:21:39.278437 systemd[1]: Started update-engine.service - Update Engine. May 17 10:21:39.281201 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 10:21:39.283517 kernel: EDAC MC: Ver: 3.0.0 May 17 10:21:39.284687 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 10:21:39.289809 systemd[1]: issuegen.service: Deactivated successfully. May 17 10:21:39.290974 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 10:21:39.301963 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 10:21:39.353061 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 10:21:39.359838 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 10:21:39.362100 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 10:21:39.362322 systemd[1]: Reached target getty.target - Login Prompts. 
May 17 10:21:39.363472 locksmithd[1605]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 10:21:39.400589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 10:21:39.505171 containerd[1560]: time="2025-05-17T10:21:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 17 10:21:39.509103 containerd[1560]: time="2025-05-17T10:21:39.509004815Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 17 10:21:39.539883 containerd[1560]: time="2025-05-17T10:21:39.539792877Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.518µs"
May 17 10:21:39.539883 containerd[1560]: time="2025-05-17T10:21:39.539860854Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 17 10:21:39.539883 containerd[1560]: time="2025-05-17T10:21:39.539888476Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 17 10:21:39.540220 containerd[1560]: time="2025-05-17T10:21:39.540196043Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 17 10:21:39.540271 containerd[1560]: time="2025-05-17T10:21:39.540220389Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 17 10:21:39.540271 containerd[1560]: time="2025-05-17T10:21:39.540250836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 17 10:21:39.540386 containerd[1560]: time="2025-05-17T10:21:39.540329113Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 17 10:21:39.540386 containerd[1560]: time="2025-05-17T10:21:39.540350373Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 17 10:21:39.540769 containerd[1560]: time="2025-05-17T10:21:39.540730656Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 17 10:21:39.540769 containerd[1560]: time="2025-05-17T10:21:39.540758368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 17 10:21:39.540769 containerd[1560]: time="2025-05-17T10:21:39.540770341Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 17 10:21:39.540865 containerd[1560]: time="2025-05-17T10:21:39.540777685Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 17 10:21:39.540935 containerd[1560]: time="2025-05-17T10:21:39.540911866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 17 10:21:39.541220 containerd[1560]: time="2025-05-17T10:21:39.541192072Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 17 10:21:39.541257 containerd[1560]: time="2025-05-17T10:21:39.541231286Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 17 10:21:39.541257 containerd[1560]: time="2025-05-17T10:21:39.541242387Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 17 10:21:39.541322 containerd[1560]: time="2025-05-17T10:21:39.541291228Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 17 10:21:39.541831 containerd[1560]: time="2025-05-17T10:21:39.541754588Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 17 10:21:39.542080 containerd[1560]: time="2025-05-17T10:21:39.542042959Z" level=info msg="metadata content store policy set" policy=shared
May 17 10:21:39.578632 containerd[1560]: time="2025-05-17T10:21:39.578547337Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 17 10:21:39.578632 containerd[1560]: time="2025-05-17T10:21:39.578650911Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578672862Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578690125Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578753874Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578790824Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578805822Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578818095Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 17 10:21:39.578881 containerd[1560]: time="2025-05-17T10:21:39.578840607Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 17 10:21:39.579055 containerd[1560]: time="2025-05-17T10:21:39.578903665Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 17 10:21:39.579055 containerd[1560]: time="2025-05-17T10:21:39.578918674Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 17 10:21:39.579055 containerd[1560]: time="2025-05-17T10:21:39.578956765Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 17 10:21:39.579249 containerd[1560]: time="2025-05-17T10:21:39.579217154Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 17 10:21:39.579274 containerd[1560]: time="2025-05-17T10:21:39.579249314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 17 10:21:39.579274 containerd[1560]: time="2025-05-17T10:21:39.579267278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 17 10:21:39.579323 containerd[1560]: time="2025-05-17T10:21:39.579281184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 17 10:21:39.579323 containerd[1560]: time="2025-05-17T10:21:39.579292495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 17 10:21:39.579323 containerd[1560]: time="2025-05-17T10:21:39.579304528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 17 10:21:39.579323 containerd[1560]: time="2025-05-17T10:21:39.579315809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 17 10:21:39.579404 containerd[1560]: time="2025-05-17T10:21:39.579328923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 17 10:21:39.579404 containerd[1560]: time="2025-05-17T10:21:39.579340585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 17 10:21:39.579404 containerd[1560]: time="2025-05-17T10:21:39.579364009Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 17 10:21:39.579404 containerd[1560]: time="2025-05-17T10:21:39.579374469Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 17 10:21:39.579475 containerd[1560]: time="2025-05-17T10:21:39.579454870Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 17 10:21:39.579475 containerd[1560]: time="2025-05-17T10:21:39.579469597Z" level=info msg="Start snapshots syncer"
May 17 10:21:39.579549 containerd[1560]: time="2025-05-17T10:21:39.579532495Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 17 10:21:39.579855 containerd[1560]: time="2025-05-17T10:21:39.579808193Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 17 10:21:39.580027 containerd[1560]: time="2025-05-17T10:21:39.579868115Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 17 10:21:39.590116 containerd[1560]: time="2025-05-17T10:21:39.590002391Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 17 10:21:39.590478 containerd[1560]: time="2025-05-17T10:21:39.590444040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 17 10:21:39.590561 containerd[1560]: time="2025-05-17T10:21:39.590513380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 17 10:21:39.590588 containerd[1560]: time="2025-05-17T10:21:39.590576428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 17 10:21:39.590608 containerd[1560]: time="2025-05-17T10:21:39.590594112Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 17 10:21:39.590627 containerd[1560]: time="2025-05-17T10:21:39.590617546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 17 10:21:39.590649 containerd[1560]: time="2025-05-17T10:21:39.590632814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 17 10:21:39.590670 containerd[1560]: time="2025-05-17T10:21:39.590651930Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 17 10:21:39.590751 containerd[1560]: time="2025-05-17T10:21:39.590720268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 17 10:21:39.590788 containerd[1560]: time="2025-05-17T10:21:39.590767798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 17 10:21:39.590810 containerd[1560]: time="2025-05-17T10:21:39.590793556Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 17 10:21:39.592347 containerd[1560]: time="2025-05-17T10:21:39.592301525Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 17 10:21:39.592394 containerd[1560]: time="2025-05-17T10:21:39.592350978Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 17 10:21:39.592394 containerd[1560]: time="2025-05-17T10:21:39.592375474Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 17 10:21:39.592394 containerd[1560]: time="2025-05-17T10:21:39.592390783Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 17 10:21:39.592466 containerd[1560]: time="2025-05-17T10:21:39.592399930Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 17 10:21:39.592466 containerd[1560]: time="2025-05-17T10:21:39.592409248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 17 10:21:39.592466 containerd[1560]: time="2025-05-17T10:21:39.592422543Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 17 10:21:39.592466 containerd[1560]: time="2025-05-17T10:21:39.592448652Z" level=info msg="runtime interface created"
May 17 10:21:39.592466 containerd[1560]: time="2025-05-17T10:21:39.592454573Z" level=info msg="created NRI interface"
May 17 10:21:39.592466 containerd[1560]: time="2025-05-17T10:21:39.592462948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 17 10:21:39.592596 containerd[1560]: time="2025-05-17T10:21:39.592482946Z" level=info msg="Connect containerd service"
May 17 10:21:39.592596 containerd[1560]: time="2025-05-17T10:21:39.592532299Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 10:21:39.595288 containerd[1560]: time="2025-05-17T10:21:39.595248455Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 10:21:39.820184 containerd[1560]: time="2025-05-17T10:21:39.819953408Z" level=info msg="Start subscribing containerd event"
May 17 10:21:39.820184 containerd[1560]: time="2025-05-17T10:21:39.820073233Z" level=info msg="Start recovering state"
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820197316Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820251467Z" level=info msg="Start event monitor"
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820279239Z" level=info msg="Start cni network conf syncer for default"
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820310227Z" level=info msg="Start streaming server"
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820324134Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820339412Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820414142Z" level=info msg="runtime interface starting up..."
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820419943Z" level=info msg="starting plugins..."
May 17 10:21:39.820479 containerd[1560]: time="2025-05-17T10:21:39.820455029Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 17 10:21:39.820974 containerd[1560]: time="2025-05-17T10:21:39.820666917Z" level=info msg="containerd successfully booted in 0.316149s"
May 17 10:21:39.820962 systemd[1]: Started containerd.service - containerd container runtime.
May 17 10:21:40.008767 tar[1558]: linux-amd64/README.md
May 17 10:21:40.072663 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 10:21:40.726783 systemd-networkd[1492]: eth0: Gained IPv6LL
May 17 10:21:40.730416 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 10:21:40.732758 systemd[1]: Reached target network-online.target - Network is Online.
May 17 10:21:40.736060 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 17 10:21:40.738649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 10:21:40.740954 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 10:21:40.770693 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 10:21:40.774300 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 17 10:21:40.774599 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 17 10:21:40.776338 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 10:21:42.201086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 10:21:42.203283 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 10:21:42.205605 systemd[1]: Startup finished in 3.437s (kernel) + 6.892s (initrd) + 5.935s (userspace) = 16.265s.
May 17 10:21:42.219966 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 10:21:42.855009 kubelet[1669]: E0517 10:21:42.854918 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 10:21:42.859885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 10:21:42.860224 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 10:21:42.860716 systemd[1]: kubelet.service: Consumed 1.913s CPU time, 265.9M memory peak.
May 17 10:21:43.256988 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 17 10:21:43.258303 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:47322.service - OpenSSH per-connection server daemon (10.0.0.1:47322).
May 17 10:21:43.336755 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 47322 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:43.338468 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:43.344933 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 17 10:21:43.346118 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 17 10:21:43.352511 systemd-logind[1545]: New session 1 of user core.
May 17 10:21:43.368811 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 17 10:21:43.371844 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 17 10:21:43.391615 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 10:21:43.393856 systemd-logind[1545]: New session c1 of user core.
May 17 10:21:43.546664 systemd[1687]: Queued start job for default target default.target.
May 17 10:21:43.570947 systemd[1687]: Created slice app.slice - User Application Slice.
May 17 10:21:43.570978 systemd[1687]: Reached target paths.target - Paths.
May 17 10:21:43.571032 systemd[1687]: Reached target timers.target - Timers.
May 17 10:21:43.572709 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 17 10:21:43.585793 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 17 10:21:43.585984 systemd[1687]: Reached target sockets.target - Sockets.
May 17 10:21:43.586052 systemd[1687]: Reached target basic.target - Basic System.
May 17 10:21:43.586112 systemd[1687]: Reached target default.target - Main User Target.
May 17 10:21:43.586164 systemd[1687]: Startup finished in 184ms.
May 17 10:21:43.586372 systemd[1]: Started user@500.service - User Manager for UID 500.
May 17 10:21:43.588153 systemd[1]: Started session-1.scope - Session 1 of User core.
May 17 10:21:43.658841 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:47324.service - OpenSSH per-connection server daemon (10.0.0.1:47324).
May 17 10:21:43.703271 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 47324 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:43.705012 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:43.709516 systemd-logind[1545]: New session 2 of user core.
May 17 10:21:43.724669 systemd[1]: Started session-2.scope - Session 2 of User core.
May 17 10:21:43.778312 sshd[1700]: Connection closed by 10.0.0.1 port 47324
May 17 10:21:43.778745 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
May 17 10:21:43.787005 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:47324.service: Deactivated successfully.
May 17 10:21:43.789156 systemd[1]: session-2.scope: Deactivated successfully.
May 17 10:21:43.789952 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit.
May 17 10:21:43.793468 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:47340.service - OpenSSH per-connection server daemon (10.0.0.1:47340).
May 17 10:21:43.794292 systemd-logind[1545]: Removed session 2.
May 17 10:21:43.851075 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 47340 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:43.852345 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:43.856965 systemd-logind[1545]: New session 3 of user core.
May 17 10:21:43.865637 systemd[1]: Started session-3.scope - Session 3 of User core.
May 17 10:21:43.914368 sshd[1708]: Connection closed by 10.0.0.1 port 47340
May 17 10:21:43.914678 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
May 17 10:21:43.925052 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:47340.service: Deactivated successfully.
May 17 10:21:43.926807 systemd[1]: session-3.scope: Deactivated successfully.
May 17 10:21:43.927599 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit.
May 17 10:21:43.930423 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:47346.service - OpenSSH per-connection server daemon (10.0.0.1:47346).
May 17 10:21:43.931060 systemd-logind[1545]: Removed session 3.
May 17 10:21:43.992818 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 47346 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:43.994607 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:43.999219 systemd-logind[1545]: New session 4 of user core.
May 17 10:21:44.008650 systemd[1]: Started session-4.scope - Session 4 of User core.
May 17 10:21:44.062997 sshd[1716]: Connection closed by 10.0.0.1 port 47346
May 17 10:21:44.063295 sshd-session[1714]: pam_unix(sshd:session): session closed for user core
May 17 10:21:44.074064 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:47346.service: Deactivated successfully.
May 17 10:21:44.075812 systemd[1]: session-4.scope: Deactivated successfully.
May 17 10:21:44.076699 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit.
May 17 10:21:44.079733 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:47352.service - OpenSSH per-connection server daemon (10.0.0.1:47352).
May 17 10:21:44.080562 systemd-logind[1545]: Removed session 4.
May 17 10:21:44.148289 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:44.150714 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:44.155794 systemd-logind[1545]: New session 5 of user core.
May 17 10:21:44.183745 systemd[1]: Started session-5.scope - Session 5 of User core.
May 17 10:21:44.240911 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 10:21:44.241217 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 10:21:44.254217 sudo[1725]: pam_unix(sudo:session): session closed for user root
May 17 10:21:44.255790 sshd[1724]: Connection closed by 10.0.0.1 port 47352
May 17 10:21:44.256170 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
May 17 10:21:44.266821 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:47352.service: Deactivated successfully.
May 17 10:21:44.268402 systemd[1]: session-5.scope: Deactivated successfully.
May 17 10:21:44.269203 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit.
May 17 10:21:44.272196 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:47362.service - OpenSSH per-connection server daemon (10.0.0.1:47362).
May 17 10:21:44.272787 systemd-logind[1545]: Removed session 5.
May 17 10:21:44.324637 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 47362 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:44.326066 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:44.330129 systemd-logind[1545]: New session 6 of user core.
May 17 10:21:44.341612 systemd[1]: Started session-6.scope - Session 6 of User core.
May 17 10:21:44.394180 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 10:21:44.394482 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 10:21:44.401730 sudo[1736]: pam_unix(sudo:session): session closed for user root
May 17 10:21:44.408089 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 17 10:21:44.408382 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 10:21:44.417869 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 17 10:21:44.467500 augenrules[1758]: No rules
May 17 10:21:44.469326 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 10:21:44.469646 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 17 10:21:44.470856 sudo[1735]: pam_unix(sudo:session): session closed for user root
May 17 10:21:44.472683 sshd[1734]: Connection closed by 10.0.0.1 port 47362
May 17 10:21:44.473035 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
May 17 10:21:44.485241 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:47362.service: Deactivated successfully.
May 17 10:21:44.487127 systemd[1]: session-6.scope: Deactivated successfully.
May 17 10:21:44.487834 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit.
May 17 10:21:44.491013 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:47364.service - OpenSSH per-connection server daemon (10.0.0.1:47364).
May 17 10:21:44.491560 systemd-logind[1545]: Removed session 6.
May 17 10:21:44.558002 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 47364 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:21:44.559432 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:21:44.563867 systemd-logind[1545]: New session 7 of user core.
May 17 10:21:44.574653 systemd[1]: Started session-7.scope - Session 7 of User core.
May 17 10:21:44.626589 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 10:21:44.626986 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 10:21:45.137303 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 17 10:21:45.159848 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 17 10:21:45.788923 dockerd[1790]: time="2025-05-17T10:21:45.788850706Z" level=info msg="Starting up"
May 17 10:21:45.789814 dockerd[1790]: time="2025-05-17T10:21:45.789788345Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 17 10:21:46.090253 dockerd[1790]: time="2025-05-17T10:21:46.090105008Z" level=info msg="Loading containers: start."
May 17 10:21:46.100542 kernel: Initializing XFRM netlink socket
May 17 10:21:46.367955 systemd-networkd[1492]: docker0: Link UP
May 17 10:21:46.373395 dockerd[1790]: time="2025-05-17T10:21:46.373347903Z" level=info msg="Loading containers: done."
May 17 10:21:46.389930 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck148968007-merged.mount: Deactivated successfully.
May 17 10:21:46.391414 dockerd[1790]: time="2025-05-17T10:21:46.391342929Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 10:21:46.391556 dockerd[1790]: time="2025-05-17T10:21:46.391470168Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 17 10:21:46.391682 dockerd[1790]: time="2025-05-17T10:21:46.391648813Z" level=info msg="Initializing buildkit"
May 17 10:21:46.424863 dockerd[1790]: time="2025-05-17T10:21:46.424811911Z" level=info msg="Completed buildkit initialization"
May 17 10:21:46.431604 dockerd[1790]: time="2025-05-17T10:21:46.431538233Z" level=info msg="Daemon has completed initialization"
May 17 10:21:46.431725 dockerd[1790]: time="2025-05-17T10:21:46.431679428Z" level=info msg="API listen on /run/docker.sock"
May 17 10:21:46.431837 systemd[1]: Started docker.service - Docker Application Container Engine.
May 17 10:21:47.618516 containerd[1560]: time="2025-05-17T10:21:47.618442193Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 17 10:21:48.478322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107375591.mount: Deactivated successfully.
May 17 10:21:49.623669 containerd[1560]: time="2025-05-17T10:21:49.623597152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:49.624484 containerd[1560]: time="2025-05-17T10:21:49.624449512Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 17 10:21:49.625828 containerd[1560]: time="2025-05-17T10:21:49.625790688Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:49.628692 containerd[1560]: time="2025-05-17T10:21:49.628656506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:49.629662 containerd[1560]: time="2025-05-17T10:21:49.629613321Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.011125874s"
May 17 10:21:49.629727 containerd[1560]: time="2025-05-17T10:21:49.629670719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 17 10:21:49.630456 containerd[1560]: time="2025-05-17T10:21:49.630406249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 17 10:21:50.973980 containerd[1560]: time="2025-05-17T10:21:50.973895170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:50.974843 containerd[1560]: time="2025-05-17T10:21:50.974784589Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 17 10:21:50.977074 containerd[1560]: time="2025-05-17T10:21:50.977033168Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:50.980052 containerd[1560]: time="2025-05-17T10:21:50.979998081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:50.980825 containerd[1560]: time="2025-05-17T10:21:50.980775280Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.350337291s"
May 17 10:21:50.980825 containerd[1560]: time="2025-05-17T10:21:50.980810245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 17 10:21:50.981376 containerd[1560]: time="2025-05-17T10:21:50.981349788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 17 10:21:52.173670 containerd[1560]: time="2025-05-17T10:21:52.173592806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:52.174387 containerd[1560]: time="2025-05-17T10:21:52.174327966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 17 10:21:52.175835 containerd[1560]: time="2025-05-17T10:21:52.175756336Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:52.178525 containerd[1560]: time="2025-05-17T10:21:52.178448838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:21:52.179390 containerd[1560]: time="2025-05-17T10:21:52.179344849Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.197966368s"
May 17 10:21:52.179390 containerd[1560]: time="2025-05-17T10:21:52.179376238Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 17 10:21:52.179899 containerd[1560]: time="2025-05-17T10:21:52.179874353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 17 10:21:53.110761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 10:21:53.113015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 10:21:53.121415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4210083309.mount: Deactivated successfully.
May 17 10:21:53.394242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 10:21:53.409901 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 10:21:53.580697 kubelet[2075]: E0517 10:21:53.580587 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 10:21:53.588149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 10:21:53.588359 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 10:21:53.588801 systemd[1]: kubelet.service: Consumed 414ms CPU time, 110.9M memory peak. May 17 10:21:54.010857 containerd[1560]: time="2025-05-17T10:21:54.010784134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:54.011647 containerd[1560]: time="2025-05-17T10:21:54.011575569Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 17 10:21:54.012953 containerd[1560]: time="2025-05-17T10:21:54.012915012Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:54.014964 containerd[1560]: time="2025-05-17T10:21:54.014924914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:54.015571 containerd[1560]: time="2025-05-17T10:21:54.015534347Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id 
\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.835630319s" May 17 10:21:54.015612 containerd[1560]: time="2025-05-17T10:21:54.015576136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 10:21:54.016242 containerd[1560]: time="2025-05-17T10:21:54.016216698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 10:21:54.534781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339981056.mount: Deactivated successfully. May 17 10:21:55.393644 containerd[1560]: time="2025-05-17T10:21:55.393572587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:55.403203 containerd[1560]: time="2025-05-17T10:21:55.403145911Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 10:21:55.422139 containerd[1560]: time="2025-05-17T10:21:55.422059681Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:55.425306 containerd[1560]: time="2025-05-17T10:21:55.425249136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:55.426673 containerd[1560]: time="2025-05-17T10:21:55.426632782Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.4103868s" May 17 10:21:55.426761 containerd[1560]: time="2025-05-17T10:21:55.426671886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 10:21:55.427343 containerd[1560]: time="2025-05-17T10:21:55.427193985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 10:21:55.862274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498229564.mount: Deactivated successfully. May 17 10:21:55.868472 containerd[1560]: time="2025-05-17T10:21:55.868425435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 10:21:55.869107 containerd[1560]: time="2025-05-17T10:21:55.869072559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 10:21:55.870173 containerd[1560]: time="2025-05-17T10:21:55.870134261Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 10:21:55.872182 containerd[1560]: time="2025-05-17T10:21:55.872132731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 10:21:55.872744 containerd[1560]: time="2025-05-17T10:21:55.872691970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 445.462748ms" May 17 10:21:55.872744 containerd[1560]: time="2025-05-17T10:21:55.872727407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 10:21:55.873282 containerd[1560]: time="2025-05-17T10:21:55.873248865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 10:21:56.409393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182778401.mount: Deactivated successfully. May 17 10:21:57.938983 containerd[1560]: time="2025-05-17T10:21:57.938898829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:57.940033 containerd[1560]: time="2025-05-17T10:21:57.939963918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 17 10:21:57.941366 containerd[1560]: time="2025-05-17T10:21:57.941317247Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:57.944160 containerd[1560]: time="2025-05-17T10:21:57.944108305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:21:57.945398 containerd[1560]: time="2025-05-17T10:21:57.945362999Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.072081933s" May 17 10:21:57.945398 containerd[1560]: time="2025-05-17T10:21:57.945399147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 10:22:00.368197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:22:00.368378 systemd[1]: kubelet.service: Consumed 414ms CPU time, 110.9M memory peak. May 17 10:22:00.370672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:22:00.398018 systemd[1]: Reload requested from client PID 2229 ('systemctl') (unit session-7.scope)... May 17 10:22:00.398038 systemd[1]: Reloading... May 17 10:22:00.488563 zram_generator::config[2268]: No configuration found. May 17 10:22:00.605849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 10:22:00.727646 systemd[1]: Reloading finished in 329 ms. May 17 10:22:00.809200 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 10:22:00.809317 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 10:22:00.809636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:22:00.809680 systemd[1]: kubelet.service: Consumed 178ms CPU time, 98.3M memory peak. May 17 10:22:00.811362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:22:01.011924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 10:22:01.023887 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 10:22:01.078367 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:22:01.078367 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 10:22:01.078367 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:22:01.078849 kubelet[2319]: I0517 10:22:01.078431 2319 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 10:22:01.592906 kubelet[2319]: I0517 10:22:01.592844 2319 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 10:22:01.592906 kubelet[2319]: I0517 10:22:01.592884 2319 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 10:22:01.593188 kubelet[2319]: I0517 10:22:01.593165 2319 server.go:954] "Client rotation is on, will bootstrap in background" May 17 10:22:01.626408 kubelet[2319]: I0517 10:22:01.626352 2319 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 10:22:01.637210 kubelet[2319]: E0517 10:22:01.637156 2319 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:01.641593 kubelet[2319]: I0517 10:22:01.641541 2319 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 17 10:22:01.648449 kubelet[2319]: I0517 10:22:01.648370 2319 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 10:22:01.651072 kubelet[2319]: I0517 10:22:01.650988 2319 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 10:22:01.651316 kubelet[2319]: I0517 10:22:01.651041 2319 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyO
ptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 10:22:01.651316 kubelet[2319]: I0517 10:22:01.651282 2319 topology_manager.go:138] "Creating topology manager with none policy" May 17 10:22:01.651316 kubelet[2319]: I0517 10:22:01.651293 2319 container_manager_linux.go:304] "Creating device plugin manager" May 17 10:22:01.651625 kubelet[2319]: I0517 10:22:01.651481 2319 state_mem.go:36] "Initialized new in-memory state store" May 17 10:22:01.656821 kubelet[2319]: I0517 10:22:01.656790 2319 kubelet.go:446] "Attempting to sync node with API server" May 17 10:22:01.656821 kubelet[2319]: I0517 10:22:01.656823 2319 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 10:22:01.656907 kubelet[2319]: I0517 10:22:01.656857 2319 kubelet.go:352] "Adding apiserver pod source" May 17 10:22:01.656907 kubelet[2319]: I0517 10:22:01.656873 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 10:22:01.661389 kubelet[2319]: I0517 10:22:01.661357 2319 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 17 10:22:01.662007 kubelet[2319]: I0517 10:22:01.661969 2319 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 10:22:01.662860 kubelet[2319]: W0517 10:22:01.662820 2319 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 10:22:01.663990 kubelet[2319]: W0517 10:22:01.663938 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 17 10:22:01.664033 kubelet[2319]: E0517 10:22:01.663992 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:01.664628 kubelet[2319]: W0517 10:22:01.664536 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 17 10:22:01.664628 kubelet[2319]: E0517 10:22:01.664630 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:01.665423 kubelet[2319]: I0517 10:22:01.665375 2319 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 10:22:01.665423 kubelet[2319]: I0517 10:22:01.665429 2319 server.go:1287] "Started kubelet" May 17 10:22:01.665743 kubelet[2319]: I0517 10:22:01.665691 2319 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 10:22:01.699872 kubelet[2319]: I0517 10:22:01.699745 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 10:22:01.700286 kubelet[2319]: I0517 10:22:01.700256 
2319 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 10:22:01.701761 kubelet[2319]: I0517 10:22:01.701714 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 10:22:01.703912 kubelet[2319]: I0517 10:22:01.703180 2319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 10:22:01.704401 kubelet[2319]: I0517 10:22:01.704367 2319 server.go:479] "Adding debug handlers to kubelet server" May 17 10:22:01.706379 kubelet[2319]: E0517 10:22:01.705761 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 10:22:01.706379 kubelet[2319]: I0517 10:22:01.705802 2319 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 10:22:01.706379 kubelet[2319]: I0517 10:22:01.705895 2319 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 10:22:01.706379 kubelet[2319]: I0517 10:22:01.705937 2319 reconciler.go:26] "Reconciler: start to sync state" May 17 10:22:01.706379 kubelet[2319]: W0517 10:22:01.706276 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 17 10:22:01.706379 kubelet[2319]: E0517 10:22:01.706317 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:01.706656 kubelet[2319]: I0517 10:22:01.706471 2319 factory.go:221] Registration of the systemd container factory successfully May 17 10:22:01.706656 
kubelet[2319]: E0517 10:22:01.706481 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" May 17 10:22:01.707299 kubelet[2319]: I0517 10:22:01.707265 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 10:22:01.708355 kubelet[2319]: I0517 10:22:01.708321 2319 factory.go:221] Registration of the containerd container factory successfully May 17 10:22:01.709570 kubelet[2319]: E0517 10:22:01.706944 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1840495d92eac168 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 10:22:01.665397096 +0000 UTC m=+0.636205615,LastTimestamp:2025-05-17 10:22:01.665397096 +0000 UTC m=+0.636205615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 10:22:01.710025 kubelet[2319]: E0517 10:22:01.709982 2319 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 10:22:01.724051 kubelet[2319]: I0517 10:22:01.724010 2319 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 10:22:01.724051 kubelet[2319]: I0517 10:22:01.724035 2319 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 10:22:01.724051 kubelet[2319]: I0517 10:22:01.724053 2319 state_mem.go:36] "Initialized new in-memory state store" May 17 10:22:01.733748 kubelet[2319]: I0517 10:22:01.733674 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 10:22:01.734311 kubelet[2319]: I0517 10:22:01.734281 2319 policy_none.go:49] "None policy: Start" May 17 10:22:01.734311 kubelet[2319]: I0517 10:22:01.734310 2319 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 10:22:01.734376 kubelet[2319]: I0517 10:22:01.734325 2319 state_mem.go:35] "Initializing new in-memory state store" May 17 10:22:01.735618 kubelet[2319]: I0517 10:22:01.735546 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 10:22:01.735618 kubelet[2319]: I0517 10:22:01.735578 2319 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 10:22:01.735618 kubelet[2319]: I0517 10:22:01.735605 2319 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 10:22:01.735618 kubelet[2319]: I0517 10:22:01.735614 2319 kubelet.go:2382] "Starting kubelet main sync loop" May 17 10:22:01.735821 kubelet[2319]: E0517 10:22:01.735668 2319 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 10:22:01.736544 kubelet[2319]: W0517 10:22:01.736447 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 17 10:22:01.736544 kubelet[2319]: E0517 10:22:01.736523 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:01.743348 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 10:22:01.764126 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 10:22:01.768304 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 17 10:22:01.778871 kubelet[2319]: I0517 10:22:01.778809 2319 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 10:22:01.779185 kubelet[2319]: I0517 10:22:01.779157 2319 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 10:22:01.779252 kubelet[2319]: I0517 10:22:01.779183 2319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 10:22:01.779733 kubelet[2319]: I0517 10:22:01.779533 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 10:22:01.781228 kubelet[2319]: E0517 10:22:01.781205 2319 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 10:22:01.781394 kubelet[2319]: E0517 10:22:01.781358 2319 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 10:22:01.845081 systemd[1]: Created slice kubepods-burstable-pod6f71d400377d66bae18027eb32f72ce9.slice - libcontainer container kubepods-burstable-pod6f71d400377d66bae18027eb32f72ce9.slice. May 17 10:22:01.863710 kubelet[2319]: E0517 10:22:01.863650 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:22:01.867075 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 17 10:22:01.881303 kubelet[2319]: I0517 10:22:01.881252 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:22:01.881690 kubelet[2319]: E0517 10:22:01.881654 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" May 17 10:22:01.887258 kubelet[2319]: E0517 10:22:01.887210 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:22:01.890480 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 17 10:22:01.892517 kubelet[2319]: E0517 10:22:01.892469 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:22:01.906917 kubelet[2319]: I0517 10:22:01.906828 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f71d400377d66bae18027eb32f72ce9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f71d400377d66bae18027eb32f72ce9\") " pod="kube-system/kube-apiserver-localhost" May 17 10:22:01.907274 kubelet[2319]: E0517 10:22:01.907201 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" May 17 10:22:02.007643 kubelet[2319]: I0517 10:22:02.007582 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f71d400377d66bae18027eb32f72ce9-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"6f71d400377d66bae18027eb32f72ce9\") " pod="kube-system/kube-apiserver-localhost" May 17 10:22:02.007643 kubelet[2319]: I0517 10:22:02.007629 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:22:02.007643 kubelet[2319]: I0517 10:22:02.007646 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 10:22:02.007643 kubelet[2319]: I0517 10:22:02.007660 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:22:02.008002 kubelet[2319]: I0517 10:22:02.007675 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:22:02.008002 kubelet[2319]: I0517 10:22:02.007733 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:22:02.008002 kubelet[2319]: I0517 10:22:02.007823 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f71d400377d66bae18027eb32f72ce9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f71d400377d66bae18027eb32f72ce9\") " pod="kube-system/kube-apiserver-localhost" May 17 10:22:02.008002 kubelet[2319]: I0517 10:22:02.007878 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 10:22:02.083213 kubelet[2319]: I0517 10:22:02.083162 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:22:02.083711 kubelet[2319]: E0517 10:22:02.083631 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" May 17 10:22:02.164910 kubelet[2319]: E0517 10:22:02.164755 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:02.165682 containerd[1560]: time="2025-05-17T10:22:02.165641459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f71d400377d66bae18027eb32f72ce9,Namespace:kube-system,Attempt:0,}" May 17 10:22:02.187918 kubelet[2319]: E0517 10:22:02.187882 2319 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:02.188372 containerd[1560]: time="2025-05-17T10:22:02.188336885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 17 10:22:02.193623 kubelet[2319]: E0517 10:22:02.193594 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:02.194132 containerd[1560]: time="2025-05-17T10:22:02.194079000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 17 10:22:02.308460 kubelet[2319]: E0517 10:22:02.308406 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" May 17 10:22:02.485355 kubelet[2319]: I0517 10:22:02.485319 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:22:02.485781 kubelet[2319]: E0517 10:22:02.485749 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" May 17 10:22:02.569325 containerd[1560]: time="2025-05-17T10:22:02.569257485Z" level=info msg="connecting to shim 7a2767d6d403cd33968715102f154007ab63c3956dc827575a1635f8ca847478" address="unix:///run/containerd/s/bc6440fc63ede8889020d9d40c5924cdb7e27945d083b5032194b400f18c54d3" namespace=k8s.io protocol=ttrpc version=3 May 17 10:22:02.570525 containerd[1560]: time="2025-05-17T10:22:02.570430196Z" level=info 
msg="connecting to shim 415c368a6d4d08a1be9bd5d2946c97f1728613d4b1fa2b4aa9598f5203d90a42" address="unix:///run/containerd/s/e8b89d25a87c3344bc05cfad14d4faffca12a0313bc99c29b5af8e771d98e6bd" namespace=k8s.io protocol=ttrpc version=3 May 17 10:22:02.622018 containerd[1560]: time="2025-05-17T10:22:02.621953596Z" level=info msg="connecting to shim ca16ca3115d42c8964f940b0bfeae19e2b2d3765223a2230e1ae0b064a6935b0" address="unix:///run/containerd/s/f1c23f961abf03d8be6597db9fe1a188e603063bbddbf47affbec358e906b90a" namespace=k8s.io protocol=ttrpc version=3 May 17 10:22:02.639676 systemd[1]: Started cri-containerd-415c368a6d4d08a1be9bd5d2946c97f1728613d4b1fa2b4aa9598f5203d90a42.scope - libcontainer container 415c368a6d4d08a1be9bd5d2946c97f1728613d4b1fa2b4aa9598f5203d90a42. May 17 10:22:02.643886 systemd[1]: Started cri-containerd-7a2767d6d403cd33968715102f154007ab63c3956dc827575a1635f8ca847478.scope - libcontainer container 7a2767d6d403cd33968715102f154007ab63c3956dc827575a1635f8ca847478. May 17 10:22:02.667647 systemd[1]: Started cri-containerd-ca16ca3115d42c8964f940b0bfeae19e2b2d3765223a2230e1ae0b064a6935b0.scope - libcontainer container ca16ca3115d42c8964f940b0bfeae19e2b2d3765223a2230e1ae0b064a6935b0. 
May 17 10:22:02.709193 containerd[1560]: time="2025-05-17T10:22:02.709149312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f71d400377d66bae18027eb32f72ce9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a2767d6d403cd33968715102f154007ab63c3956dc827575a1635f8ca847478\"" May 17 10:22:02.712434 kubelet[2319]: E0517 10:22:02.712404 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:02.717589 containerd[1560]: time="2025-05-17T10:22:02.717546289Z" level=info msg="CreateContainer within sandbox \"7a2767d6d403cd33968715102f154007ab63c3956dc827575a1635f8ca847478\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 10:22:02.731433 containerd[1560]: time="2025-05-17T10:22:02.731399002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca16ca3115d42c8964f940b0bfeae19e2b2d3765223a2230e1ae0b064a6935b0\"" May 17 10:22:02.732479 kubelet[2319]: E0517 10:22:02.732451 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:02.733730 kubelet[2319]: W0517 10:22:02.733701 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 17 10:22:02.733797 kubelet[2319]: E0517 10:22:02.733771 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:02.735113 containerd[1560]: time="2025-05-17T10:22:02.735088686Z" level=info msg="CreateContainer within sandbox \"ca16ca3115d42c8964f940b0bfeae19e2b2d3765223a2230e1ae0b064a6935b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 10:22:02.738950 containerd[1560]: time="2025-05-17T10:22:02.738766867Z" level=info msg="Container 151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:02.751075 containerd[1560]: time="2025-05-17T10:22:02.751036150Z" level=info msg="Container 51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:02.752754 containerd[1560]: time="2025-05-17T10:22:02.752732172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"415c368a6d4d08a1be9bd5d2946c97f1728613d4b1fa2b4aa9598f5203d90a42\"" May 17 10:22:02.753552 kubelet[2319]: E0517 10:22:02.753525 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:02.755801 containerd[1560]: time="2025-05-17T10:22:02.755780472Z" level=info msg="CreateContainer within sandbox \"415c368a6d4d08a1be9bd5d2946c97f1728613d4b1fa2b4aa9598f5203d90a42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 10:22:02.756174 containerd[1560]: time="2025-05-17T10:22:02.756128796Z" level=info msg="CreateContainer within sandbox \"7a2767d6d403cd33968715102f154007ab63c3956dc827575a1635f8ca847478\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec\"" May 17 10:22:02.756868 containerd[1560]: time="2025-05-17T10:22:02.756837025Z" level=info msg="StartContainer for \"151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec\"" May 17 10:22:02.758877 containerd[1560]: time="2025-05-17T10:22:02.758836386Z" level=info msg="connecting to shim 151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec" address="unix:///run/containerd/s/bc6440fc63ede8889020d9d40c5924cdb7e27945d083b5032194b400f18c54d3" protocol=ttrpc version=3 May 17 10:22:02.765323 containerd[1560]: time="2025-05-17T10:22:02.765208173Z" level=info msg="CreateContainer within sandbox \"ca16ca3115d42c8964f940b0bfeae19e2b2d3765223a2230e1ae0b064a6935b0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901\"" May 17 10:22:02.766088 containerd[1560]: time="2025-05-17T10:22:02.766062496Z" level=info msg="StartContainer for \"51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901\"" May 17 10:22:02.767209 containerd[1560]: time="2025-05-17T10:22:02.767185082Z" level=info msg="connecting to shim 51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901" address="unix:///run/containerd/s/f1c23f961abf03d8be6597db9fe1a188e603063bbddbf47affbec358e906b90a" protocol=ttrpc version=3 May 17 10:22:02.770827 containerd[1560]: time="2025-05-17T10:22:02.770798933Z" level=info msg="Container bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:02.778936 containerd[1560]: time="2025-05-17T10:22:02.778887721Z" level=info msg="CreateContainer within sandbox \"415c368a6d4d08a1be9bd5d2946c97f1728613d4b1fa2b4aa9598f5203d90a42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5\"" May 17 10:22:02.779590 containerd[1560]: 
time="2025-05-17T10:22:02.779562057Z" level=info msg="StartContainer for \"bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5\"" May 17 10:22:02.780842 containerd[1560]: time="2025-05-17T10:22:02.780806111Z" level=info msg="connecting to shim bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5" address="unix:///run/containerd/s/e8b89d25a87c3344bc05cfad14d4faffca12a0313bc99c29b5af8e771d98e6bd" protocol=ttrpc version=3 May 17 10:22:02.786685 systemd[1]: Started cri-containerd-151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec.scope - libcontainer container 151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec. May 17 10:22:02.791044 systemd[1]: Started cri-containerd-51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901.scope - libcontainer container 51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901. May 17 10:22:02.827640 systemd[1]: Started cri-containerd-bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5.scope - libcontainer container bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5. 
May 17 10:22:02.855579 containerd[1560]: time="2025-05-17T10:22:02.855532029Z" level=info msg="StartContainer for \"151640c5fd854e37e35240bdcce221c32bb5360efc28dd4f973394bffd68e0ec\" returns successfully" May 17 10:22:02.878526 containerd[1560]: time="2025-05-17T10:22:02.877609916Z" level=info msg="StartContainer for \"51537e262d101be560b775e612b8c64b12c7128d934a705344c6cf2fbc587901\" returns successfully" May 17 10:22:02.892102 containerd[1560]: time="2025-05-17T10:22:02.891974060Z" level=info msg="StartContainer for \"bb10680d145f60cebb422aa79386ac12bd3b8cd724a29b329b9995452bd4fbf5\" returns successfully" May 17 10:22:02.892927 kubelet[2319]: W0517 10:22:02.892868 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 17 10:22:02.892992 kubelet[2319]: E0517 10:22:02.892938 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 17 10:22:03.290523 kubelet[2319]: I0517 10:22:03.290172 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 10:22:03.753284 kubelet[2319]: E0517 10:22:03.753221 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:22:03.753574 kubelet[2319]: E0517 10:22:03.753394 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:03.755384 kubelet[2319]: E0517 10:22:03.755355 2319 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:22:03.755541 kubelet[2319]: E0517 10:22:03.755516 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:03.757745 kubelet[2319]: E0517 10:22:03.757708 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 10:22:03.757877 kubelet[2319]: E0517 10:22:03.757857 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:04.347369 kubelet[2319]: E0517 10:22:04.347291 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 10:22:04.591537 kubelet[2319]: I0517 10:22:04.591150 2319 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 17 10:22:04.591537 kubelet[2319]: E0517 10:22:04.591212 2319 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 17 10:22:04.606919 kubelet[2319]: I0517 10:22:04.606723 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:22:04.658843 kubelet[2319]: I0517 10:22:04.658784 2319 apiserver.go:52] "Watching apiserver" May 17 10:22:04.687666 kubelet[2319]: E0517 10:22:04.687579 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 17 10:22:04.687666 kubelet[2319]: I0517 10:22:04.687644 2319 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:22:04.690703 kubelet[2319]: E0517 10:22:04.690654 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 10:22:04.690703 kubelet[2319]: I0517 10:22:04.690698 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 10:22:04.692524 kubelet[2319]: E0517 10:22:04.692420 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 10:22:04.707879 kubelet[2319]: I0517 10:22:04.707783 2319 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 10:22:04.759447 kubelet[2319]: I0517 10:22:04.759409 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:22:04.759683 kubelet[2319]: I0517 10:22:04.759516 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 10:22:04.759683 kubelet[2319]: I0517 10:22:04.759572 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:22:04.762787 kubelet[2319]: E0517 10:22:04.762753 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 10:22:04.762787 kubelet[2319]: E0517 10:22:04.762781 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" May 17 10:22:04.763076 kubelet[2319]: E0517 10:22:04.762754 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 10:22:04.763076 kubelet[2319]: E0517 10:22:04.763002 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:04.763076 kubelet[2319]: E0517 10:22:04.763004 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:04.763194 kubelet[2319]: E0517 10:22:04.763105 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:05.760847 kubelet[2319]: I0517 10:22:05.760805 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 17 10:22:05.761415 kubelet[2319]: I0517 10:22:05.760989 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 10:22:05.774432 kubelet[2319]: E0517 10:22:05.774386 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:05.782647 kubelet[2319]: E0517 10:22:05.782593 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:06.763099 kubelet[2319]: E0517 10:22:06.763050 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:06.763672 kubelet[2319]: E0517 10:22:06.763403 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:07.468442 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)... May 17 10:22:07.468463 systemd[1]: Reloading... May 17 10:22:07.586537 zram_generator::config[2638]: No configuration found. May 17 10:22:07.701649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 10:22:07.743688 kubelet[2319]: I0517 10:22:07.743567 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 10:22:07.750118 kubelet[2319]: E0517 10:22:07.750052 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:07.764721 kubelet[2319]: E0517 10:22:07.764656 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:07.869091 systemd[1]: Reloading finished in 400 ms. May 17 10:22:07.892943 kubelet[2319]: I0517 10:22:07.892882 2319 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 10:22:07.892938 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:22:07.904894 systemd[1]: kubelet.service: Deactivated successfully. May 17 10:22:07.905267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 10:22:07.905329 systemd[1]: kubelet.service: Consumed 1.276s CPU time, 132.7M memory peak. May 17 10:22:07.907412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 10:22:08.152880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 10:22:08.165887 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 10:22:08.214738 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:22:08.214738 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 10:22:08.214738 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 10:22:08.215308 kubelet[2684]: I0517 10:22:08.214780 2684 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 10:22:08.223509 kubelet[2684]: I0517 10:22:08.223433 2684 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 10:22:08.223649 kubelet[2684]: I0517 10:22:08.223583 2684 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 10:22:08.224001 kubelet[2684]: I0517 10:22:08.223968 2684 server.go:954] "Client rotation is on, will bootstrap in background" May 17 10:22:08.225241 kubelet[2684]: I0517 10:22:08.225211 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 10:22:08.227456 kubelet[2684]: I0517 10:22:08.227414 2684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 10:22:08.234705 kubelet[2684]: I0517 10:22:08.234660 2684 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 17 10:22:08.240983 kubelet[2684]: I0517 10:22:08.240925 2684 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 10:22:08.241486 kubelet[2684]: I0517 10:22:08.241415 2684 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 10:22:08.241709 kubelet[2684]: I0517 10:22:08.241458 2684 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 10:22:08.241709 kubelet[2684]: I0517 10:22:08.241710 2684 topology_manager.go:138] "Creating topology manager with none policy" May 17 10:22:08.241842 kubelet[2684]: I0517 10:22:08.241719 2684 container_manager_linux.go:304] "Creating device plugin manager" May 17 10:22:08.241842 kubelet[2684]: I0517 10:22:08.241834 2684 state_mem.go:36] "Initialized new in-memory state store" May 17 10:22:08.242056 kubelet[2684]: I0517 10:22:08.242035 2684 kubelet.go:446] "Attempting to sync node with API server" May 17 10:22:08.242089 kubelet[2684]: I0517 10:22:08.242065 2684 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 10:22:08.242121 kubelet[2684]: I0517 10:22:08.242104 2684 kubelet.go:352] "Adding apiserver pod source" May 17 10:22:08.242121 kubelet[2684]: I0517 10:22:08.242117 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 10:22:08.243417 kubelet[2684]: I0517 10:22:08.243384 2684 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 17 10:22:08.244021 kubelet[2684]: I0517 10:22:08.243983 2684 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 10:22:08.244891 kubelet[2684]: I0517 10:22:08.244824 2684 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 10:22:08.244891 kubelet[2684]: I0517 10:22:08.244853 2684 server.go:1287] "Started kubelet" May 17 10:22:08.246986 kubelet[2684]: I0517 10:22:08.246932 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 10:22:08.247377 
kubelet[2684]: I0517 10:22:08.247231 2684 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 10:22:08.247377 kubelet[2684]: I0517 10:22:08.247288 2684 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 10:22:08.248137 kubelet[2684]: I0517 10:22:08.248115 2684 server.go:479] "Adding debug handlers to kubelet server"
May 17 10:22:08.248965 kubelet[2684]: I0517 10:22:08.248943 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 10:22:08.250930 kubelet[2684]: E0517 10:22:08.250889 2684 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 10:22:08.252100 kubelet[2684]: I0517 10:22:08.252052 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 10:22:08.255295 kubelet[2684]: E0517 10:22:08.253431 2684 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 10:22:08.256147 kubelet[2684]: I0517 10:22:08.256089 2684 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 10:22:08.257170 kubelet[2684]: I0517 10:22:08.257143 2684 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 10:22:08.258182 kubelet[2684]: I0517 10:22:08.257651 2684 reconciler.go:26] "Reconciler: start to sync state"
May 17 10:22:08.259818 kubelet[2684]: I0517 10:22:08.259534 2684 factory.go:221] Registration of the systemd container factory successfully
May 17 10:22:08.259818 kubelet[2684]: I0517 10:22:08.259648 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 10:22:08.263401 kubelet[2684]: I0517 10:22:08.263366 2684 factory.go:221] Registration of the containerd container factory successfully
May 17 10:22:08.274745 kubelet[2684]: I0517 10:22:08.274677 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 10:22:08.280006 kubelet[2684]: I0517 10:22:08.279961 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 10:22:08.280148 kubelet[2684]: I0517 10:22:08.280017 2684 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 10:22:08.280148 kubelet[2684]: I0517 10:22:08.280055 2684 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 10:22:08.280148 kubelet[2684]: I0517 10:22:08.280065 2684 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 10:22:08.280259 kubelet[2684]: E0517 10:22:08.280156 2684 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 10:22:08.323016 kubelet[2684]: I0517 10:22:08.322943 2684 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 10:22:08.323016 kubelet[2684]: I0517 10:22:08.322969 2684 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 10:22:08.323016 kubelet[2684]: I0517 10:22:08.322989 2684 state_mem.go:36] "Initialized new in-memory state store"
May 17 10:22:08.323287 kubelet[2684]: I0517 10:22:08.323151 2684 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 10:22:08.323287 kubelet[2684]: I0517 10:22:08.323162 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 10:22:08.323287 kubelet[2684]: I0517 10:22:08.323182 2684 policy_none.go:49] "None policy: Start"
May 17 10:22:08.323287 kubelet[2684]: I0517 10:22:08.323197 2684 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 10:22:08.323287 kubelet[2684]: I0517 10:22:08.323211 2684 state_mem.go:35] "Initializing new in-memory state store"
May 17 10:22:08.323438 kubelet[2684]: I0517 10:22:08.323327 2684 state_mem.go:75] "Updated machine memory state"
May 17 10:22:08.329057 kubelet[2684]: I0517 10:22:08.329018 2684 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 10:22:08.329389 kubelet[2684]: I0517 10:22:08.329358 2684 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 10:22:08.329473 kubelet[2684]: I0517 10:22:08.329397 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 10:22:08.329768 kubelet[2684]: I0517 10:22:08.329741 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 10:22:08.330863 kubelet[2684]: E0517 10:22:08.330827 2684 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 10:22:08.381952 kubelet[2684]: I0517 10:22:08.381906 2684 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 17 10:22:08.382052 kubelet[2684]: I0517 10:22:08.381983 2684 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.382052 kubelet[2684]: I0517 10:22:08.381916 2684 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 17 10:22:08.388526 kubelet[2684]: E0517 10:22:08.388468 2684 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 17 10:22:08.388989 kubelet[2684]: E0517 10:22:08.388944 2684 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.388989 kubelet[2684]: E0517 10:22:08.388964 2684 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 17 10:22:08.435395 kubelet[2684]: I0517 10:22:08.435244 2684 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 17 10:22:08.444284 kubelet[2684]: I0517 10:22:08.444140 2684 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 17 10:22:08.444284 kubelet[2684]: I0517 10:22:08.444232 2684 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 17 10:22:08.464099 sudo[2721]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 17 10:22:08.464531 sudo[2721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 17 10:22:08.558358 kubelet[2684]: I0517 10:22:08.558216 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.558358 kubelet[2684]: I0517 10:22:08.558337 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.558358 kubelet[2684]: I0517 10:22:08.558368 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.558624 kubelet[2684]: I0517 10:22:08.558395 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.558624 kubelet[2684]: I0517 10:22:08.558420 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 10:22:08.558624 kubelet[2684]: I0517 10:22:08.558445 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f71d400377d66bae18027eb32f72ce9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f71d400377d66bae18027eb32f72ce9\") " pod="kube-system/kube-apiserver-localhost"
May 17 10:22:08.558624 kubelet[2684]: I0517 10:22:08.558466 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost"
May 17 10:22:08.559660 kubelet[2684]: I0517 10:22:08.558485 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f71d400377d66bae18027eb32f72ce9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f71d400377d66bae18027eb32f72ce9\") " pod="kube-system/kube-apiserver-localhost"
May 17 10:22:08.559660 kubelet[2684]: I0517 10:22:08.558742 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f71d400377d66bae18027eb32f72ce9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f71d400377d66bae18027eb32f72ce9\") " pod="kube-system/kube-apiserver-localhost"
May 17 10:22:08.689820 kubelet[2684]: E0517 10:22:08.688895 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:08.689820 kubelet[2684]: E0517 10:22:08.689751 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:08.690020 kubelet[2684]: E0517 10:22:08.689868 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:09.044103 sudo[2721]: pam_unix(sudo:session): session closed for user root
May 17 10:22:09.243159 kubelet[2684]: I0517 10:22:09.243079 2684 apiserver.go:52] "Watching apiserver"
May 17 10:22:09.258487 kubelet[2684]: I0517 10:22:09.258083 2684 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 10:22:09.293505 kubelet[2684]: I0517 10:22:09.293450 2684 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 17 10:22:09.294014 kubelet[2684]: E0517 10:22:09.293975 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:09.294328 kubelet[2684]: E0517 10:22:09.294242 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:09.329439 kubelet[2684]: E0517 10:22:09.329380 2684 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 17 10:22:09.329680 kubelet[2684]: E0517 10:22:09.329631 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:09.376660 kubelet[2684]: I0517 10:22:09.376588 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.376560822 podStartE2EDuration="2.376560822s" podCreationTimestamp="2025-05-17 10:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:22:09.376026048 +0000 UTC m=+1.205334915" watchObservedRunningTime="2025-05-17 10:22:09.376560822 +0000 UTC m=+1.205869699"
May 17 10:22:09.543227 kubelet[2684]: I0517 10:22:09.543160 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.543136409 podStartE2EDuration="4.543136409s" podCreationTimestamp="2025-05-17 10:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:22:09.542988652 +0000 UTC m=+1.372297529" watchObservedRunningTime="2025-05-17 10:22:09.543136409 +0000 UTC m=+1.372445286"
May 17 10:22:09.543421 kubelet[2684]: I0517 10:22:09.543297 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.543293073 podStartE2EDuration="4.543293073s" podCreationTimestamp="2025-05-17 10:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:22:09.428235736 +0000 UTC m=+1.257544623" watchObservedRunningTime="2025-05-17 10:22:09.543293073 +0000 UTC m=+1.372601950"
May 17 10:22:10.294633 kubelet[2684]: E0517 10:22:10.294595 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:10.294633 kubelet[2684]: E0517 10:22:10.294647 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:10.618950 sudo[1770]: pam_unix(sudo:session): session closed for user root
May 17 10:22:10.620445 sshd[1769]: Connection closed by 10.0.0.1 port 47364
May 17 10:22:10.621185 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
May 17 10:22:10.625453 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:47364.service: Deactivated successfully.
May 17 10:22:10.627827 systemd[1]: session-7.scope: Deactivated successfully.
May 17 10:22:10.628073 systemd[1]: session-7.scope: Consumed 4.620s CPU time, 261.5M memory peak.
May 17 10:22:10.629300 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit.
May 17 10:22:10.630715 systemd-logind[1545]: Removed session 7.
May 17 10:22:12.153546 kubelet[2684]: E0517 10:22:12.153508 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:12.252043 kubelet[2684]: I0517 10:22:12.251999 2684 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 10:22:12.252471 containerd[1560]: time="2025-05-17T10:22:12.252415417Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 10:22:12.252849 kubelet[2684]: I0517 10:22:12.252639 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 10:22:12.298661 kubelet[2684]: E0517 10:22:12.298586 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:13.024043 systemd[1]: Created slice kubepods-besteffort-podec9f4787_60ce_485e_bc01_db9490cf0810.slice - libcontainer container kubepods-besteffort-podec9f4787_60ce_485e_bc01_db9490cf0810.slice.
May 17 10:22:13.036749 systemd[1]: Created slice kubepods-burstable-pod2cdb9171_a0a6_4938_ac79_0069b7567752.slice - libcontainer container kubepods-burstable-pod2cdb9171_a0a6_4938_ac79_0069b7567752.slice.
May 17 10:22:13.085966 kubelet[2684]: I0517 10:22:13.085895 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-xtables-lock\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086161 kubelet[2684]: I0517 10:22:13.086006 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-hubble-tls\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086161 kubelet[2684]: I0517 10:22:13.086035 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-config-path\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086290 kubelet[2684]: I0517 10:22:13.086267 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec9f4787-60ce-485e-bc01-db9490cf0810-kube-proxy\") pod \"kube-proxy-cjxj4\" (UID: \"ec9f4787-60ce-485e-bc01-db9490cf0810\") " pod="kube-system/kube-proxy-cjxj4"
May 17 10:22:13.086341 kubelet[2684]: I0517 10:22:13.086302 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec9f4787-60ce-485e-bc01-db9490cf0810-xtables-lock\") pod \"kube-proxy-cjxj4\" (UID: \"ec9f4787-60ce-485e-bc01-db9490cf0810\") " pod="kube-system/kube-proxy-cjxj4"
May 17 10:22:13.086341 kubelet[2684]: I0517 10:22:13.086324 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-run\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086524 kubelet[2684]: I0517 10:22:13.086344 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf28s\" (UniqueName: \"kubernetes.io/projected/ec9f4787-60ce-485e-bc01-db9490cf0810-kube-api-access-pf28s\") pod \"kube-proxy-cjxj4\" (UID: \"ec9f4787-60ce-485e-bc01-db9490cf0810\") " pod="kube-system/kube-proxy-cjxj4"
May 17 10:22:13.086524 kubelet[2684]: I0517 10:22:13.086364 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cni-path\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086524 kubelet[2684]: I0517 10:22:13.086382 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-hostproc\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086524 kubelet[2684]: I0517 10:22:13.086402 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-kernel\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.086524 kubelet[2684]: I0517 10:22:13.086426 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec9f4787-60ce-485e-bc01-db9490cf0810-lib-modules\") pod \"kube-proxy-cjxj4\" (UID: \"ec9f4787-60ce-485e-bc01-db9490cf0810\") " pod="kube-system/kube-proxy-cjxj4"
May 17 10:22:13.086524 kubelet[2684]: I0517 10:22:13.086447 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-etc-cni-netd\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.087642 kubelet[2684]: I0517 10:22:13.086472 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-net\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.087642 kubelet[2684]: I0517 10:22:13.086830 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-bpf-maps\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.087642 kubelet[2684]: I0517 10:22:13.086911 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cdb9171-a0a6-4938-ac79-0069b7567752-clustermesh-secrets\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.087642 kubelet[2684]: I0517 10:22:13.086993 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjbp\" (UniqueName: \"kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-kube-api-access-qpjbp\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.087642 kubelet[2684]: I0517 10:22:13.087107 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-cgroup\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.087642 kubelet[2684]: I0517 10:22:13.087210 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-lib-modules\") pod \"cilium-qlz27\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " pod="kube-system/cilium-qlz27"
May 17 10:22:13.336396 systemd[1]: Created slice kubepods-besteffort-pod3968e703_1df7_40fe_9f67_e2bca1f2f27a.slice - libcontainer container kubepods-besteffort-pod3968e703_1df7_40fe_9f67_e2bca1f2f27a.slice.
May 17 10:22:13.353147 kubelet[2684]: E0517 10:22:13.353085 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:13.353651 kubelet[2684]: E0517 10:22:13.353416 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:13.353868 containerd[1560]: time="2025-05-17T10:22:13.353825259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cjxj4,Uid:ec9f4787-60ce-485e-bc01-db9490cf0810,Namespace:kube-system,Attempt:0,}"
May 17 10:22:13.354618 containerd[1560]: time="2025-05-17T10:22:13.354291260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlz27,Uid:2cdb9171-a0a6-4938-ac79-0069b7567752,Namespace:kube-system,Attempt:0,}"
May 17 10:22:13.390776 kubelet[2684]: I0517 10:22:13.390727 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3968e703-1df7-40fe-9f67-e2bca1f2f27a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rv2z4\" (UID: \"3968e703-1df7-40fe-9f67-e2bca1f2f27a\") " pod="kube-system/cilium-operator-6c4d7847fc-rv2z4"
May 17 10:22:13.391051 kubelet[2684]: I0517 10:22:13.390785 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvrwp\" (UniqueName: \"kubernetes.io/projected/3968e703-1df7-40fe-9f67-e2bca1f2f27a-kube-api-access-fvrwp\") pod \"cilium-operator-6c4d7847fc-rv2z4\" (UID: \"3968e703-1df7-40fe-9f67-e2bca1f2f27a\") " pod="kube-system/cilium-operator-6c4d7847fc-rv2z4"
May 17 10:22:13.393736 containerd[1560]: time="2025-05-17T10:22:13.393641839Z" level=info msg="connecting to shim e9fe07dff18fffbfd64bb347e4e08091d3de51e3ec30d8ceb20c494173b4289e" address="unix:///run/containerd/s/5a788ecc4e64cdb093544afd78d4eff1f30fba36a2d7843b3a1a6b57eacd9bd2" namespace=k8s.io protocol=ttrpc version=3
May 17 10:22:13.396151 containerd[1560]: time="2025-05-17T10:22:13.396109501Z" level=info msg="connecting to shim f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf" address="unix:///run/containerd/s/d401e95761a2d67e649810cfba6e6f70d8d6d70c49ff5eae7a2c5bceab2aa8d7" namespace=k8s.io protocol=ttrpc version=3
May 17 10:22:13.426667 systemd[1]: Started cri-containerd-e9fe07dff18fffbfd64bb347e4e08091d3de51e3ec30d8ceb20c494173b4289e.scope - libcontainer container e9fe07dff18fffbfd64bb347e4e08091d3de51e3ec30d8ceb20c494173b4289e.
May 17 10:22:13.430239 systemd[1]: Started cri-containerd-f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf.scope - libcontainer container f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf.
May 17 10:22:13.462119 containerd[1560]: time="2025-05-17T10:22:13.462051509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlz27,Uid:2cdb9171-a0a6-4938-ac79-0069b7567752,Namespace:kube-system,Attempt:0,} returns sandbox id \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\""
May 17 10:22:13.463764 kubelet[2684]: E0517 10:22:13.463736 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:13.465424 containerd[1560]: time="2025-05-17T10:22:13.465341164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cjxj4,Uid:ec9f4787-60ce-485e-bc01-db9490cf0810,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9fe07dff18fffbfd64bb347e4e08091d3de51e3ec30d8ceb20c494173b4289e\""
May 17 10:22:13.466850 kubelet[2684]: E0517 10:22:13.466542 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:13.468005 containerd[1560]: time="2025-05-17T10:22:13.467963413Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 10:22:13.468938 containerd[1560]: time="2025-05-17T10:22:13.468892130Z" level=info msg="CreateContainer within sandbox \"e9fe07dff18fffbfd64bb347e4e08091d3de51e3ec30d8ceb20c494173b4289e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 10:22:13.486601 containerd[1560]: time="2025-05-17T10:22:13.486549022Z" level=info msg="Container fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2: CDI devices from CRI Config.CDIDevices: []"
May 17 10:22:13.501904 containerd[1560]: time="2025-05-17T10:22:13.501851687Z" level=info msg="CreateContainer within sandbox \"e9fe07dff18fffbfd64bb347e4e08091d3de51e3ec30d8ceb20c494173b4289e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2\""
May 17 10:22:13.502642 containerd[1560]: time="2025-05-17T10:22:13.502615960Z" level=info msg="StartContainer for \"fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2\""
May 17 10:22:13.504440 containerd[1560]: time="2025-05-17T10:22:13.504397781Z" level=info msg="connecting to shim fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2" address="unix:///run/containerd/s/5a788ecc4e64cdb093544afd78d4eff1f30fba36a2d7843b3a1a6b57eacd9bd2" protocol=ttrpc version=3
May 17 10:22:13.533742 systemd[1]: Started cri-containerd-fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2.scope - libcontainer container fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2.
May 17 10:22:13.587529 containerd[1560]: time="2025-05-17T10:22:13.587462627Z" level=info msg="StartContainer for \"fe0bd35494750a61005b4efa2b225109e302007e341513d59ed24d33d0a43da2\" returns successfully"
May 17 10:22:13.642248 kubelet[2684]: E0517 10:22:13.642092 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:13.642844 containerd[1560]: time="2025-05-17T10:22:13.642807094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rv2z4,Uid:3968e703-1df7-40fe-9f67-e2bca1f2f27a,Namespace:kube-system,Attempt:0,}"
May 17 10:22:13.685400 containerd[1560]: time="2025-05-17T10:22:13.685323030Z" level=info msg="connecting to shim 6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc" address="unix:///run/containerd/s/f02030b17adbc74f06e26e26c9eba83c61dfa3ad8366d20f91dd52eee4268328" namespace=k8s.io protocol=ttrpc version=3
May 17 10:22:13.749830 systemd[1]: Started cri-containerd-6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc.scope - libcontainer container 6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc.
May 17 10:22:13.805250 containerd[1560]: time="2025-05-17T10:22:13.805189347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rv2z4,Uid:3968e703-1df7-40fe-9f67-e2bca1f2f27a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\""
May 17 10:22:13.805824 kubelet[2684]: E0517 10:22:13.805792 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:14.304372 kubelet[2684]: E0517 10:22:14.304335 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:14.313362 kubelet[2684]: I0517 10:22:14.313282 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cjxj4" podStartSLOduration=2.313259339 podStartE2EDuration="2.313259339s" podCreationTimestamp="2025-05-17 10:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:22:14.313230283 +0000 UTC m=+6.142539170" watchObservedRunningTime="2025-05-17 10:22:14.313259339 +0000 UTC m=+6.142568216"
May 17 10:22:15.706694 kubelet[2684]: E0517 10:22:15.706583 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:15.862743 kubelet[2684]: E0517 10:22:15.862682 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:16.307794 kubelet[2684]: E0517 10:22:16.307628 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:16.307794 kubelet[2684]: E0517 10:22:16.307698 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:17.309242 kubelet[2684]: E0517 10:22:17.309202 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:22:21.499291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100916475.mount: Deactivated successfully.
May 17 10:22:24.760692 containerd[1560]: time="2025-05-17T10:22:24.760634899Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:22:24.761339 containerd[1560]: time="2025-05-17T10:22:24.761302315Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 17 10:22:24.762572 containerd[1560]: time="2025-05-17T10:22:24.762544177Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 10:22:24.763911 containerd[1560]: time="2025-05-17T10:22:24.763875991Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.295870728s"
May 17 10:22:24.763911 containerd[1560]: time="2025-05-17T10:22:24.763908272Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 10:22:24.765042 containerd[1560]: time="2025-05-17T10:22:24.764815441Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 10:22:24.766567 containerd[1560]: time="2025-05-17T10:22:24.766526113Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 10:22:24.777310 containerd[1560]: time="2025-05-17T10:22:24.777255833Z" level=info msg="Container 379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13: CDI devices from CRI Config.CDIDevices: []"
May 17 10:22:24.784770 containerd[1560]: time="2025-05-17T10:22:24.784725767Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\""
May 17 10:22:24.785072 containerd[1560]: time="2025-05-17T10:22:24.785034021Z" level=info msg="StartContainer for \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\""
May 17 10:22:24.785993 containerd[1560]: time="2025-05-17T10:22:24.785924669Z" level=info msg="connecting to shim 379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13" address="unix:///run/containerd/s/d401e95761a2d67e649810cfba6e6f70d8d6d70c49ff5eae7a2c5bceab2aa8d7" protocol=ttrpc version=3
May 17 10:22:24.812650 systemd[1]: Started cri-containerd-379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13.scope - libcontainer container 379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13.
May 17 10:22:24.849305 containerd[1560]: time="2025-05-17T10:22:24.849250739Z" level=info msg="StartContainer for \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" returns successfully"
May 17 10:22:24.858769 systemd[1]: cri-containerd-379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13.scope: Deactivated successfully.
May 17 10:22:24.860657 containerd[1560]: time="2025-05-17T10:22:24.860619991Z" level=info msg="received exit event container_id:\"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" id:\"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" pid:3103 exited_at:{seconds:1747477344 nanos:859985799}"
May 17 10:22:24.861213 containerd[1560]: time="2025-05-17T10:22:24.861165063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" id:\"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" pid:3103 exited_at:{seconds:1747477344 nanos:859985799}"
May 17 10:22:24.880825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13-rootfs.mount: Deactivated successfully.
May 17 10:22:24.940781 update_engine[1549]: I20250517 10:22:24.940659 1549 update_attempter.cc:509] Updating boot flags...
May 17 10:22:25.328694 kubelet[2684]: E0517 10:22:25.328621 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:25.335436 containerd[1560]: time="2025-05-17T10:22:25.333394518Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 10:22:25.368099 containerd[1560]: time="2025-05-17T10:22:25.366070605Z" level=info msg="Container 759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:25.377968 containerd[1560]: time="2025-05-17T10:22:25.377834141Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\"" May 17 10:22:25.379947 containerd[1560]: time="2025-05-17T10:22:25.379431445Z" level=info msg="StartContainer for \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\"" May 17 10:22:25.381080 containerd[1560]: time="2025-05-17T10:22:25.380987492Z" level=info msg="connecting to shim 759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a" address="unix:///run/containerd/s/d401e95761a2d67e649810cfba6e6f70d8d6d70c49ff5eae7a2c5bceab2aa8d7" protocol=ttrpc version=3 May 17 10:22:25.452632 systemd[1]: Started cri-containerd-759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a.scope - libcontainer container 759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a. 
May 17 10:22:25.483549 containerd[1560]: time="2025-05-17T10:22:25.483462383Z" level=info msg="StartContainer for \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" returns successfully" May 17 10:22:25.497771 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 10:22:25.498144 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 10:22:25.498698 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 10:22:25.501537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 10:22:25.502021 systemd[1]: cri-containerd-759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a.scope: Deactivated successfully. May 17 10:22:25.502467 containerd[1560]: time="2025-05-17T10:22:25.502306117Z" level=info msg="received exit event container_id:\"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" id:\"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" pid:3165 exited_at:{seconds:1747477345 nanos:502047196}" May 17 10:22:25.502467 containerd[1560]: time="2025-05-17T10:22:25.502335422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" id:\"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" pid:3165 exited_at:{seconds:1747477345 nanos:502047196}" May 17 10:22:25.525305 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 17 10:22:26.331653 kubelet[2684]: E0517 10:22:26.331618 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:26.334007 containerd[1560]: time="2025-05-17T10:22:26.333874993Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 10:22:26.351323 containerd[1560]: time="2025-05-17T10:22:26.351267709Z" level=info msg="Container 044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:26.355737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925890825.mount: Deactivated successfully. May 17 10:22:26.362217 containerd[1560]: time="2025-05-17T10:22:26.362181515Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\"" May 17 10:22:26.362759 containerd[1560]: time="2025-05-17T10:22:26.362726396Z" level=info msg="StartContainer for \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\"" May 17 10:22:26.364152 containerd[1560]: time="2025-05-17T10:22:26.364126666Z" level=info msg="connecting to shim 044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9" address="unix:///run/containerd/s/d401e95761a2d67e649810cfba6e6f70d8d6d70c49ff5eae7a2c5bceab2aa8d7" protocol=ttrpc version=3 May 17 10:22:26.386685 systemd[1]: Started cri-containerd-044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9.scope - libcontainer container 044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9. 
May 17 10:22:26.432762 systemd[1]: cri-containerd-044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9.scope: Deactivated successfully. May 17 10:22:26.434092 containerd[1560]: time="2025-05-17T10:22:26.433992783Z" level=info msg="StartContainer for \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" returns successfully" May 17 10:22:26.435006 containerd[1560]: time="2025-05-17T10:22:26.434983758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" id:\"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" pid:3213 exited_at:{seconds:1747477346 nanos:434709710}" May 17 10:22:26.435094 containerd[1560]: time="2025-05-17T10:22:26.435037901Z" level=info msg="received exit event container_id:\"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" id:\"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" pid:3213 exited_at:{seconds:1747477346 nanos:434709710}" May 17 10:22:26.778305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841188943.mount: Deactivated successfully. May 17 10:22:26.778434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9-rootfs.mount: Deactivated successfully. 
May 17 10:22:27.336533 kubelet[2684]: E0517 10:22:27.336466 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:27.344100 containerd[1560]: time="2025-05-17T10:22:27.344049760Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 10:22:27.355959 containerd[1560]: time="2025-05-17T10:22:27.355906277Z" level=info msg="Container 03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:27.360557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725153352.mount: Deactivated successfully. May 17 10:22:27.365537 containerd[1560]: time="2025-05-17T10:22:27.365503812Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\"" May 17 10:22:27.366026 containerd[1560]: time="2025-05-17T10:22:27.365993377Z" level=info msg="StartContainer for \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\"" May 17 10:22:27.367104 containerd[1560]: time="2025-05-17T10:22:27.367064503Z" level=info msg="connecting to shim 03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce" address="unix:///run/containerd/s/d401e95761a2d67e649810cfba6e6f70d8d6d70c49ff5eae7a2c5bceab2aa8d7" protocol=ttrpc version=3 May 17 10:22:27.389657 systemd[1]: Started cri-containerd-03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce.scope - libcontainer container 03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce. 
May 17 10:22:27.419611 systemd[1]: cri-containerd-03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce.scope: Deactivated successfully. May 17 10:22:27.420346 containerd[1560]: time="2025-05-17T10:22:27.420308518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" id:\"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" pid:3260 exited_at:{seconds:1747477347 nanos:419878514}" May 17 10:22:27.421316 containerd[1560]: time="2025-05-17T10:22:27.421278843Z" level=info msg="received exit event container_id:\"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" id:\"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" pid:3260 exited_at:{seconds:1747477347 nanos:419878514}" May 17 10:22:27.428823 containerd[1560]: time="2025-05-17T10:22:27.428776045Z" level=info msg="StartContainer for \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" returns successfully" May 17 10:22:27.442515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce-rootfs.mount: Deactivated successfully. 
May 17 10:22:28.344246 kubelet[2684]: E0517 10:22:28.344200 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:28.347300 containerd[1560]: time="2025-05-17T10:22:28.347248704Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 10:22:28.362401 containerd[1560]: time="2025-05-17T10:22:28.362344655Z" level=info msg="Container 128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:28.369744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867404016.mount: Deactivated successfully. May 17 10:22:28.374625 containerd[1560]: time="2025-05-17T10:22:28.374171385Z" level=info msg="CreateContainer within sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\"" May 17 10:22:28.375989 containerd[1560]: time="2025-05-17T10:22:28.375954636Z" level=info msg="StartContainer for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\"" May 17 10:22:28.377634 containerd[1560]: time="2025-05-17T10:22:28.377534744Z" level=info msg="connecting to shim 128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9" address="unix:///run/containerd/s/d401e95761a2d67e649810cfba6e6f70d8d6d70c49ff5eae7a2c5bceab2aa8d7" protocol=ttrpc version=3 May 17 10:22:28.408679 systemd[1]: Started cri-containerd-128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9.scope - libcontainer container 128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9. 
May 17 10:22:28.471195 containerd[1560]: time="2025-05-17T10:22:28.471133322Z" level=info msg="StartContainer for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" returns successfully" May 17 10:22:28.556678 containerd[1560]: time="2025-05-17T10:22:28.554606528Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:22:28.556678 containerd[1560]: time="2025-05-17T10:22:28.555488536Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 17 10:22:28.566771 containerd[1560]: time="2025-05-17T10:22:28.566487921Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.801638025s" May 17 10:22:28.566771 containerd[1560]: time="2025-05-17T10:22:28.566582059Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 10:22:28.567209 containerd[1560]: time="2025-05-17T10:22:28.567154300Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 10:22:28.572514 containerd[1560]: time="2025-05-17T10:22:28.572330196Z" level=info msg="CreateContainer within sandbox 
\"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 10:22:28.580978 containerd[1560]: time="2025-05-17T10:22:28.580914977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" id:\"cde19581bd6bd528c25a1f3263d20a567ff4ee33bcab6e663369f9f1b757dabd\" pid:3334 exited_at:{seconds:1747477348 nanos:579779200}" May 17 10:22:28.674963 kubelet[2684]: I0517 10:22:28.674793 2684 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 10:22:28.701649 containerd[1560]: time="2025-05-17T10:22:28.700694814Z" level=info msg="Container 70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:28.714530 containerd[1560]: time="2025-05-17T10:22:28.714411146Z" level=info msg="CreateContainer within sandbox \"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\"" May 17 10:22:28.716369 containerd[1560]: time="2025-05-17T10:22:28.715979631Z" level=info msg="StartContainer for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\"" May 17 10:22:28.718068 containerd[1560]: time="2025-05-17T10:22:28.718036351Z" level=info msg="connecting to shim 70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2" address="unix:///run/containerd/s/f02030b17adbc74f06e26e26c9eba83c61dfa3ad8366d20f91dd52eee4268328" protocol=ttrpc version=3 May 17 10:22:28.744694 systemd[1]: Created slice kubepods-burstable-podf87cee3b_c08b_4ec0_a398_5f0dc405fb69.slice - libcontainer container kubepods-burstable-podf87cee3b_c08b_4ec0_a398_5f0dc405fb69.slice. 
May 17 10:22:28.756030 systemd[1]: Created slice kubepods-burstable-podd01b5d2f_6db1_4f62_ac23_5d68f032eca9.slice - libcontainer container kubepods-burstable-podd01b5d2f_6db1_4f62_ac23_5d68f032eca9.slice. May 17 10:22:28.772895 systemd[1]: Started cri-containerd-70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2.scope - libcontainer container 70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2. May 17 10:22:28.792504 kubelet[2684]: I0517 10:22:28.792377 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d01b5d2f-6db1-4f62-ac23-5d68f032eca9-config-volume\") pod \"coredns-668d6bf9bc-sx65c\" (UID: \"d01b5d2f-6db1-4f62-ac23-5d68f032eca9\") " pod="kube-system/coredns-668d6bf9bc-sx65c" May 17 10:22:28.792717 kubelet[2684]: I0517 10:22:28.792533 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phpp\" (UniqueName: \"kubernetes.io/projected/f87cee3b-c08b-4ec0-a398-5f0dc405fb69-kube-api-access-5phpp\") pod \"coredns-668d6bf9bc-m52nb\" (UID: \"f87cee3b-c08b-4ec0-a398-5f0dc405fb69\") " pod="kube-system/coredns-668d6bf9bc-m52nb" May 17 10:22:28.792717 kubelet[2684]: I0517 10:22:28.792604 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zlg\" (UniqueName: \"kubernetes.io/projected/d01b5d2f-6db1-4f62-ac23-5d68f032eca9-kube-api-access-z6zlg\") pod \"coredns-668d6bf9bc-sx65c\" (UID: \"d01b5d2f-6db1-4f62-ac23-5d68f032eca9\") " pod="kube-system/coredns-668d6bf9bc-sx65c" May 17 10:22:28.792717 kubelet[2684]: I0517 10:22:28.792634 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f87cee3b-c08b-4ec0-a398-5f0dc405fb69-config-volume\") pod \"coredns-668d6bf9bc-m52nb\" (UID: \"f87cee3b-c08b-4ec0-a398-5f0dc405fb69\") 
" pod="kube-system/coredns-668d6bf9bc-m52nb" May 17 10:22:28.818740 containerd[1560]: time="2025-05-17T10:22:28.818690809Z" level=info msg="StartContainer for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" returns successfully" May 17 10:22:29.051694 kubelet[2684]: E0517 10:22:29.051631 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:29.052983 containerd[1560]: time="2025-05-17T10:22:29.052935779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m52nb,Uid:f87cee3b-c08b-4ec0-a398-5f0dc405fb69,Namespace:kube-system,Attempt:0,}" May 17 10:22:29.062563 kubelet[2684]: E0517 10:22:29.062084 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:29.063374 containerd[1560]: time="2025-05-17T10:22:29.063326946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sx65c,Uid:d01b5d2f-6db1-4f62-ac23-5d68f032eca9,Namespace:kube-system,Attempt:0,}" May 17 10:22:29.356157 kubelet[2684]: E0517 10:22:29.355987 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:29.367173 kubelet[2684]: E0517 10:22:29.367121 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:29.425601 kubelet[2684]: I0517 10:22:29.425519 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qlz27" podStartSLOduration=6.126155203 podStartE2EDuration="17.425474465s" podCreationTimestamp="2025-05-17 10:22:12 +0000 UTC" firstStartedPulling="2025-05-17 
10:22:13.465387433 +0000 UTC m=+5.294696310" lastFinishedPulling="2025-05-17 10:22:24.764706695 +0000 UTC m=+16.594015572" observedRunningTime="2025-05-17 10:22:29.424030848 +0000 UTC m=+21.253339725" watchObservedRunningTime="2025-05-17 10:22:29.425474465 +0000 UTC m=+21.254783342" May 17 10:22:29.425819 kubelet[2684]: I0517 10:22:29.425713 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rv2z4" podStartSLOduration=1.663493179 podStartE2EDuration="16.425706784s" podCreationTimestamp="2025-05-17 10:22:13 +0000 UTC" firstStartedPulling="2025-05-17 10:22:13.806434711 +0000 UTC m=+5.635743588" lastFinishedPulling="2025-05-17 10:22:28.568648316 +0000 UTC m=+20.397957193" observedRunningTime="2025-05-17 10:22:29.403609607 +0000 UTC m=+21.232918484" watchObservedRunningTime="2025-05-17 10:22:29.425706784 +0000 UTC m=+21.255015671" May 17 10:22:30.368877 kubelet[2684]: E0517 10:22:30.368826 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:30.368877 kubelet[2684]: E0517 10:22:30.368892 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:31.371994 kubelet[2684]: E0517 10:22:31.371938 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:31.685807 systemd-networkd[1492]: cilium_host: Link UP May 17 10:22:31.685976 systemd-networkd[1492]: cilium_net: Link UP May 17 10:22:31.686185 systemd-networkd[1492]: cilium_net: Gained carrier May 17 10:22:31.686364 systemd-networkd[1492]: cilium_host: Gained carrier May 17 10:22:31.797683 systemd-networkd[1492]: cilium_vxlan: Link UP May 17 10:22:31.797694 
systemd-networkd[1492]: cilium_vxlan: Gained carrier May 17 10:22:31.894753 systemd-networkd[1492]: cilium_net: Gained IPv6LL May 17 10:22:32.021542 kernel: NET: Registered PF_ALG protocol family May 17 10:22:32.055840 systemd-networkd[1492]: cilium_host: Gained IPv6LL May 17 10:22:32.725362 systemd-networkd[1492]: lxc_health: Link UP May 17 10:22:32.725819 systemd-networkd[1492]: lxc_health: Gained carrier May 17 10:22:33.105543 kernel: eth0: renamed from tmpec7ff May 17 10:22:33.106148 systemd-networkd[1492]: lxce3889936bb1c: Link UP May 17 10:22:33.108786 systemd-networkd[1492]: lxce3889936bb1c: Gained carrier May 17 10:22:33.132548 kernel: eth0: renamed from tmpc453f May 17 10:22:33.134013 systemd-networkd[1492]: lxc8df2762f6836: Link UP May 17 10:22:33.136840 systemd-networkd[1492]: lxc8df2762f6836: Gained carrier May 17 10:22:33.207685 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL May 17 10:22:33.358858 kubelet[2684]: E0517 10:22:33.357999 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:33.377647 kubelet[2684]: E0517 10:22:33.377166 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:34.038723 systemd-networkd[1492]: lxc_health: Gained IPv6LL May 17 10:22:34.378958 kubelet[2684]: E0517 10:22:34.378833 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:34.806768 systemd-networkd[1492]: lxce3889936bb1c: Gained IPv6LL May 17 10:22:35.126721 systemd-networkd[1492]: lxc8df2762f6836: Gained IPv6LL May 17 10:22:36.886351 containerd[1560]: time="2025-05-17T10:22:36.886282447Z" level=info msg="connecting to shim 
c453fa48db64e679a688911e88e6eff32a4fdecf06129081df06ba4de639b653" address="unix:///run/containerd/s/cc2a7a0c5cfd09b7ceac9ff652a1f8c6c0da3051f49191d7e38112d656dd30dd" namespace=k8s.io protocol=ttrpc version=3 May 17 10:22:36.907624 containerd[1560]: time="2025-05-17T10:22:36.907568688Z" level=info msg="connecting to shim ec7ff61732dc41c68b5d634a77acd07d180220dbbc74c18918aedc033015b3c2" address="unix:///run/containerd/s/b22757e3e90f81489ffac1892f07283c238b0eb866f5b04232ba152ab14ff68f" namespace=k8s.io protocol=ttrpc version=3 May 17 10:22:36.920743 systemd[1]: Started cri-containerd-c453fa48db64e679a688911e88e6eff32a4fdecf06129081df06ba4de639b653.scope - libcontainer container c453fa48db64e679a688911e88e6eff32a4fdecf06129081df06ba4de639b653. May 17 10:22:36.937451 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 10:22:36.959028 systemd[1]: Started cri-containerd-ec7ff61732dc41c68b5d634a77acd07d180220dbbc74c18918aedc033015b3c2.scope - libcontainer container ec7ff61732dc41c68b5d634a77acd07d180220dbbc74c18918aedc033015b3c2. 
May 17 10:22:36.978670 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 10:22:36.994397 containerd[1560]: time="2025-05-17T10:22:36.994266855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sx65c,Uid:d01b5d2f-6db1-4f62-ac23-5d68f032eca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c453fa48db64e679a688911e88e6eff32a4fdecf06129081df06ba4de639b653\"" May 17 10:22:37.000751 kubelet[2684]: E0517 10:22:37.000711 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:37.003425 containerd[1560]: time="2025-05-17T10:22:37.002932729Z" level=info msg="CreateContainer within sandbox \"c453fa48db64e679a688911e88e6eff32a4fdecf06129081df06ba4de639b653\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 10:22:37.028672 containerd[1560]: time="2025-05-17T10:22:37.028605345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m52nb,Uid:f87cee3b-c08b-4ec0-a398-5f0dc405fb69,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec7ff61732dc41c68b5d634a77acd07d180220dbbc74c18918aedc033015b3c2\"" May 17 10:22:37.030967 kubelet[2684]: E0517 10:22:37.030915 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:37.031797 containerd[1560]: time="2025-05-17T10:22:37.031754850Z" level=info msg="Container 4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:37.037332 containerd[1560]: time="2025-05-17T10:22:37.037287555Z" level=info msg="CreateContainer within sandbox \"ec7ff61732dc41c68b5d634a77acd07d180220dbbc74c18918aedc033015b3c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 
17 10:22:37.045829 containerd[1560]: time="2025-05-17T10:22:37.045787322Z" level=info msg="CreateContainer within sandbox \"c453fa48db64e679a688911e88e6eff32a4fdecf06129081df06ba4de639b653\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af\"" May 17 10:22:37.046378 containerd[1560]: time="2025-05-17T10:22:37.046351806Z" level=info msg="StartContainer for \"4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af\"" May 17 10:22:37.047226 containerd[1560]: time="2025-05-17T10:22:37.047195765Z" level=info msg="connecting to shim 4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af" address="unix:///run/containerd/s/cc2a7a0c5cfd09b7ceac9ff652a1f8c6c0da3051f49191d7e38112d656dd30dd" protocol=ttrpc version=3 May 17 10:22:37.050452 containerd[1560]: time="2025-05-17T10:22:37.050408019Z" level=info msg="Container 9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc: CDI devices from CRI Config.CDIDevices: []" May 17 10:22:37.058790 containerd[1560]: time="2025-05-17T10:22:37.058740570Z" level=info msg="CreateContainer within sandbox \"ec7ff61732dc41c68b5d634a77acd07d180220dbbc74c18918aedc033015b3c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc\"" May 17 10:22:37.059527 containerd[1560]: time="2025-05-17T10:22:37.059308601Z" level=info msg="StartContainer for \"9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc\"" May 17 10:22:37.060900 containerd[1560]: time="2025-05-17T10:22:37.060871466Z" level=info msg="connecting to shim 9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc" address="unix:///run/containerd/s/b22757e3e90f81489ffac1892f07283c238b0eb866f5b04232ba152ab14ff68f" protocol=ttrpc version=3 May 17 10:22:37.072792 systemd[1]: Started cri-containerd-4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af.scope - 
libcontainer container 4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af. May 17 10:22:37.094685 systemd[1]: Started cri-containerd-9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc.scope - libcontainer container 9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc. May 17 10:22:37.126836 containerd[1560]: time="2025-05-17T10:22:37.126784129Z" level=info msg="StartContainer for \"4e7dbe47606675b96e22b699e52337e54700a3e860045d7ff46cc7dc97c988af\" returns successfully" May 17 10:22:37.134745 containerd[1560]: time="2025-05-17T10:22:37.134707029Z" level=info msg="StartContainer for \"9aee9708b509841e6d9005becf0a63c4ff6d6638cc1f1c34f39a5bbe601f58bc\" returns successfully" May 17 10:22:37.356644 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644). May 17 10:22:37.415166 kubelet[2684]: E0517 10:22:37.415129 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:37.419293 kubelet[2684]: E0517 10:22:37.419260 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:37.421946 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:37.423748 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:37.430977 systemd-logind[1545]: New session 8 of user core. May 17 10:22:37.438793 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 17 10:22:37.446552 kubelet[2684]: I0517 10:22:37.446417 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m52nb" podStartSLOduration=24.442934081 podStartE2EDuration="24.442934081s" podCreationTimestamp="2025-05-17 10:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:22:37.428740695 +0000 UTC m=+29.258049582" watchObservedRunningTime="2025-05-17 10:22:37.442934081 +0000 UTC m=+29.272242968" May 17 10:22:37.447287 kubelet[2684]: I0517 10:22:37.447130 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sx65c" podStartSLOduration=24.447109128 podStartE2EDuration="24.447109128s" podCreationTimestamp="2025-05-17 10:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:22:37.441473328 +0000 UTC m=+29.270782205" watchObservedRunningTime="2025-05-17 10:22:37.447109128 +0000 UTC m=+29.276418025" May 17 10:22:37.584821 sshd[4011]: Connection closed by 10.0.0.1 port 57644 May 17 10:22:37.585176 sshd-session[4008]: pam_unix(sshd:session): session closed for user core May 17 10:22:37.589439 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:57644.service: Deactivated successfully. May 17 10:22:37.591757 systemd[1]: session-8.scope: Deactivated successfully. May 17 10:22:37.592570 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. May 17 10:22:37.593828 systemd-logind[1545]: Removed session 8. May 17 10:22:37.878141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812465323.mount: Deactivated successfully. 
May 17 10:22:38.420938 kubelet[2684]: E0517 10:22:38.420890 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:38.421399 kubelet[2684]: E0517 10:22:38.420970 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:39.422620 kubelet[2684]: E0517 10:22:39.422576 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:39.423129 kubelet[2684]: E0517 10:22:39.422641 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:22:42.603271 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:57652.service - OpenSSH per-connection server daemon (10.0.0.1:57652). May 17 10:22:42.663725 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 57652 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:42.665569 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:42.670655 systemd-logind[1545]: New session 9 of user core. May 17 10:22:42.676705 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 10:22:42.803138 sshd[4039]: Connection closed by 10.0.0.1 port 57652 May 17 10:22:42.803627 sshd-session[4037]: pam_unix(sshd:session): session closed for user core May 17 10:22:42.807948 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:57652.service: Deactivated successfully. May 17 10:22:42.810785 systemd[1]: session-9.scope: Deactivated successfully. May 17 10:22:42.812903 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. 
May 17 10:22:42.815195 systemd-logind[1545]: Removed session 9. May 17 10:22:47.816830 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:44838.service - OpenSSH per-connection server daemon (10.0.0.1:44838). May 17 10:22:47.887127 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 44838 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:47.888655 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:47.893623 systemd-logind[1545]: New session 10 of user core. May 17 10:22:47.905685 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 10:22:48.030682 sshd[4057]: Connection closed by 10.0.0.1 port 44838 May 17 10:22:48.031036 sshd-session[4055]: pam_unix(sshd:session): session closed for user core May 17 10:22:48.034467 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:44838.service: Deactivated successfully. May 17 10:22:48.036682 systemd[1]: session-10.scope: Deactivated successfully. May 17 10:22:48.039170 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. May 17 10:22:48.040299 systemd-logind[1545]: Removed session 10. May 17 10:22:53.048697 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:44850.service - OpenSSH per-connection server daemon (10.0.0.1:44850). May 17 10:22:53.101915 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 44850 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:53.103569 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:53.108105 systemd-logind[1545]: New session 11 of user core. May 17 10:22:53.118658 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 17 10:22:53.233384 sshd[4073]: Connection closed by 10.0.0.1 port 44850 May 17 10:22:53.233952 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 17 10:22:53.250065 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:44850.service: Deactivated successfully. May 17 10:22:53.252600 systemd[1]: session-11.scope: Deactivated successfully. May 17 10:22:53.253675 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. May 17 10:22:53.257519 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:44856.service - OpenSSH per-connection server daemon (10.0.0.1:44856). May 17 10:22:53.258469 systemd-logind[1545]: Removed session 11. May 17 10:22:53.317505 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 44856 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:53.319205 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:53.323592 systemd-logind[1545]: New session 12 of user core. May 17 10:22:53.335615 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 10:22:53.508338 sshd[4089]: Connection closed by 10.0.0.1 port 44856 May 17 10:22:53.508751 sshd-session[4087]: pam_unix(sshd:session): session closed for user core May 17 10:22:53.523970 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:44856.service: Deactivated successfully. May 17 10:22:53.527915 systemd[1]: session-12.scope: Deactivated successfully. May 17 10:22:53.529019 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. May 17 10:22:53.533516 systemd-logind[1545]: Removed session 12. May 17 10:22:53.535260 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:44860.service - OpenSSH per-connection server daemon (10.0.0.1:44860). 
May 17 10:22:53.597167 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 44860 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:53.599239 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:53.604401 systemd-logind[1545]: New session 13 of user core. May 17 10:22:53.615642 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 10:22:53.732540 sshd[4102]: Connection closed by 10.0.0.1 port 44860 May 17 10:22:53.733099 sshd-session[4100]: pam_unix(sshd:session): session closed for user core May 17 10:22:53.738693 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:44860.service: Deactivated successfully. May 17 10:22:53.740967 systemd[1]: session-13.scope: Deactivated successfully. May 17 10:22:53.741753 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. May 17 10:22:53.743335 systemd-logind[1545]: Removed session 13. May 17 10:22:58.754619 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830). May 17 10:22:58.824217 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:22:58.826127 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:22:58.831543 systemd-logind[1545]: New session 14 of user core. May 17 10:22:58.843773 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 10:22:58.965684 sshd[4118]: Connection closed by 10.0.0.1 port 54830 May 17 10:22:58.966059 sshd-session[4116]: pam_unix(sshd:session): session closed for user core May 17 10:22:58.969692 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:54830.service: Deactivated successfully. May 17 10:22:58.972212 systemd[1]: session-14.scope: Deactivated successfully. May 17 10:22:58.974048 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. 
May 17 10:22:58.975485 systemd-logind[1545]: Removed session 14. May 17 10:23:03.981792 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838). May 17 10:23:04.033393 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:04.034968 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:04.039916 systemd-logind[1545]: New session 15 of user core. May 17 10:23:04.053631 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 10:23:04.165430 sshd[4134]: Connection closed by 10.0.0.1 port 54838 May 17 10:23:04.165761 sshd-session[4132]: pam_unix(sshd:session): session closed for user core May 17 10:23:04.169970 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:54838.service: Deactivated successfully. May 17 10:23:04.171980 systemd[1]: session-15.scope: Deactivated successfully. May 17 10:23:04.172712 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. May 17 10:23:04.173835 systemd-logind[1545]: Removed session 15. May 17 10:23:09.192320 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:35966.service - OpenSSH per-connection server daemon (10.0.0.1:35966). May 17 10:23:09.250451 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 35966 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:09.252135 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:09.257297 systemd-logind[1545]: New session 16 of user core. May 17 10:23:09.264687 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 17 10:23:09.391611 sshd[4152]: Connection closed by 10.0.0.1 port 35966 May 17 10:23:09.392052 sshd-session[4150]: pam_unix(sshd:session): session closed for user core May 17 10:23:09.408722 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:35966.service: Deactivated successfully. May 17 10:23:09.410875 systemd[1]: session-16.scope: Deactivated successfully. May 17 10:23:09.411885 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. May 17 10:23:09.415506 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:35976.service - OpenSSH per-connection server daemon (10.0.0.1:35976). May 17 10:23:09.416365 systemd-logind[1545]: Removed session 16. May 17 10:23:09.484151 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 35976 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:09.485769 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:09.490726 systemd-logind[1545]: New session 17 of user core. May 17 10:23:09.500681 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 10:23:09.903340 sshd[4167]: Connection closed by 10.0.0.1 port 35976 May 17 10:23:09.904031 sshd-session[4165]: pam_unix(sshd:session): session closed for user core May 17 10:23:09.917982 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:35976.service: Deactivated successfully. May 17 10:23:09.920679 systemd[1]: session-17.scope: Deactivated successfully. May 17 10:23:09.921716 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. May 17 10:23:09.926300 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:35988.service - OpenSSH per-connection server daemon (10.0.0.1:35988). May 17 10:23:09.927000 systemd-logind[1545]: Removed session 17. 
May 17 10:23:09.985460 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 35988 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:09.987542 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:09.993827 systemd-logind[1545]: New session 18 of user core. May 17 10:23:10.002799 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 10:23:10.987124 sshd[4181]: Connection closed by 10.0.0.1 port 35988 May 17 10:23:10.987957 sshd-session[4179]: pam_unix(sshd:session): session closed for user core May 17 10:23:11.001128 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:35988.service: Deactivated successfully. May 17 10:23:11.004144 systemd[1]: session-18.scope: Deactivated successfully. May 17 10:23:11.005456 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. May 17 10:23:11.011196 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998). May 17 10:23:11.012711 systemd-logind[1545]: Removed session 18. May 17 10:23:11.067100 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:11.068971 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:11.073592 systemd-logind[1545]: New session 19 of user core. May 17 10:23:11.086682 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 10:23:11.333981 sshd[4204]: Connection closed by 10.0.0.1 port 35998 May 17 10:23:11.334693 sshd-session[4202]: pam_unix(sshd:session): session closed for user core May 17 10:23:11.344594 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:35998.service: Deactivated successfully. May 17 10:23:11.346909 systemd[1]: session-19.scope: Deactivated successfully. May 17 10:23:11.347705 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. 
May 17 10:23:11.351832 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:36004.service - OpenSSH per-connection server daemon (10.0.0.1:36004). May 17 10:23:11.352631 systemd-logind[1545]: Removed session 19. May 17 10:23:11.408320 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 36004 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:11.410052 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:11.414781 systemd-logind[1545]: New session 20 of user core. May 17 10:23:11.424639 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 10:23:11.533032 sshd[4217]: Connection closed by 10.0.0.1 port 36004 May 17 10:23:11.533351 sshd-session[4215]: pam_unix(sshd:session): session closed for user core May 17 10:23:11.537904 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:36004.service: Deactivated successfully. May 17 10:23:11.540108 systemd[1]: session-20.scope: Deactivated successfully. May 17 10:23:11.541121 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. May 17 10:23:11.542450 systemd-logind[1545]: Removed session 20. May 17 10:23:16.547516 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:38986.service - OpenSSH per-connection server daemon (10.0.0.1:38986). May 17 10:23:16.615857 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 38986 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:16.617975 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:16.622867 systemd-logind[1545]: New session 21 of user core. May 17 10:23:16.631624 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 17 10:23:16.742235 sshd[4235]: Connection closed by 10.0.0.1 port 38986 May 17 10:23:16.742684 sshd-session[4233]: pam_unix(sshd:session): session closed for user core May 17 10:23:16.748077 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:38986.service: Deactivated successfully. May 17 10:23:16.750358 systemd[1]: session-21.scope: Deactivated successfully. May 17 10:23:16.751515 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. May 17 10:23:16.754418 systemd-logind[1545]: Removed session 21. May 17 10:23:21.756545 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:38992.service - OpenSSH per-connection server daemon (10.0.0.1:38992). May 17 10:23:21.816056 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 38992 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:21.817869 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:21.822588 systemd-logind[1545]: New session 22 of user core. May 17 10:23:21.830637 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 10:23:21.937568 sshd[4253]: Connection closed by 10.0.0.1 port 38992 May 17 10:23:21.937886 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 17 10:23:21.942163 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:38992.service: Deactivated successfully. May 17 10:23:21.944365 systemd[1]: session-22.scope: Deactivated successfully. May 17 10:23:21.945307 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. May 17 10:23:21.947640 systemd-logind[1545]: Removed session 22. May 17 10:23:26.950956 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:51838.service - OpenSSH per-connection server daemon (10.0.0.1:51838). 
May 17 10:23:27.007097 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 51838 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:27.008705 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:27.013384 systemd-logind[1545]: New session 23 of user core. May 17 10:23:27.020629 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 10:23:27.133017 sshd[4268]: Connection closed by 10.0.0.1 port 51838 May 17 10:23:27.133335 sshd-session[4266]: pam_unix(sshd:session): session closed for user core May 17 10:23:27.138193 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:51838.service: Deactivated successfully. May 17 10:23:27.140432 systemd[1]: session-23.scope: Deactivated successfully. May 17 10:23:27.141390 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. May 17 10:23:27.142751 systemd-logind[1545]: Removed session 23. May 17 10:23:27.280976 kubelet[2684]: E0517 10:23:27.280827 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:23:32.157862 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:51846.service - OpenSSH per-connection server daemon (10.0.0.1:51846). May 17 10:23:32.211411 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 51846 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:32.212982 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:32.217852 systemd-logind[1545]: New session 24 of user core. May 17 10:23:32.227695 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 17 10:23:32.347610 sshd[4284]: Connection closed by 10.0.0.1 port 51846 May 17 10:23:32.347956 sshd-session[4282]: pam_unix(sshd:session): session closed for user core May 17 10:23:32.363092 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:51846.service: Deactivated successfully. May 17 10:23:32.365424 systemd[1]: session-24.scope: Deactivated successfully. May 17 10:23:32.366385 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. May 17 10:23:32.369832 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:51856.service - OpenSSH per-connection server daemon (10.0.0.1:51856). May 17 10:23:32.370722 systemd-logind[1545]: Removed session 24. May 17 10:23:32.436012 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 51856 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:32.437998 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:32.443668 systemd-logind[1545]: New session 25 of user core. May 17 10:23:32.457640 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 17 10:23:33.281379 kubelet[2684]: E0517 10:23:33.281329 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 10:23:33.803912 containerd[1560]: time="2025-05-17T10:23:33.803839815Z" level=info msg="StopContainer for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" with timeout 30 (s)" May 17 10:23:33.827822 containerd[1560]: time="2025-05-17T10:23:33.827775244Z" level=info msg="Stop container \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" with signal terminated" May 17 10:23:33.838045 containerd[1560]: time="2025-05-17T10:23:33.837993064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" id:\"8411229c26da0fc50c65b80adc76aef9c68ebb618046d96876411ee480ea4ff2\" pid:4319 exited_at:{seconds:1747477413 nanos:837307556}" May 17 10:23:33.840893 systemd[1]: cri-containerd-70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2.scope: Deactivated successfully. 
May 17 10:23:33.844212 containerd[1560]: time="2025-05-17T10:23:33.844162567Z" level=info msg="received exit event container_id:\"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" id:\"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" pid:3385 exited_at:{seconds:1747477413 nanos:843868657}" May 17 10:23:33.844385 containerd[1560]: time="2025-05-17T10:23:33.844365916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" id:\"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" pid:3385 exited_at:{seconds:1747477413 nanos:843868657}" May 17 10:23:33.845066 containerd[1560]: time="2025-05-17T10:23:33.845020856Z" level=info msg="StopContainer for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" with timeout 2 (s)" May 17 10:23:33.845337 containerd[1560]: time="2025-05-17T10:23:33.845308024Z" level=info msg="Stop container \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" with signal terminated" May 17 10:23:33.848897 containerd[1560]: time="2025-05-17T10:23:33.848856245Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 10:23:33.854713 systemd-networkd[1492]: lxc_health: Link DOWN May 17 10:23:33.854727 systemd-networkd[1492]: lxc_health: Lost carrier May 17 10:23:33.874751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2-rootfs.mount: Deactivated successfully. May 17 10:23:33.876090 systemd[1]: cri-containerd-128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9.scope: Deactivated successfully. 
May 17 10:23:33.876623 systemd[1]: cri-containerd-128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9.scope: Consumed 7.077s CPU time, 122.7M memory peak, 184K read from disk, 13.3M written to disk. May 17 10:23:33.877239 containerd[1560]: time="2025-05-17T10:23:33.877065961Z" level=info msg="received exit event container_id:\"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" id:\"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" pid:3302 exited_at:{seconds:1747477413 nanos:876387687}" May 17 10:23:33.877344 containerd[1560]: time="2025-05-17T10:23:33.877189968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" id:\"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" pid:3302 exited_at:{seconds:1747477413 nanos:876387687}" May 17 10:23:33.898127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9-rootfs.mount: Deactivated successfully. May 17 10:23:33.901663 containerd[1560]: time="2025-05-17T10:23:33.901623878Z" level=info msg="StopContainer for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" returns successfully" May 17 10:23:33.902381 containerd[1560]: time="2025-05-17T10:23:33.902335927Z" level=info msg="StopPodSandbox for \"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\"" May 17 10:23:33.902476 containerd[1560]: time="2025-05-17T10:23:33.902426811Z" level=info msg="Container to stop \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 10:23:33.909651 systemd[1]: cri-containerd-6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc.scope: Deactivated successfully. 
May 17 10:23:33.913963 containerd[1560]: time="2025-05-17T10:23:33.913923050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" id:\"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" pid:2953 exit_status:137 exited_at:{seconds:1747477413 nanos:913655229}"
May 17 10:23:33.916903 containerd[1560]: time="2025-05-17T10:23:33.916855516Z" level=info msg="StopContainer for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" returns successfully"
May 17 10:23:33.917622 containerd[1560]: time="2025-05-17T10:23:33.917437176Z" level=info msg="StopPodSandbox for \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\""
May 17 10:23:33.917622 containerd[1560]: time="2025-05-17T10:23:33.917509464Z" level=info msg="Container to stop \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 10:23:33.917622 containerd[1560]: time="2025-05-17T10:23:33.917521989Z" level=info msg="Container to stop \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 10:23:33.917622 containerd[1560]: time="2025-05-17T10:23:33.917530675Z" level=info msg="Container to stop \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 10:23:33.917622 containerd[1560]: time="2025-05-17T10:23:33.917539091Z" level=info msg="Container to stop \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 10:23:33.917622 containerd[1560]: time="2025-05-17T10:23:33.917547807Z" level=info msg="Container to stop \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 10:23:33.924007 systemd[1]: cri-containerd-f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf.scope: Deactivated successfully.
May 17 10:23:33.943508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc-rootfs.mount: Deactivated successfully.
May 17 10:23:33.947773 containerd[1560]: time="2025-05-17T10:23:33.947737602Z" level=info msg="shim disconnected" id=6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc namespace=k8s.io
May 17 10:23:33.947773 containerd[1560]: time="2025-05-17T10:23:33.947769183Z" level=warning msg="cleaning up after shim disconnected" id=6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc namespace=k8s.io
May 17 10:23:33.949700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf-rootfs.mount: Deactivated successfully.
May 17 10:23:33.960647 containerd[1560]: time="2025-05-17T10:23:33.947776877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 10:23:33.960721 containerd[1560]: time="2025-05-17T10:23:33.951415842Z" level=info msg="shim disconnected" id=f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf namespace=k8s.io
May 17 10:23:33.960750 containerd[1560]: time="2025-05-17T10:23:33.960730609Z" level=warning msg="cleaning up after shim disconnected" id=f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf namespace=k8s.io
May 17 10:23:33.960775 containerd[1560]: time="2025-05-17T10:23:33.960740548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 10:23:33.983620 containerd[1560]: time="2025-05-17T10:23:33.983471738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" id:\"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" pid:2837 exit_status:137 exited_at:{seconds:1747477413 nanos:925157740}"
May 17 10:23:33.985471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc-shm.mount: Deactivated successfully.
May 17 10:23:33.985626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf-shm.mount: Deactivated successfully.
May 17 10:23:33.994046 containerd[1560]: time="2025-05-17T10:23:33.994007685Z" level=info msg="received exit event sandbox_id:\"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" exit_status:137 exited_at:{seconds:1747477413 nanos:925157740}"
May 17 10:23:33.994193 containerd[1560]: time="2025-05-17T10:23:33.994165837Z" level=info msg="received exit event sandbox_id:\"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" exit_status:137 exited_at:{seconds:1747477413 nanos:913655229}"
May 17 10:23:33.997752 containerd[1560]: time="2025-05-17T10:23:33.997718176Z" level=info msg="TearDown network for sandbox \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" successfully"
May 17 10:23:33.997849 containerd[1560]: time="2025-05-17T10:23:33.997829830Z" level=info msg="StopPodSandbox for \"f81dfc3600fcad304ac06993a4632193bc994215419e1ea7fe40d92953437fdf\" returns successfully"
May 17 10:23:33.998062 containerd[1560]: time="2025-05-17T10:23:33.997722534Z" level=info msg="TearDown network for sandbox \"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" successfully"
May 17 10:23:33.998210 containerd[1560]: time="2025-05-17T10:23:33.998179447Z" level=info msg="StopPodSandbox for \"6ec6667adaea646cb3b70796199b31d93d1bb15afca4de21d34600d1b59ab5dc\" returns successfully"
May 17 10:23:34.045686 kubelet[2684]: I0517 10:23:34.045640 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-xtables-lock\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.045686 kubelet[2684]: I0517 10:23:34.045686 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cni-path\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.045686 kubelet[2684]: I0517 10:23:34.045678 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 10:23:34.045921 kubelet[2684]: I0517 10:23:34.045701 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-kernel\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.045921 kubelet[2684]: I0517 10:23:34.045721 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cdb9171-a0a6-4938-ac79-0069b7567752-clustermesh-secrets\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.045921 kubelet[2684]: I0517 10:23:34.045723 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cni-path" (OuterVolumeSpecName: "cni-path") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 10:23:34.045921 kubelet[2684]: I0517 10:23:34.045737 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvrwp\" (UniqueName: \"kubernetes.io/projected/3968e703-1df7-40fe-9f67-e2bca1f2f27a-kube-api-access-fvrwp\") pod \"3968e703-1df7-40fe-9f67-e2bca1f2f27a\" (UID: \"3968e703-1df7-40fe-9f67-e2bca1f2f27a\") "
May 17 10:23:34.045921 kubelet[2684]: I0517 10:23:34.045741 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 10:23:34.046033 kubelet[2684]: I0517 10:23:34.045753 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-run\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.046033 kubelet[2684]: I0517 10:23:34.045766 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-hubble-tls\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.046033 kubelet[2684]: I0517 10:23:34.045778 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-bpf-maps\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") "
May 17 10:23:34.046033 kubelet[2684]: I0517 10:23:34.045792 2684
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3968e703-1df7-40fe-9f67-e2bca1f2f27a-cilium-config-path\") pod \"3968e703-1df7-40fe-9f67-e2bca1f2f27a\" (UID: \"3968e703-1df7-40fe-9f67-e2bca1f2f27a\") " May 17 10:23:34.046033 kubelet[2684]: I0517 10:23:34.045804 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-cgroup\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046033 kubelet[2684]: I0517 10:23:34.045822 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-config-path\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045838 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-net\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045851 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-hostproc\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045864 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-etc-cni-netd\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" 
(UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045881 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpjbp\" (UniqueName: \"kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-kube-api-access-qpjbp\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045894 2684 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-lib-modules\") pod \"2cdb9171-a0a6-4938-ac79-0069b7567752\" (UID: \"2cdb9171-a0a6-4938-ac79-0069b7567752\") " May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045919 2684 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.046162 kubelet[2684]: I0517 10:23:34.045928 2684 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cni-path\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.046309 kubelet[2684]: I0517 10:23:34.045936 2684 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.046309 kubelet[2684]: I0517 10:23:34.045958 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.046309 kubelet[2684]: I0517 10:23:34.046103 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.046309 kubelet[2684]: I0517 10:23:34.046161 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.046309 kubelet[2684]: I0517 10:23:34.046252 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-hostproc" (OuterVolumeSpecName: "hostproc") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.046750 kubelet[2684]: I0517 10:23:34.046718 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.048057 kubelet[2684]: I0517 10:23:34.048031 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.049509 kubelet[2684]: I0517 10:23:34.048664 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 10:23:34.050401 kubelet[2684]: I0517 10:23:34.050350 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-kube-api-access-qpjbp" (OuterVolumeSpecName: "kube-api-access-qpjbp") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "kube-api-access-qpjbp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 10:23:34.050626 kubelet[2684]: I0517 10:23:34.050596 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 10:23:34.050626 kubelet[2684]: I0517 10:23:34.050599 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdb9171-a0a6-4938-ac79-0069b7567752-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 10:23:34.051079 kubelet[2684]: I0517 10:23:34.051044 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3968e703-1df7-40fe-9f67-e2bca1f2f27a-kube-api-access-fvrwp" (OuterVolumeSpecName: "kube-api-access-fvrwp") pod "3968e703-1df7-40fe-9f67-e2bca1f2f27a" (UID: "3968e703-1df7-40fe-9f67-e2bca1f2f27a"). InnerVolumeSpecName "kube-api-access-fvrwp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 10:23:34.051575 kubelet[2684]: I0517 10:23:34.051481 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2cdb9171-a0a6-4938-ac79-0069b7567752" (UID: "2cdb9171-a0a6-4938-ac79-0069b7567752"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 10:23:34.052392 kubelet[2684]: I0517 10:23:34.052363 2684 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3968e703-1df7-40fe-9f67-e2bca1f2f27a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3968e703-1df7-40fe-9f67-e2bca1f2f27a" (UID: "3968e703-1df7-40fe-9f67-e2bca1f2f27a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146321 2684 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-lib-modules\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146350 2684 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-hostproc\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146359 2684 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146368 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qpjbp\" (UniqueName: \"kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-kube-api-access-qpjbp\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146380 2684 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cdb9171-a0a6-4938-ac79-0069b7567752-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146389 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvrwp\" (UniqueName: \"kubernetes.io/projected/3968e703-1df7-40fe-9f67-e2bca1f2f27a-kube-api-access-fvrwp\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: I0517 10:23:34.146396 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-run\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.146597 kubelet[2684]: 
I0517 10:23:34.146404 2684 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cdb9171-a0a6-4938-ac79-0069b7567752-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.147141 kubelet[2684]: I0517 10:23:34.146412 2684 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.147141 kubelet[2684]: I0517 10:23:34.146419 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3968e703-1df7-40fe-9f67-e2bca1f2f27a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.147141 kubelet[2684]: I0517 10:23:34.146426 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.147141 kubelet[2684]: I0517 10:23:34.146434 2684 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cdb9171-a0a6-4938-ac79-0069b7567752-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.147141 kubelet[2684]: I0517 10:23:34.146443 2684 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cdb9171-a0a6-4938-ac79-0069b7567752-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 17 10:23:34.291010 systemd[1]: Removed slice kubepods-burstable-pod2cdb9171_a0a6_4938_ac79_0069b7567752.slice - libcontainer container kubepods-burstable-pod2cdb9171_a0a6_4938_ac79_0069b7567752.slice. May 17 10:23:34.291137 systemd[1]: kubepods-burstable-pod2cdb9171_a0a6_4938_ac79_0069b7567752.slice: Consumed 7.197s CPU time, 123M memory peak, 192K read from disk, 13.3M written to disk. 
May 17 10:23:34.292551 systemd[1]: Removed slice kubepods-besteffort-pod3968e703_1df7_40fe_9f67_e2bca1f2f27a.slice - libcontainer container kubepods-besteffort-pod3968e703_1df7_40fe_9f67_e2bca1f2f27a.slice. May 17 10:23:34.532386 kubelet[2684]: I0517 10:23:34.532266 2684 scope.go:117] "RemoveContainer" containerID="70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2" May 17 10:23:34.534194 containerd[1560]: time="2025-05-17T10:23:34.534152907Z" level=info msg="RemoveContainer for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\"" May 17 10:23:34.543048 containerd[1560]: time="2025-05-17T10:23:34.542999753Z" level=info msg="RemoveContainer for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" returns successfully" May 17 10:23:34.543430 kubelet[2684]: I0517 10:23:34.543316 2684 scope.go:117] "RemoveContainer" containerID="70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2" May 17 10:23:34.550386 containerd[1560]: time="2025-05-17T10:23:34.544449189Z" level=error msg="ContainerStatus for \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\": not found" May 17 10:23:34.554093 kubelet[2684]: E0517 10:23:34.554046 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\": not found" containerID="70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2" May 17 10:23:34.554275 kubelet[2684]: I0517 10:23:34.554093 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2"} err="failed to get container status 
\"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\": rpc error: code = NotFound desc = an error occurred when try to find container \"70d64a0dbdafda44a72e65168c5d16486b037307189932b75f721545634f6ee2\": not found" May 17 10:23:34.554275 kubelet[2684]: I0517 10:23:34.554174 2684 scope.go:117] "RemoveContainer" containerID="128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9" May 17 10:23:34.555751 containerd[1560]: time="2025-05-17T10:23:34.555716622Z" level=info msg="RemoveContainer for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\"" May 17 10:23:34.561902 containerd[1560]: time="2025-05-17T10:23:34.561856614Z" level=info msg="RemoveContainer for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" returns successfully" May 17 10:23:34.562138 kubelet[2684]: I0517 10:23:34.562113 2684 scope.go:117] "RemoveContainer" containerID="03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce" May 17 10:23:34.565111 containerd[1560]: time="2025-05-17T10:23:34.565021592Z" level=info msg="RemoveContainer for \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\"" May 17 10:23:34.570022 containerd[1560]: time="2025-05-17T10:23:34.569969421Z" level=info msg="RemoveContainer for \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" returns successfully" May 17 10:23:34.570285 kubelet[2684]: I0517 10:23:34.570243 2684 scope.go:117] "RemoveContainer" containerID="044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9" May 17 10:23:34.572694 containerd[1560]: time="2025-05-17T10:23:34.572606922Z" level=info msg="RemoveContainer for \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\"" May 17 10:23:34.581623 containerd[1560]: time="2025-05-17T10:23:34.581557406Z" level=info msg="RemoveContainer for \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" returns successfully" May 17 10:23:34.581899 kubelet[2684]: I0517 10:23:34.581847 2684 
scope.go:117] "RemoveContainer" containerID="759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a" May 17 10:23:34.583510 containerd[1560]: time="2025-05-17T10:23:34.583449736Z" level=info msg="RemoveContainer for \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\"" May 17 10:23:34.587960 containerd[1560]: time="2025-05-17T10:23:34.587922949Z" level=info msg="RemoveContainer for \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" returns successfully" May 17 10:23:34.588125 kubelet[2684]: I0517 10:23:34.588096 2684 scope.go:117] "RemoveContainer" containerID="379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13" May 17 10:23:34.589574 containerd[1560]: time="2025-05-17T10:23:34.589526898Z" level=info msg="RemoveContainer for \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\"" May 17 10:23:34.593512 containerd[1560]: time="2025-05-17T10:23:34.593468378Z" level=info msg="RemoveContainer for \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" returns successfully" May 17 10:23:34.593746 kubelet[2684]: I0517 10:23:34.593701 2684 scope.go:117] "RemoveContainer" containerID="128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9" May 17 10:23:34.594000 containerd[1560]: time="2025-05-17T10:23:34.593943383Z" level=error msg="ContainerStatus for \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\": not found" May 17 10:23:34.594164 kubelet[2684]: E0517 10:23:34.594121 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\": not found" containerID="128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9" May 17 10:23:34.594241 
kubelet[2684]: I0517 10:23:34.594160 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9"} err="failed to get container status \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\": rpc error: code = NotFound desc = an error occurred when try to find container \"128d8c75f71f49a55dede452e2b6d3a8073cd449fc1e1a923e3d9fcc01b80ed9\": not found" May 17 10:23:34.594241 kubelet[2684]: I0517 10:23:34.594198 2684 scope.go:117] "RemoveContainer" containerID="03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce" May 17 10:23:34.594407 containerd[1560]: time="2025-05-17T10:23:34.594370919Z" level=error msg="ContainerStatus for \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\": not found" May 17 10:23:34.594564 kubelet[2684]: E0517 10:23:34.594528 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\": not found" containerID="03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce" May 17 10:23:34.594647 kubelet[2684]: I0517 10:23:34.594570 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce"} err="failed to get container status \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"03ef1712b8192d4cb6f3e5961931ec47ba884267d1f3b9b612d2a260400768ce\": not found" May 17 10:23:34.594647 kubelet[2684]: I0517 10:23:34.594605 2684 scope.go:117] "RemoveContainer" 
containerID="044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9" May 17 10:23:34.594829 containerd[1560]: time="2025-05-17T10:23:34.594786472Z" level=error msg="ContainerStatus for \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\": not found" May 17 10:23:34.594948 kubelet[2684]: E0517 10:23:34.594918 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\": not found" containerID="044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9" May 17 10:23:34.595007 kubelet[2684]: I0517 10:23:34.594944 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9"} err="failed to get container status \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"044305886b22b9239c2f7dd48206e7245e4321d52f405a8e00c0cc726a0844f9\": not found" May 17 10:23:34.595007 kubelet[2684]: I0517 10:23:34.594968 2684 scope.go:117] "RemoveContainer" containerID="759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a" May 17 10:23:34.595198 containerd[1560]: time="2025-05-17T10:23:34.595160615Z" level=error msg="ContainerStatus for \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\": not found" May 17 10:23:34.595333 kubelet[2684]: E0517 10:23:34.595309 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\": not found" containerID="759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a" May 17 10:23:34.595378 kubelet[2684]: I0517 10:23:34.595337 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a"} err="failed to get container status \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\": rpc error: code = NotFound desc = an error occurred when try to find container \"759fc7ef18bd6b5cfea3fbbe621a5c300a50646aee39593d86563c0ce829144a\": not found" May 17 10:23:34.595378 kubelet[2684]: I0517 10:23:34.595356 2684 scope.go:117] "RemoveContainer" containerID="379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13" May 17 10:23:34.595624 containerd[1560]: time="2025-05-17T10:23:34.595566961Z" level=error msg="ContainerStatus for \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\": not found" May 17 10:23:34.595770 kubelet[2684]: E0517 10:23:34.595739 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\": not found" containerID="379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13" May 17 10:23:34.595839 kubelet[2684]: I0517 10:23:34.595770 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13"} err="failed to get container status \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"379b7a20d60d06b30a4f8bde4c75e85143bb74ebd1b874725e64fa99a4e35a13\": not found" May 17 10:23:34.874522 systemd[1]: var-lib-kubelet-pods-3968e703\x2d1df7\x2d40fe\x2d9f67\x2de2bca1f2f27a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvrwp.mount: Deactivated successfully. May 17 10:23:34.874660 systemd[1]: var-lib-kubelet-pods-2cdb9171\x2da0a6\x2d4938\x2dac79\x2d0069b7567752-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqpjbp.mount: Deactivated successfully. May 17 10:23:34.874751 systemd[1]: var-lib-kubelet-pods-2cdb9171\x2da0a6\x2d4938\x2dac79\x2d0069b7567752-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 10:23:34.874833 systemd[1]: var-lib-kubelet-pods-2cdb9171\x2da0a6\x2d4938\x2dac79\x2d0069b7567752-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 10:23:35.769579 sshd[4299]: Connection closed by 10.0.0.1 port 51856 May 17 10:23:35.770090 sshd-session[4297]: pam_unix(sshd:session): session closed for user core May 17 10:23:35.784587 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:51856.service: Deactivated successfully. May 17 10:23:35.787059 systemd[1]: session-25.scope: Deactivated successfully. May 17 10:23:35.788219 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. May 17 10:23:35.792477 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). May 17 10:23:35.793137 systemd-logind[1545]: Removed session 25. May 17 10:23:35.854902 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:35.856561 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:35.861232 systemd-logind[1545]: New session 26 of user core. May 17 10:23:35.868648 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 17 10:23:36.284005 kubelet[2684]: I0517 10:23:36.283950 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdb9171-a0a6-4938-ac79-0069b7567752" path="/var/lib/kubelet/pods/2cdb9171-a0a6-4938-ac79-0069b7567752/volumes" May 17 10:23:36.284803 kubelet[2684]: I0517 10:23:36.284773 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3968e703-1df7-40fe-9f67-e2bca1f2f27a" path="/var/lib/kubelet/pods/3968e703-1df7-40fe-9f67-e2bca1f2f27a/volumes" May 17 10:23:36.372304 sshd[4448]: Connection closed by 10.0.0.1 port 51864 May 17 10:23:36.373770 sshd-session[4446]: pam_unix(sshd:session): session closed for user core May 17 10:23:36.384581 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:51864.service: Deactivated successfully. May 17 10:23:36.388184 systemd[1]: session-26.scope: Deactivated successfully. May 17 10:23:36.389150 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit. May 17 10:23:36.392154 kubelet[2684]: I0517 10:23:36.392028 2684 memory_manager.go:355] "RemoveStaleState removing state" podUID="2cdb9171-a0a6-4938-ac79-0069b7567752" containerName="cilium-agent" May 17 10:23:36.392154 kubelet[2684]: I0517 10:23:36.392054 2684 memory_manager.go:355] "RemoveStaleState removing state" podUID="3968e703-1df7-40fe-9f67-e2bca1f2f27a" containerName="cilium-operator" May 17 10:23:36.396814 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:51866.service - OpenSSH per-connection server daemon (10.0.0.1:51866). May 17 10:23:36.399648 systemd-logind[1545]: Removed session 26. May 17 10:23:36.412521 systemd[1]: Created slice kubepods-burstable-pod204873d5_145f_4e31_b619_dfcaef5c44c8.slice - libcontainer container kubepods-burstable-pod204873d5_145f_4e31_b619_dfcaef5c44c8.slice. 
May 17 10:23:36.455968 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE May 17 10:23:36.457803 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 10:23:36.462408 systemd-logind[1545]: New session 27 of user core. May 17 10:23:36.481659 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 10:23:36.533716 sshd[4462]: Connection closed by 10.0.0.1 port 51866 May 17 10:23:36.534010 sshd-session[4460]: pam_unix(sshd:session): session closed for user core May 17 10:23:36.546791 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:51866.service: Deactivated successfully. May 17 10:23:36.549128 systemd[1]: session-27.scope: Deactivated successfully. May 17 10:23:36.549947 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit. May 17 10:23:36.554075 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:40978.service - OpenSSH per-connection server daemon (10.0.0.1:40978). May 17 10:23:36.554975 systemd-logind[1545]: Removed session 27. 
May 17 10:23:36.561052 kubelet[2684]: I0517 10:23:36.561012 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-bpf-maps\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561052 kubelet[2684]: I0517 10:23:36.561049 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-xtables-lock\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561052 kubelet[2684]: I0517 10:23:36.561063 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-host-proc-sys-net\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561227 kubelet[2684]: I0517 10:23:36.561080 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-host-proc-sys-kernel\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561227 kubelet[2684]: I0517 10:23:36.561097 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-cilium-run\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561227 kubelet[2684]: I0517 10:23:36.561111 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-lib-modules\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561227 kubelet[2684]: I0517 10:23:36.561124 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk4j\" (UniqueName: \"kubernetes.io/projected/204873d5-145f-4e31-b619-dfcaef5c44c8-kube-api-access-prk4j\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561227 kubelet[2684]: I0517 10:23:36.561138 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-cni-path\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561227 kubelet[2684]: I0517 10:23:36.561153 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-hostproc\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561410 kubelet[2684]: I0517 10:23:36.561167 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-cilium-cgroup\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561410 kubelet[2684]: I0517 10:23:36.561182 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/204873d5-145f-4e31-b619-dfcaef5c44c8-hubble-tls\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561410 kubelet[2684]: I0517 10:23:36.561197 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/204873d5-145f-4e31-b619-dfcaef5c44c8-etc-cni-netd\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561410 kubelet[2684]: I0517 10:23:36.561224 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/204873d5-145f-4e31-b619-dfcaef5c44c8-clustermesh-secrets\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561410 kubelet[2684]: I0517 10:23:36.561288 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/204873d5-145f-4e31-b619-dfcaef5c44c8-cilium-config-path\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.561410 kubelet[2684]: I0517 10:23:36.561329 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/204873d5-145f-4e31-b619-dfcaef5c44c8-cilium-ipsec-secrets\") pod \"cilium-4blmf\" (UID: \"204873d5-145f-4e31-b619-dfcaef5c44c8\") " pod="kube-system/cilium-4blmf"
May 17 10:23:36.606765 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 40978 ssh2: RSA SHA256:fqd0Zw1c0TOc8VjEN/TY5HphIWm94006yyZoyFyzIuE
May 17 10:23:36.608565 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 10:23:36.613517 systemd-logind[1545]: New session 28 of user core.
May 17 10:23:36.630646 systemd[1]: Started session-28.scope - Session 28 of User core.
May 17 10:23:36.717072 kubelet[2684]: E0517 10:23:36.716744 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:36.717521 containerd[1560]: time="2025-05-17T10:23:36.717443052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4blmf,Uid:204873d5-145f-4e31-b619-dfcaef5c44c8,Namespace:kube-system,Attempt:0,}"
May 17 10:23:36.745798 containerd[1560]: time="2025-05-17T10:23:36.745742782Z" level=info msg="connecting to shim 98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6" address="unix:///run/containerd/s/3ffa85dc1cdbab7c39f6db5b880c73489f6b179d4db285edde2f4bab9da0660b" namespace=k8s.io protocol=ttrpc version=3
May 17 10:23:36.784836 systemd[1]: Started cri-containerd-98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6.scope - libcontainer container 98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6.
May 17 10:23:36.813214 containerd[1560]: time="2025-05-17T10:23:36.813157937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4blmf,Uid:204873d5-145f-4e31-b619-dfcaef5c44c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\""
May 17 10:23:36.813920 kubelet[2684]: E0517 10:23:36.813884 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:36.815679 containerd[1560]: time="2025-05-17T10:23:36.815639848Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 10:23:36.823026 containerd[1560]: time="2025-05-17T10:23:36.822986351Z" level=info msg="Container 09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367: CDI devices from CRI Config.CDIDevices: []"
May 17 10:23:36.834954 containerd[1560]: time="2025-05-17T10:23:36.834847618Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\""
May 17 10:23:36.836230 containerd[1560]: time="2025-05-17T10:23:36.835366046Z" level=info msg="StartContainer for \"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\""
May 17 10:23:36.836382 containerd[1560]: time="2025-05-17T10:23:36.836328490Z" level=info msg="connecting to shim 09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367" address="unix:///run/containerd/s/3ffa85dc1cdbab7c39f6db5b880c73489f6b179d4db285edde2f4bab9da0660b" protocol=ttrpc version=3
May 17 10:23:36.860633 systemd[1]: Started cri-containerd-09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367.scope - libcontainer container 09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367.
May 17 10:23:36.891279 containerd[1560]: time="2025-05-17T10:23:36.891227009Z" level=info msg="StartContainer for \"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\" returns successfully"
May 17 10:23:36.900933 systemd[1]: cri-containerd-09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367.scope: Deactivated successfully.
May 17 10:23:36.902212 containerd[1560]: time="2025-05-17T10:23:36.902174906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\" id:\"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\" pid:4542 exited_at:{seconds:1747477416 nanos:901659623}"
May 17 10:23:36.902318 containerd[1560]: time="2025-05-17T10:23:36.902272672Z" level=info msg="received exit event container_id:\"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\" id:\"09a3fe7093f8543a0c6f26bf35b8a87749d389ff1688460c76e0fd008da03367\" pid:4542 exited_at:{seconds:1747477416 nanos:901659623}"
May 17 10:23:37.546400 kubelet[2684]: E0517 10:23:37.546363 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:37.548767 containerd[1560]: time="2025-05-17T10:23:37.548259009Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 10:23:37.640338 containerd[1560]: time="2025-05-17T10:23:37.640261910Z" level=info msg="Container c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583: CDI devices from CRI Config.CDIDevices: []"
May 17 10:23:37.648744 containerd[1560]: time="2025-05-17T10:23:37.648689097Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\""
May 17 10:23:37.649516 containerd[1560]: time="2025-05-17T10:23:37.649440988Z" level=info msg="StartContainer for \"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\""
May 17 10:23:37.650630 containerd[1560]: time="2025-05-17T10:23:37.650575270Z" level=info msg="connecting to shim c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583" address="unix:///run/containerd/s/3ffa85dc1cdbab7c39f6db5b880c73489f6b179d4db285edde2f4bab9da0660b" protocol=ttrpc version=3
May 17 10:23:37.688893 systemd[1]: Started cri-containerd-c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583.scope - libcontainer container c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583.
May 17 10:23:37.723085 containerd[1560]: time="2025-05-17T10:23:37.723030096Z" level=info msg="StartContainer for \"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\" returns successfully"
May 17 10:23:37.730463 systemd[1]: cri-containerd-c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583.scope: Deactivated successfully.
May 17 10:23:37.731267 containerd[1560]: time="2025-05-17T10:23:37.731203119Z" level=info msg="received exit event container_id:\"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\" id:\"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\" pid:4588 exited_at:{seconds:1747477417 nanos:730920429}"
May 17 10:23:37.732185 containerd[1560]: time="2025-05-17T10:23:37.732136196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\" id:\"c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583\" pid:4588 exited_at:{seconds:1747477417 nanos:730920429}"
May 17 10:23:37.756793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c28d5ad36a814e6f1555f8ca90a775f4b58c37535d4c5d332a38eaa0778c5583-rootfs.mount: Deactivated successfully.
May 17 10:23:38.357044 kubelet[2684]: E0517 10:23:38.356995 2684 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 10:23:38.550483 kubelet[2684]: E0517 10:23:38.550445 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:38.552684 containerd[1560]: time="2025-05-17T10:23:38.552635257Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 10:23:38.575585 containerd[1560]: time="2025-05-17T10:23:38.573250399Z" level=info msg="Container 7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829: CDI devices from CRI Config.CDIDevices: []"
May 17 10:23:38.584327 containerd[1560]: time="2025-05-17T10:23:38.584252694Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\""
May 17 10:23:38.584979 containerd[1560]: time="2025-05-17T10:23:38.584933941Z" level=info msg="StartContainer for \"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\""
May 17 10:23:38.587050 containerd[1560]: time="2025-05-17T10:23:38.586988765Z" level=info msg="connecting to shim 7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829" address="unix:///run/containerd/s/3ffa85dc1cdbab7c39f6db5b880c73489f6b179d4db285edde2f4bab9da0660b" protocol=ttrpc version=3
May 17 10:23:38.616877 systemd[1]: Started cri-containerd-7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829.scope - libcontainer container 7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829.
May 17 10:23:38.666909 systemd[1]: cri-containerd-7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829.scope: Deactivated successfully.
May 17 10:23:38.670915 containerd[1560]: time="2025-05-17T10:23:38.670752137Z" level=info msg="received exit event container_id:\"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\" id:\"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\" pid:4632 exited_at:{seconds:1747477418 nanos:669409099}"
May 17 10:23:38.670915 containerd[1560]: time="2025-05-17T10:23:38.670810708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\" id:\"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\" pid:4632 exited_at:{seconds:1747477418 nanos:669409099}"
May 17 10:23:38.690747 containerd[1560]: time="2025-05-17T10:23:38.690676233Z" level=info msg="StartContainer for \"7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829\" returns successfully"
May 17 10:23:38.717460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7461a9e371d53e2f1e8be2229a9e42ce772dfe973b16a5f5e2acc65a67986829-rootfs.mount: Deactivated successfully.
May 17 10:23:39.556314 kubelet[2684]: E0517 10:23:39.556275 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:39.558376 containerd[1560]: time="2025-05-17T10:23:39.558329803Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 10:23:39.566896 containerd[1560]: time="2025-05-17T10:23:39.566758230Z" level=info msg="Container 555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328: CDI devices from CRI Config.CDIDevices: []"
May 17 10:23:39.577744 containerd[1560]: time="2025-05-17T10:23:39.577675283Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\""
May 17 10:23:39.578412 containerd[1560]: time="2025-05-17T10:23:39.578338356Z" level=info msg="StartContainer for \"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\""
May 17 10:23:39.579412 containerd[1560]: time="2025-05-17T10:23:39.579388686Z" level=info msg="connecting to shim 555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328" address="unix:///run/containerd/s/3ffa85dc1cdbab7c39f6db5b880c73489f6b179d4db285edde2f4bab9da0660b" protocol=ttrpc version=3
May 17 10:23:39.600642 systemd[1]: Started cri-containerd-555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328.scope - libcontainer container 555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328.
May 17 10:23:39.626678 systemd[1]: cri-containerd-555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328.scope: Deactivated successfully.
May 17 10:23:39.627958 containerd[1560]: time="2025-05-17T10:23:39.627913340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\" id:\"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\" pid:4670 exited_at:{seconds:1747477419 nanos:627534378}"
May 17 10:23:39.629077 containerd[1560]: time="2025-05-17T10:23:39.629039384Z" level=info msg="received exit event container_id:\"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\" id:\"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\" pid:4670 exited_at:{seconds:1747477419 nanos:627534378}"
May 17 10:23:39.637085 containerd[1560]: time="2025-05-17T10:23:39.637022782Z" level=info msg="StartContainer for \"555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328\" returns successfully"
May 17 10:23:39.650062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-555312cef38a8c654fa9e9373a5a56880255d4bc737859a7af90cb8196a98328-rootfs.mount: Deactivated successfully.
May 17 10:23:40.126216 kubelet[2684]: I0517 10:23:40.126119 2684 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T10:23:40Z","lastTransitionTime":"2025-05-17T10:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 10:23:40.284583 kubelet[2684]: E0517 10:23:40.284408 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:40.561843 kubelet[2684]: E0517 10:23:40.561801 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:40.563482 containerd[1560]: time="2025-05-17T10:23:40.563419375Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 10:23:40.576565 containerd[1560]: time="2025-05-17T10:23:40.575831378Z" level=info msg="Container ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a: CDI devices from CRI Config.CDIDevices: []"
May 17 10:23:40.585419 containerd[1560]: time="2025-05-17T10:23:40.585365462Z" level=info msg="CreateContainer within sandbox \"98b547a0da85118df5a525e5ec301c19629ea9b8688197050a3951e65fc0e4c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\""
May 17 10:23:40.586011 containerd[1560]: time="2025-05-17T10:23:40.585943131Z" level=info msg="StartContainer for \"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\""
May 17 10:23:40.587130 containerd[1560]: time="2025-05-17T10:23:40.587090545Z" level=info msg="connecting to shim ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a" address="unix:///run/containerd/s/3ffa85dc1cdbab7c39f6db5b880c73489f6b179d4db285edde2f4bab9da0660b" protocol=ttrpc version=3
May 17 10:23:40.613748 systemd[1]: Started cri-containerd-ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a.scope - libcontainer container ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a.
May 17 10:23:40.652015 containerd[1560]: time="2025-05-17T10:23:40.651953711Z" level=info msg="StartContainer for \"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\" returns successfully"
May 17 10:23:40.718239 containerd[1560]: time="2025-05-17T10:23:40.718187215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\" id:\"db9f852c3dfa2fe7fb2c4ff9973f1cfe9b20e02cb6b08ff1dd5aba260361a009\" pid:4740 exited_at:{seconds:1747477420 nanos:717840254}"
May 17 10:23:41.093529 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 17 10:23:41.570912 kubelet[2684]: E0517 10:23:41.570855 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:42.281108 kubelet[2684]: E0517 10:23:42.281057 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:42.717871 kubelet[2684]: E0517 10:23:42.717789 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:43.045214 containerd[1560]: time="2025-05-17T10:23:43.045087331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\" id:\"9af093d1427e2c60f866985fde89a1edf1dd9a4dcff248a6dcd6806977505278\" pid:4903 exit_status:1 exited_at:{seconds:1747477423 nanos:44684295}"
May 17 10:23:43.281376 kubelet[2684]: E0517 10:23:43.281329 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:44.287242 systemd-networkd[1492]: lxc_health: Link UP
May 17 10:23:44.289700 systemd-networkd[1492]: lxc_health: Gained carrier
May 17 10:23:44.718970 kubelet[2684]: E0517 10:23:44.718687 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:44.737255 kubelet[2684]: I0517 10:23:44.736932 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4blmf" podStartSLOduration=8.736909354 podStartE2EDuration="8.736909354s" podCreationTimestamp="2025-05-17 10:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 10:23:41.584862531 +0000 UTC m=+93.414171408" watchObservedRunningTime="2025-05-17 10:23:44.736909354 +0000 UTC m=+96.566218221"
May 17 10:23:45.167401 containerd[1560]: time="2025-05-17T10:23:45.167235839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\" id:\"697d7cbd94efeb97bbbb09a734a6c436a7432d5e103a438d5eb719399a5c7809\" pid:5276 exited_at:{seconds:1747477425 nanos:166587317}"
May 17 10:23:45.580201 kubelet[2684]: E0517 10:23:45.580137 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:45.847984 systemd-networkd[1492]: lxc_health: Gained IPv6LL
May 17 10:23:46.582215 kubelet[2684]: E0517 10:23:46.582164 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 10:23:47.259076 containerd[1560]: time="2025-05-17T10:23:47.259021068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\" id:\"2a9c97572c6fa764ac33002d414e48a731c8e3932139ddb7eb92a5653d15a5df\" pid:5313 exited_at:{seconds:1747477427 nanos:258387585}"
May 17 10:23:49.362264 containerd[1560]: time="2025-05-17T10:23:49.362215583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffaa2e291696666cb731dafcaf4b2a072e2397e01af8428ff8f18566036b2f5a\" id:\"72919f3e5f1cc701e6b90473ed9888e8c5e1d803bafc6ba4374ebbb8deef4f73\" pid:5344 exited_at:{seconds:1747477429 nanos:361723640}"
May 17 10:23:49.368345 sshd[4472]: Connection closed by 10.0.0.1 port 40978
May 17 10:23:49.368834 sshd-session[4469]: pam_unix(sshd:session): session closed for user core
May 17 10:23:49.373217 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:40978.service: Deactivated successfully.
May 17 10:23:49.375211 systemd[1]: session-28.scope: Deactivated successfully.
May 17 10:23:49.376133 systemd-logind[1545]: Session 28 logged out. Waiting for processes to exit.
May 17 10:23:49.377942 systemd-logind[1545]: Removed session 28.