May 14 18:02:39.839344 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:02:39.839380 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:02:39.839395 kernel: BIOS-provided physical RAM map:
May 14 18:02:39.839404 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:02:39.839413 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 18:02:39.839422 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 18:02:39.839433 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 18:02:39.839442 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 18:02:39.839454 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 18:02:39.839463 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 18:02:39.839472 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 18:02:39.839481 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 18:02:39.839489 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 18:02:39.839498 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 18:02:39.839526 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 18:02:39.839537 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 18:02:39.839546 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 18:02:39.839556 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 18:02:39.839566 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 18:02:39.839575 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 18:02:39.839585 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 18:02:39.839594 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 18:02:39.839604 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 18:02:39.839613 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:02:39.839623 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 18:02:39.839635 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:02:39.839644 kernel: NX (Execute Disable) protection: active
May 14 18:02:39.839654 kernel: APIC: Static calls initialized
May 14 18:02:39.839663 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 14 18:02:39.839673 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 14 18:02:39.839683 kernel: extended physical RAM map:
May 14 18:02:39.839692 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:02:39.839702 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 18:02:39.839712 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 18:02:39.839722 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 18:02:39.839731 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 18:02:39.839744 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 18:02:39.839753 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 18:02:39.839763 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 14 18:02:39.839773 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 14 18:02:39.839787 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 14 18:02:39.839797 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 14 18:02:39.839809 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 14 18:02:39.839820 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 18:02:39.839830 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 18:02:39.839840 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 18:02:39.839850 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 18:02:39.839861 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 18:02:39.839871 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 18:02:39.839881 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 18:02:39.839891 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 18:02:39.839903 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 18:02:39.839913 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 18:02:39.839923 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 18:02:39.839934 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 18:02:39.839944 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:02:39.839954 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 18:02:39.839964 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:02:39.839974 kernel: efi: EFI v2.7 by EDK II
May 14 18:02:39.839984 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 14 18:02:39.839994 kernel: random: crng init done
May 14 18:02:39.840005 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 18:02:39.840015 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 18:02:39.840028 kernel: secureboot: Secure boot disabled
May 14 18:02:39.840038 kernel: SMBIOS 2.8 present.
May 14 18:02:39.840048 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 18:02:39.840058 kernel: DMI: Memory slots populated: 1/1
May 14 18:02:39.840068 kernel: Hypervisor detected: KVM
May 14 18:02:39.840102 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:02:39.840112 kernel: kvm-clock: using sched offset of 3745865709 cycles
May 14 18:02:39.840123 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:02:39.840134 kernel: tsc: Detected 2794.746 MHz processor
May 14 18:02:39.840144 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:02:39.840155 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:02:39.840168 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 18:02:39.840179 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 18:02:39.840189 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:02:39.840200 kernel: Using GB pages for direct mapping
May 14 18:02:39.840211 kernel: ACPI: Early table checksum verification disabled
May 14 18:02:39.840221 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 18:02:39.840232 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 18:02:39.840243 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:39.840253 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:39.840266 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 18:02:39.840277 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:39.840287 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:39.840298 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:39.840308 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:02:39.840319 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 18:02:39.840329 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 18:02:39.840340 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 18:02:39.840352 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 18:02:39.840363 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 18:02:39.840373 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 18:02:39.840384 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 18:02:39.840394 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 18:02:39.840404 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 18:02:39.840414 kernel: No NUMA configuration found
May 14 18:02:39.840425 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 18:02:39.840435 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 14 18:02:39.840448 kernel: Zone ranges:
May 14 18:02:39.840459 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:02:39.840469 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 18:02:39.840480 kernel: Normal empty
May 14 18:02:39.840490 kernel: Device empty
May 14 18:02:39.840500 kernel: Movable zone start for each node
May 14 18:02:39.840522 kernel: Early memory node ranges
May 14 18:02:39.840533 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 18:02:39.840543 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 18:02:39.840553 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 18:02:39.840566 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 18:02:39.840577 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 18:02:39.840587 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 18:02:39.840597 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 14 18:02:39.840608 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 14 18:02:39.840618 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 18:02:39.840629 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:02:39.840640 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 18:02:39.840660 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 18:02:39.840671 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:02:39.840682 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 18:02:39.840692 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 18:02:39.840706 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 18:02:39.840717 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 18:02:39.840728 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 18:02:39.840739 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:02:39.840750 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:02:39.840763 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:02:39.840774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:02:39.840785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:02:39.840796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:02:39.840807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:02:39.840818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:02:39.840829 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:02:39.840840 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:02:39.840850 kernel: TSC deadline timer available
May 14 18:02:39.840864 kernel: CPU topo: Max. logical packages: 1
May 14 18:02:39.840874 kernel: CPU topo: Max. logical dies: 1
May 14 18:02:39.840885 kernel: CPU topo: Max. dies per package: 1
May 14 18:02:39.840896 kernel: CPU topo: Max. threads per core: 1
May 14 18:02:39.840906 kernel: CPU topo: Num. cores per package: 4
May 14 18:02:39.840917 kernel: CPU topo: Num. threads per package: 4
May 14 18:02:39.840928 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 14 18:02:39.840939 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:02:39.840950 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 18:02:39.840960 kernel: kvm-guest: setup PV sched yield
May 14 18:02:39.840974 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 18:02:39.840985 kernel: Booting paravirtualized kernel on KVM
May 14 18:02:39.840996 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:02:39.841007 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 18:02:39.841018 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 14 18:02:39.841029 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 14 18:02:39.841041 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 18:02:39.841051 kernel: kvm-guest: PV spinlocks enabled
May 14 18:02:39.841062 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 18:02:39.841114 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:02:39.841135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:02:39.841146 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:02:39.841157 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:02:39.841168 kernel: Fallback order for Node 0: 0
May 14 18:02:39.841179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 14 18:02:39.841190 kernel: Policy zone: DMA32
May 14 18:02:39.841201 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:02:39.841215 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 18:02:39.841226 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:02:39.841237 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:02:39.841248 kernel: Dynamic Preempt: voluntary
May 14 18:02:39.841259 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:02:39.841270 kernel: rcu: RCU event tracing is enabled.
May 14 18:02:39.841281 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 18:02:39.841292 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:02:39.841303 kernel: Rude variant of Tasks RCU enabled.
May 14 18:02:39.841317 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:02:39.841328 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:02:39.841339 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 18:02:39.841350 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:02:39.841361 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:02:39.841372 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:02:39.841383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 18:02:39.841394 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:02:39.841405 kernel: Console: colour dummy device 80x25
May 14 18:02:39.841418 kernel: printk: legacy console [ttyS0] enabled
May 14 18:02:39.841429 kernel: ACPI: Core revision 20240827
May 14 18:02:39.841440 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:02:39.841451 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:02:39.841462 kernel: x2apic enabled
May 14 18:02:39.841473 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:02:39.841484 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 18:02:39.841495 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 18:02:39.841506 kernel: kvm-guest: setup PV IPIs
May 14 18:02:39.841528 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:02:39.841539 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 14 18:02:39.841551 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 14 18:02:39.841562 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 18:02:39.841573 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 18:02:39.841584 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 18:02:39.841595 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:02:39.841606 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:02:39.841617 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:02:39.841631 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:02:39.841642 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 18:02:39.841652 kernel: RETBleed: Mitigation: untrained return thunk
May 14 18:02:39.841663 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:02:39.841675 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:02:39.841685 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 18:02:39.841697 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 18:02:39.841709 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 18:02:39.841722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:02:39.841733 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:02:39.841744 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:02:39.841756 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:02:39.841767 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 18:02:39.841778 kernel: Freeing SMP alternatives memory: 32K
May 14 18:02:39.841789 kernel: pid_max: default: 32768 minimum: 301
May 14 18:02:39.841799 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:02:39.841810 kernel: landlock: Up and running.
May 14 18:02:39.841824 kernel: SELinux: Initializing.
May 14 18:02:39.841835 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:02:39.841846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:02:39.841857 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 18:02:39.841868 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 18:02:39.841879 kernel: ... version: 0
May 14 18:02:39.841889 kernel: ... bit width: 48
May 14 18:02:39.841900 kernel: ... generic registers: 6
May 14 18:02:39.841911 kernel: ... value mask: 0000ffffffffffff
May 14 18:02:39.841924 kernel: ... max period: 00007fffffffffff
May 14 18:02:39.841935 kernel: ... fixed-purpose events: 0
May 14 18:02:39.841946 kernel: ... event mask: 000000000000003f
May 14 18:02:39.841957 kernel: signal: max sigframe size: 1776
May 14 18:02:39.841968 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:02:39.841979 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:02:39.841990 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:02:39.842001 kernel: smp: Bringing up secondary CPUs ...
May 14 18:02:39.842012 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:02:39.842023 kernel: .... node #0, CPUs: #1 #2 #3
May 14 18:02:39.842036 kernel: smp: Brought up 1 node, 4 CPUs
May 14 18:02:39.842047 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 14 18:02:39.842059 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 137196K reserved, 0K cma-reserved)
May 14 18:02:39.842070 kernel: devtmpfs: initialized
May 14 18:02:39.842094 kernel: x86/mm: Memory block size: 128MB
May 14 18:02:39.842105 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 18:02:39.842127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 18:02:39.842138 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 18:02:39.842153 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 18:02:39.842164 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 14 18:02:39.842175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 18:02:39.842186 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:02:39.842197 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 18:02:39.842208 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:02:39.842219 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:02:39.842230 kernel: audit: initializing netlink subsys (disabled)
May 14 18:02:39.842241 kernel: audit: type=2000 audit(1747245757.121:1): state=initialized audit_enabled=0 res=1
May 14 18:02:39.842254 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:02:39.842265 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:02:39.842276 kernel: cpuidle: using governor menu
May 14 18:02:39.842287 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:02:39.842299 kernel: dca service started, version 1.12.1
May 14 18:02:39.842312 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 14 18:02:39.842325 kernel: PCI: Using configuration type 1 for base access
May 14 18:02:39.842336 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:02:39.842347 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:02:39.842361 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:02:39.842372 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:02:39.842383 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:02:39.842393 kernel: ACPI: Added _OSI(Module Device)
May 14 18:02:39.842404 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:02:39.842415 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:02:39.842426 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:02:39.842437 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:02:39.842448 kernel: ACPI: Interpreter enabled
May 14 18:02:39.842461 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 18:02:39.842472 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:02:39.842483 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:02:39.842494 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:02:39.842505 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 18:02:39.842525 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:02:39.842765 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:02:39.842917 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 18:02:39.844538 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 18:02:39.844554 kernel: PCI host bridge to bus 0000:00
May 14 18:02:39.844687 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:02:39.844794 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:02:39.844903 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:02:39.845007 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 18:02:39.845125 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 18:02:39.845234 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 18:02:39.845344 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:02:39.845495 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 14 18:02:39.845656 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 14 18:02:39.845785 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 14 18:02:39.845902 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 14 18:02:39.846019 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 14 18:02:39.846205 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:02:39.846349 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:02:39.846465 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 14 18:02:39.846591 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 14 18:02:39.846706 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 18:02:39.846829 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:02:39.846949 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 14 18:02:39.847092 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 14 18:02:39.847212 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 18:02:39.847362 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:02:39.847483 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 14 18:02:39.847613 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 14 18:02:39.847729 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 18:02:39.847848 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 14 18:02:39.847973 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 14 18:02:39.848103 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 18:02:39.848226 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 14 18:02:39.848339 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 14 18:02:39.848451 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 14 18:02:39.848589 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 14 18:02:39.848704 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 14 18:02:39.848715 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:02:39.848723 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:02:39.848731 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:02:39.848739 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:02:39.848747 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 18:02:39.848755 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 18:02:39.848766 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 18:02:39.848773 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 18:02:39.848781 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 18:02:39.848789 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 18:02:39.848797 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 18:02:39.848804 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 18:02:39.848812 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 18:02:39.848819 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 18:02:39.848827 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 18:02:39.848837 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 18:02:39.848845 kernel: iommu: Default domain type: Translated
May 14 18:02:39.848852 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:02:39.848860 kernel: efivars: Registered efivars operations
May 14 18:02:39.848868 kernel: PCI: Using ACPI for IRQ routing
May 14 18:02:39.848876 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:02:39.848884 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 18:02:39.848892 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 18:02:39.848899 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 14 18:02:39.848907 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 14 18:02:39.848916 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 18:02:39.848924 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 18:02:39.848932 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 14 18:02:39.848939 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 18:02:39.849052 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 18:02:39.849179 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 18:02:39.849293 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:02:39.849307 kernel: vgaarb: loaded
May 14 18:02:39.849315 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:02:39.849323 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:02:39.849331 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:02:39.849338 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:02:39.849346 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:02:39.849354 kernel: pnp: PnP ACPI init
May 14 18:02:39.849491 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 18:02:39.849505 kernel: pnp: PnP ACPI: found 6 devices
May 14 18:02:39.849524 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:02:39.849532 kernel: NET: Registered PF_INET protocol family
May 14 18:02:39.849540 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:02:39.849549 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:02:39.849557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:02:39.849565 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:02:39.849573 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:02:39.849581 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:02:39.849591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:02:39.849599 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:02:39.849607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:02:39.849615 kernel: NET: Registered PF_XDP protocol family
May 14 18:02:39.849732 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 14 18:02:39.849847 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 14 18:02:39.849953 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:02:39.850058 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:02:39.850180 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:02:39.850285 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 18:02:39.850412 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 18:02:39.850551 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 18:02:39.850565 kernel: PCI: CLS 0 bytes, default 64
May 14 18:02:39.850574 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 14 18:02:39.850582 kernel: Initialise system trusted keyrings
May 14 18:02:39.850594 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:02:39.850603 kernel: Key type asymmetric registered
May 14 18:02:39.850611 kernel: Asymmetric key parser 'x509' registered
May 14 18:02:39.850619 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:02:39.850627 kernel: io scheduler mq-deadline registered
May 14 18:02:39.850635 kernel: io scheduler kyber registered
May 14 18:02:39.850643 kernel: io scheduler bfq registered
May 14 18:02:39.850653 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:02:39.850665 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 18:02:39.850676 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 18:02:39.850687 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 18:02:39.850697 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:02:39.850705 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:02:39.850713 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:02:39.850721 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:02:39.850729 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:02:39.850857 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 18:02:39.850873 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 18:02:39.850983 kernel: rtc_cmos 00:04: registered as rtc0
May 14 18:02:39.851107 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T18:02:39 UTC (1747245759)
May 14 18:02:39.851217 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 18:02:39.851228 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 18:02:39.851237 kernel: efifb: probing for efifb
May 14 18:02:39.851245 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 18:02:39.851257 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 18:02:39.851265 kernel: efifb: scrolling: redraw
May 14 18:02:39.851273 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 18:02:39.851281 kernel: Console: switching to colour frame buffer device 160x50
May 14 18:02:39.851290 kernel: fb0: EFI VGA frame buffer device
May 14 18:02:39.851298 kernel: pstore: Using crash dump compression: deflate
May 14 18:02:39.851306 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 18:02:39.851314 kernel: NET: Registered PF_INET6 protocol family
May 14 18:02:39.851322 kernel: Segment Routing with IPv6
May 14 18:02:39.851330 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:02:39.851340 kernel: NET: Registered PF_PACKET protocol family
May 14 18:02:39.851348 kernel: Key type dns_resolver registered
May 14 18:02:39.851356 kernel: IPI shorthand broadcast: enabled
May 14 18:02:39.851365 kernel: sched_clock: Marking stable (3069002874, 163659774)->(3265641412, -32978764)
May 14 18:02:39.851373 kernel: registered taskstats version 1
May 14 18:02:39.851381 kernel: Loading compiled-in X.509 certificates
May 14 18:02:39.851390 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:02:39.851398 kernel: Demotion targets for Node 0: null
May 14 18:02:39.851406 kernel: Key
type .fscrypt registered May 14 18:02:39.851416 kernel: Key type fscrypt-provisioning registered May 14 18:02:39.851424 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 18:02:39.851432 kernel: ima: Allocated hash algorithm: sha1 May 14 18:02:39.851440 kernel: ima: No architecture policies found May 14 18:02:39.851448 kernel: clk: Disabling unused clocks May 14 18:02:39.851456 kernel: Warning: unable to open an initial console. May 14 18:02:39.851465 kernel: Freeing unused kernel image (initmem) memory: 54424K May 14 18:02:39.851473 kernel: Write protecting the kernel read-only data: 24576k May 14 18:02:39.851483 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 18:02:39.851491 kernel: Run /init as init process May 14 18:02:39.851499 kernel: with arguments: May 14 18:02:39.851506 kernel: /init May 14 18:02:39.851522 kernel: with environment: May 14 18:02:39.851532 kernel: HOME=/ May 14 18:02:39.851539 kernel: TERM=linux May 14 18:02:39.851547 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 18:02:39.851556 systemd[1]: Successfully made /usr/ read-only. May 14 18:02:39.851570 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:02:39.851580 systemd[1]: Detected virtualization kvm. May 14 18:02:39.851588 systemd[1]: Detected architecture x86-64. May 14 18:02:39.851597 systemd[1]: Running in initrd. May 14 18:02:39.851605 systemd[1]: No hostname configured, using default hostname. May 14 18:02:39.851614 systemd[1]: Hostname set to . May 14 18:02:39.851623 systemd[1]: Initializing machine ID from VM UUID. May 14 18:02:39.851633 systemd[1]: Queued start job for default target initrd.target. 
May 14 18:02:39.851642 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:02:39.851651 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:02:39.851660 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:02:39.851669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:02:39.851677 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:02:39.851687 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:02:39.851699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:02:39.851708 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:02:39.851716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:02:39.851725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:02:39.851734 systemd[1]: Reached target paths.target - Path Units.
May 14 18:02:39.851743 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:02:39.851751 systemd[1]: Reached target swap.target - Swaps.
May 14 18:02:39.851760 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:02:39.851768 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:02:39.851779 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:02:39.851787 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:02:39.851796 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:02:39.851804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:02:39.851813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:02:39.851822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:02:39.851830 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:02:39.851839 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:02:39.851850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:02:39.851859 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:02:39.851868 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:02:39.851877 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:02:39.851885 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:02:39.851894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:02:39.851903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:39.851911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:02:39.851922 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:02:39.851931 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:02:39.851940 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:02:39.851968 systemd-journald[218]: Collecting audit messages is disabled.
May 14 18:02:39.851993 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:02:39.852010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:39.852023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:02:39.852035 systemd-journald[218]: Journal started
May 14 18:02:39.852064 systemd-journald[218]: Runtime Journal (/run/log/journal/2688271b02c94dd99f3d67d8ea2ab31e) is 6M, max 48.5M, 42.4M free.
May 14 18:02:39.836217 systemd-modules-load[221]: Inserted module 'overlay'
May 14 18:02:39.854354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:02:39.857107 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:02:39.863193 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:02:39.865897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:02:39.870097 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:02:39.873126 kernel: Bridge firewalling registered
May 14 18:02:39.873038 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 14 18:02:39.874071 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:02:39.874449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:02:39.879920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:02:39.881471 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:02:39.882658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:02:39.898735 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:02:39.907255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:02:39.909042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:02:39.918309 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:02:39.966238 systemd-resolved[268]: Positive Trust Anchors:
May 14 18:02:39.966259 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:02:39.966296 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:02:39.969501 systemd-resolved[268]: Defaulting to hostname 'linux'.
May 14 18:02:39.970739 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:02:39.976208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:02:40.020107 kernel: SCSI subsystem initialized
May 14 18:02:40.029102 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:02:40.041134 kernel: iscsi: registered transport (tcp)
May 14 18:02:40.065114 kernel: iscsi: registered transport (qla4xxx)
May 14 18:02:40.065165 kernel: QLogic iSCSI HBA Driver
May 14 18:02:40.084810 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:02:40.102989 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:02:40.103418 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:02:40.179757 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:02:40.181279 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:02:40.237132 kernel: raid6: avx2x4 gen() 29283 MB/s
May 14 18:02:40.254136 kernel: raid6: avx2x2 gen() 26499 MB/s
May 14 18:02:40.271248 kernel: raid6: avx2x1 gen() 22713 MB/s
May 14 18:02:40.271333 kernel: raid6: using algorithm avx2x4 gen() 29283 MB/s
May 14 18:02:40.289244 kernel: raid6: .... xor() 6881 MB/s, rmw enabled
May 14 18:02:40.289360 kernel: raid6: using avx2x2 recovery algorithm
May 14 18:02:40.310111 kernel: xor: automatically using best checksumming function avx
May 14 18:02:40.477124 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:02:40.485444 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:02:40.487677 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:02:40.532319 systemd-udevd[474]: Using default interface naming scheme 'v255'.
May 14 18:02:40.539042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:02:40.540005 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:02:40.561194 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
May 14 18:02:40.589300 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:02:40.592883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:02:40.661794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:02:40.666072 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:02:40.696109 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 18:02:40.709058 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 18:02:40.709258 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:02:40.709274 kernel: GPT:9289727 != 19775487
May 14 18:02:40.709293 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:02:40.709307 kernel: GPT:9289727 != 19775487
May 14 18:02:40.709320 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:02:40.709333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:02:40.715101 kernel: cryptd: max_cpu_qlen set to 1000
May 14 18:02:40.730121 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 18:02:40.730168 kernel: libata version 3.00 loaded.
May 14 18:02:40.734196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:02:40.734356 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:40.738169 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:40.743255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:40.750194 kernel: AES CTR mode by8 optimization enabled
May 14 18:02:40.753104 kernel: ahci 0000:00:1f.2: version 3.0
May 14 18:02:40.793242 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 18:02:40.793262 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 14 18:02:40.793440 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 14 18:02:40.793597 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 18:02:40.793740 kernel: scsi host0: ahci
May 14 18:02:40.793895 kernel: scsi host1: ahci
May 14 18:02:40.794050 kernel: scsi host2: ahci
May 14 18:02:40.794232 kernel: scsi host3: ahci
May 14 18:02:40.794428 kernel: scsi host4: ahci
May 14 18:02:40.794576 kernel: scsi host5: ahci
May 14 18:02:40.794720 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 14 18:02:40.794735 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 14 18:02:40.794753 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 14 18:02:40.794766 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 14 18:02:40.794779 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 14 18:02:40.794792 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 14 18:02:40.795846 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:02:40.814349 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:02:40.835226 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:02:40.843066 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:02:40.844339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:02:40.845225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:02:40.847931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:02:40.847994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:40.852210 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:40.863680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:40.865154 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:02:40.886712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:40.968643 disk-uuid[637]: Primary Header is updated.
May 14 18:02:40.968643 disk-uuid[637]: Secondary Entries is updated.
May 14 18:02:40.968643 disk-uuid[637]: Secondary Header is updated.
May 14 18:02:40.974155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:02:40.980108 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:02:41.103118 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 18:02:41.103208 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 18:02:41.104115 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 18:02:41.105132 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 18:02:41.106117 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 18:02:41.107121 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 18:02:41.108111 kernel: ata3.00: applying bridge limits
May 14 18:02:41.108136 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 18:02:41.109117 kernel: ata3.00: configured for UDMA/100
May 14 18:02:41.111112 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 18:02:41.166130 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 18:02:41.191975 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 18:02:41.191998 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 18:02:41.638135 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:02:41.641071 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:02:41.643560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:02:41.645984 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:02:41.649241 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:02:41.673404 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:02:41.980844 disk-uuid[643]: The operation has completed successfully.
May 14 18:02:41.982173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:02:42.013044 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:02:42.013194 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:02:42.045144 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:02:42.074254 sh[674]: Success
May 14 18:02:42.095134 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:02:42.095233 kernel: device-mapper: uevent: version 1.0.3
May 14 18:02:42.095251 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:02:42.110120 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 14 18:02:42.143439 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:02:42.145608 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:02:42.159177 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:02:42.168373 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:02:42.168432 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (686)
May 14 18:02:42.169731 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7
May 14 18:02:42.169757 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 18:02:42.170607 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:02:42.176752 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:02:42.179224 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:02:42.189610 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:02:42.190782 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:02:42.194095 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:02:42.221103 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (719)
May 14 18:02:42.223160 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:02:42.223222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:02:42.223235 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:02:42.231125 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:02:42.231635 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:02:42.233967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:02:42.385024 ignition[763]: Ignition 2.21.0
May 14 18:02:42.385038 ignition[763]: Stage: fetch-offline
May 14 18:02:42.385089 ignition[763]: no configs at "/usr/lib/ignition/base.d"
May 14 18:02:42.385101 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:42.385201 ignition[763]: parsed url from cmdline: ""
May 14 18:02:42.385205 ignition[763]: no config URL provided
May 14 18:02:42.385210 ignition[763]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:02:42.385219 ignition[763]: no config at "/usr/lib/ignition/user.ign"
May 14 18:02:42.385244 ignition[763]: op(1): [started] loading QEMU firmware config module
May 14 18:02:42.385250 ignition[763]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 18:02:42.395200 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:02:42.397891 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:02:42.400553 ignition[763]: op(1): [finished] loading QEMU firmware config module
May 14 18:02:42.445200 ignition[763]: parsing config with SHA512: 894b4639a1443ddb057852769986c86446b13c57fbbba37a11e9fc3291b60f56ad2b21d16454c52b23da2c3cfa2f3524ea24e02277be164e31590743c7ff5ef7
May 14 18:02:42.449945 unknown[763]: fetched base config from "system"
May 14 18:02:42.449962 unknown[763]: fetched user config from "qemu"
May 14 18:02:42.450415 ignition[763]: fetch-offline: fetch-offline passed
May 14 18:02:42.450545 ignition[763]: Ignition finished successfully
May 14 18:02:42.453562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:02:42.459162 systemd-networkd[865]: lo: Link UP
May 14 18:02:42.459173 systemd-networkd[865]: lo: Gained carrier
May 14 18:02:42.462189 systemd-networkd[865]: Enumeration completed
May 14 18:02:42.462318 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:02:42.465162 systemd[1]: Reached target network.target - Network.
May 14 18:02:42.466137 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 18:02:42.467189 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:02:42.470679 systemd-networkd[865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:02:42.470686 systemd-networkd[865]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:02:42.476146 systemd-networkd[865]: eth0: Link UP
May 14 18:02:42.476161 systemd-networkd[865]: eth0: Gained carrier
May 14 18:02:42.476174 systemd-networkd[865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:02:42.500206 systemd-networkd[865]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:02:42.519251 ignition[869]: Ignition 2.21.0
May 14 18:02:42.519267 ignition[869]: Stage: kargs
May 14 18:02:42.519399 ignition[869]: no configs at "/usr/lib/ignition/base.d"
May 14 18:02:42.519409 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:42.521712 ignition[869]: kargs: kargs passed
May 14 18:02:42.522206 ignition[869]: Ignition finished successfully
May 14 18:02:42.526531 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:02:42.530199 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:02:42.572881 ignition[878]: Ignition 2.21.0
May 14 18:02:42.572901 ignition[878]: Stage: disks
May 14 18:02:42.573689 ignition[878]: no configs at "/usr/lib/ignition/base.d"
May 14 18:02:42.573711 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:42.575512 ignition[878]: disks: disks passed
May 14 18:02:42.579415 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:02:42.575592 ignition[878]: Ignition finished successfully
May 14 18:02:42.580704 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:02:42.582664 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:02:42.584726 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:02:42.586965 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:02:42.587024 systemd[1]: Reached target basic.target - Basic System.
May 14 18:02:42.588672 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:02:42.625965 systemd-resolved[268]: Detected conflict on linux IN A 10.0.0.50
May 14 18:02:42.625985 systemd-resolved[268]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
May 14 18:02:42.626941 systemd-fsck[888]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:02:42.754894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:02:42.758266 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:02:42.893130 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none.
May 14 18:02:42.893987 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:02:42.895649 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:02:42.898345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:02:42.900369 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:02:42.901730 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 18:02:42.901785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:02:42.901813 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:02:42.919626 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:02:42.921201 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:02:42.956655 initrd-setup-root[904]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:02:42.960985 initrd-setup-root[911]: cut: /sysroot/etc/group: No such file or directory
May 14 18:02:42.966262 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (896)
May 14 18:02:42.966296 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:02:42.966308 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:02:42.966786 initrd-setup-root[918]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:02:42.968747 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:02:42.971517 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:02:42.972397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:02:43.063765 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:02:43.066191 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:02:43.066888 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:02:43.089135 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:02:43.105278 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 18:02:43.167898 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:02:43.170955 ignition[1010]: INFO : Ignition 2.21.0
May 14 18:02:43.170955 ignition[1010]: INFO : Stage: mount
May 14 18:02:43.172710 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:02:43.172710 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:43.175871 ignition[1010]: INFO : mount: mount passed
May 14 18:02:43.176646 ignition[1010]: INFO : Ignition finished successfully
May 14 18:02:43.180340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:02:43.182561 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:02:43.212354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:02:43.225111 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1022)
May 14 18:02:43.227298 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:02:43.227325 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:02:43.227336 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:02:43.231452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:02:43.268170 ignition[1039]: INFO : Ignition 2.21.0
May 14 18:02:43.268170 ignition[1039]: INFO : Stage: files
May 14 18:02:43.270067 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:02:43.270067 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:43.274395 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:02:43.275817 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:02:43.275817 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:02:43.280665 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:02:43.282186 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:02:43.283996 unknown[1039]: wrote ssh authorized keys file for user: core
May 14 18:02:43.285165 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:02:43.286586 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 14 18:02:43.286586 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 14 18:02:43.362861 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:02:43.713464 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 14 18:02:43.715801 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:02:43.715801 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 18:02:44.090862 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 18:02:44.208302 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:02:44.211220 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:02:44.233899 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:02:44.236302 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:02:44.236302 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 14 18:02:44.242150 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 14 18:02:44.245453 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 14 18:02:44.245453 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 14 18:02:44.384401 systemd-networkd[865]: eth0: Gained IPv6LL
May 14 18:02:44.683326 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 18:02:45.386893 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 14 18:02:45.386893 ignition[1039]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 18:02:45.391939 ignition[1039]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:02:45.400626 ignition[1039]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:02:45.400626 ignition[1039]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 18:02:45.400626 ignition[1039]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 18:02:45.400626 ignition[1039]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:02:45.409038 ignition[1039]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 18:02:45.409038 ignition[1039]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 18:02:45.409038 ignition[1039]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 14 18:02:45.436571 ignition[1039]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:02:45.446591 ignition[1039]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 18:02:45.448751 ignition[1039]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 18:02:45.448751 ignition[1039]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:02:45.448751 ignition[1039]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:02:45.448751 ignition[1039]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:02:45.448751 ignition[1039]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:02:45.448751 ignition[1039]: INFO : files: files passed
May 14 18:02:45.448751 ignition[1039]: INFO : Ignition finished successfully
May 14 18:02:45.457645 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:02:45.461928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:02:45.464350 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:02:45.484894 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:02:45.485030 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:02:45.489196 initrd-setup-root-after-ignition[1068]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 18:02:45.491031 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:02:45.491031 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:02:45.494955 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:02:45.493990 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:02:45.496600 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 18:02:45.500776 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 18:02:45.579191 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 18:02:45.579330 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 18:02:45.582196 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 18:02:45.583331 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 18:02:45.585391 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 18:02:45.586638 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 18:02:45.617957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:02:45.624521 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 18:02:45.657285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 18:02:45.657563 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:02:45.661851 systemd[1]: Stopped target timers.target - Timer Units.
May 14 18:02:45.665957 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 18:02:45.666166 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:02:45.670230 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 18:02:45.671586 systemd[1]: Stopped target basic.target - Basic System.
May 14 18:02:45.673828 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 18:02:45.677515 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:02:45.680202 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 18:02:45.681660 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:02:45.684297 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 18:02:45.685566 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:02:45.685982 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 18:02:45.686552 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 18:02:45.686932 systemd[1]: Stopped target swap.target - Swaps.
May 14 18:02:45.687490 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 18:02:45.687672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:02:45.688301 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 18:02:45.688692 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:02:45.689026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 18:02:45.690284 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:02:45.700463 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 18:02:45.700645 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 18:02:45.705964 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 18:02:45.706176 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:02:45.707265 systemd[1]: Stopped target paths.target - Path Units.
May 14 18:02:45.707803 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 18:02:45.711230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:02:45.711703 systemd[1]: Stopped target slices.target - Slice Units.
May 14 18:02:45.712073 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 18:02:45.712642 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 18:02:45.712775 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:02:45.718056 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 18:02:45.718233 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:02:45.719928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 18:02:45.720136 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:02:45.722177 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 18:02:45.722345 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 18:02:45.726359 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 18:02:45.728970 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 18:02:45.729954 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 18:02:45.730242 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:02:45.732031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 18:02:45.732215 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:02:45.743173 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 18:02:45.743324 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 18:02:45.766224 ignition[1095]: INFO : Ignition 2.21.0
May 14 18:02:45.766224 ignition[1095]: INFO : Stage: umount
May 14 18:02:45.768579 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:02:45.768579 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:02:45.771103 ignition[1095]: INFO : umount: umount passed
May 14 18:02:45.771103 ignition[1095]: INFO : Ignition finished successfully
May 14 18:02:45.771891 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 18:02:45.776513 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 18:02:45.776678 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 18:02:45.779701 systemd[1]: Stopped target network.target - Network.
May 14 18:02:45.780727 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 18:02:45.780811 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 18:02:45.781700 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 18:02:45.781761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 18:02:45.783762 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 18:02:45.783832 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 18:02:45.784114 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 18:02:45.784169 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 18:02:45.788930 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 18:02:45.793200 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 18:02:45.798822 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 18:02:45.798981 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 18:02:45.802917 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 18:02:45.803256 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 18:02:45.803418 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 18:02:45.809555 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 18:02:45.810443 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 18:02:45.811053 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 18:02:45.811127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:02:45.814371 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 18:02:45.821716 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 18:02:45.821816 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:02:45.822947 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:02:45.823006 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:02:45.829608 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 18:02:45.829684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 18:02:45.830623 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 18:02:45.830683 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:02:45.835107 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:02:45.839985 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:02:45.840074 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:02:45.857018 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:02:45.857264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:02:45.858582 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:02:45.858640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:02:45.860516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:02:45.860564 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:02:45.860875 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:02:45.860933 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:02:45.861736 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:02:45.861796 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:02:45.869577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:02:45.869643 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:02:45.874878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:02:45.875325 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:02:45.875395 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:02:45.883196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:02:45.883265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:02:45.887145 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 18:02:45.887202 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:02:45.891065 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 18:02:45.891144 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:02:45.892697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:02:45.892757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:02:45.898872 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 18:02:45.898941 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 14 18:02:45.898993 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 18:02:45.899052 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:02:45.899493 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:02:45.899614 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:02:45.905990 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:02:45.906133 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:02:45.960068 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:02:45.960250 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:02:45.962567 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:02:45.963073 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:02:45.963155 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:02:45.966330 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:02:45.990723 systemd[1]: Switching root.
May 14 18:02:46.022790 systemd-journald[218]: Journal stopped
May 14 18:02:47.538617 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
May 14 18:02:47.538695 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:02:47.538709 kernel: SELinux: policy capability open_perms=1
May 14 18:02:47.538720 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:02:47.538732 kernel: SELinux: policy capability always_check_network=0
May 14 18:02:47.538743 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:02:47.538757 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:02:47.538768 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:02:47.538779 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:02:47.538790 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:02:47.538801 kernel: audit: type=1403 audit(1747245766.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:02:47.538814 systemd[1]: Successfully loaded SELinux policy in 50.747ms.
May 14 18:02:47.538842 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.042ms.
May 14 18:02:47.538855 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:02:47.538868 systemd[1]: Detected virtualization kvm.
May 14 18:02:47.538881 systemd[1]: Detected architecture x86-64.
May 14 18:02:47.538893 systemd[1]: Detected first boot.
May 14 18:02:47.538905 systemd[1]: Initializing machine ID from VM UUID.
May 14 18:02:47.538917 zram_generator::config[1140]: No configuration found.
May 14 18:02:47.538930 kernel: Guest personality initialized and is inactive
May 14 18:02:47.538941 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 18:02:47.538952 kernel: Initialized host personality
May 14 18:02:47.538964 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:02:47.538979 systemd[1]: Populated /etc with preset unit settings.
May 14 18:02:47.538998 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:02:47.539011 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:02:47.539024 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:02:47.539036 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:02:47.539048 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:02:47.539060 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:02:47.539072 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:02:47.539102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:02:47.539120 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:02:47.539133 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:02:47.539145 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:02:47.539157 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:02:47.539169 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:02:47.539181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:02:47.539192 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:02:47.539205 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:02:47.539238 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:02:47.539252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:02:47.539265 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 18:02:47.539277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:02:47.539289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:02:47.539301 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:02:47.539312 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:02:47.539324 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:02:47.539346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:02:47.539358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:02:47.539370 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:02:47.539382 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:02:47.539394 systemd[1]: Reached target swap.target - Swaps.
May 14 18:02:47.539406 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:02:47.539418 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:02:47.539433 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:02:47.539449 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:02:47.539465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:02:47.539484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:02:47.539500 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:02:47.539517 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:02:47.539544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:02:47.539560 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:02:47.539575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:47.539591 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:02:47.539607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:02:47.539626 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:02:47.539643 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:02:47.539659 systemd[1]: Reached target machines.target - Containers.
May 14 18:02:47.539676 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:02:47.539692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:47.539708 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:02:47.539723 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:02:47.539739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:47.539755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:02:47.539773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:47.539789 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:02:47.539805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:47.539821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:02:47.539838 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:02:47.539854 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:02:47.539870 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:02:47.539898 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:02:47.539918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:47.539941 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:02:47.539957 kernel: loop: module loaded
May 14 18:02:47.539972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:02:47.540147 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:02:47.540170 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:02:47.540186 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:02:47.540205 kernel: ACPI: bus type drm_connector registered
May 14 18:02:47.540219 kernel: fuse: init (API version 7.41)
May 14 18:02:47.540235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:02:47.540252 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:02:47.540270 systemd[1]: Stopped verity-setup.service.
May 14 18:02:47.540287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:47.540303 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:02:47.540319 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:02:47.540335 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:02:47.540365 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:02:47.540381 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:02:47.540396 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:02:47.540417 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:02:47.540475 systemd-journald[1215]: Collecting audit messages is disabled.
May 14 18:02:47.540507 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:02:47.540523 systemd-journald[1215]: Journal started
May 14 18:02:47.540552 systemd-journald[1215]: Runtime Journal (/run/log/journal/2688271b02c94dd99f3d67d8ea2ab31e) is 6M, max 48.5M, 42.4M free.
May 14 18:02:47.262642 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:02:47.282139 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 18:02:47.282690 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:02:47.542149 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:02:47.543948 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:02:47.544175 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:02:47.545652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:47.545880 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:47.547469 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:02:47.547678 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:02:47.549039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:47.549332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:47.550849 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:02:47.551056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:02:47.552720 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:47.552927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:47.554648 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:02:47.556404 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:02:47.558329 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:02:47.560098 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:02:47.572951 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:02:47.575872 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:02:47.579173 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:02:47.580573 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:02:47.580610 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:02:47.582822 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:02:47.592406 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:02:47.594431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:47.596532 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:02:47.600393 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:02:47.602093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:02:47.604365 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:02:47.606191 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:02:47.614331 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:02:47.618757 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:02:47.621763 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:02:47.627770 systemd-journald[1215]: Time spent on flushing to /var/log/journal/2688271b02c94dd99f3d67d8ea2ab31e is 26.849ms for 1076 entries.
May 14 18:02:47.627770 systemd-journald[1215]: System Journal (/var/log/journal/2688271b02c94dd99f3d67d8ea2ab31e) is 8M, max 195.6M, 187.6M free.
May 14 18:02:47.669562 systemd-journald[1215]: Received client request to flush runtime journal.
May 14 18:02:47.669625 kernel: loop0: detected capacity change from 0 to 146240
May 14 18:02:47.631423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:02:47.633239 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:02:47.672135 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:02:47.634959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:02:47.639261 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:02:47.646023 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:02:47.651114 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:02:47.673233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:02:47.678505 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:02:47.699125 kernel: loop1: detected capacity change from 0 to 218376
May 14 18:02:47.730910 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 14 18:02:47.730932 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
May 14 18:02:47.740175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:02:47.745232 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:02:47.748811 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:02:47.763208 kernel: loop2: detected capacity change from 0 to 113872
May 14 18:02:47.799118 kernel: loop3: detected capacity change from 0 to 146240
May 14 18:02:47.898803 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:02:47.904697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:02:47.909114 kernel: loop4: detected capacity change from 0 to 218376
May 14 18:02:47.924504 kernel: loop5: detected capacity change from 0 to 113872
May 14 18:02:47.942703 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 18:02:47.945230 (sd-merge)[1280]: Merged extensions into '/usr'.
May 14 18:02:47.950385 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
May 14 18:02:47.950404 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
May 14 18:02:47.954453 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:02:47.954577 systemd[1]: Reloading...
May 14 18:02:48.101117 zram_generator::config[1309]: No configuration found.
May 14 18:02:48.330769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:02:48.354925 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:02:48.449736 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:02:48.449931 systemd[1]: Reloading finished in 494 ms.
May 14 18:02:48.482889 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:02:48.484845 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:02:48.486781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:02:48.516995 systemd[1]: Starting ensure-sysext.service...
May 14 18:02:48.519254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:02:48.534236 systemd[1]: Reload requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)...
May 14 18:02:48.534252 systemd[1]: Reloading...
May 14 18:02:48.554649 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:02:48.555152 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:02:48.555614 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:02:48.555988 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:02:48.557383 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:02:48.557964 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
May 14 18:02:48.558071 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
May 14 18:02:48.565640 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:02:48.565961 systemd-tmpfiles[1349]: Skipping /boot
May 14 18:02:48.595908 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:02:48.596487 systemd-tmpfiles[1349]: Skipping /boot
May 14 18:02:48.616102 zram_generator::config[1372]: No configuration found.
May 14 18:02:48.744234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:02:48.827788 systemd[1]: Reloading finished in 293 ms.
May 14 18:02:48.852515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:02:48.871834 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:02:48.883846 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:02:48.887155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:02:48.914461 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:02:48.918975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:02:48.925179 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:02:48.930369 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:02:48.935607 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:48.935827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:48.937554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:48.941442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:48.944670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:48.947184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:48.947342 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:48.951971 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:02:48.953368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:48.957633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:48.959754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:48.962689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:02:48.964810 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:48.965202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:48.968718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:48.968998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:48.976339 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:02:48.985358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:48.985580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:48.985712 systemd-udevd[1420]: Using default interface naming scheme 'v255'.
May 14 18:02:48.987607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:48.990370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:48.993919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:48.995249 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:48.995397 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:49.001192 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:02:49.001300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:49.006678 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:02:49.010012 augenrules[1453]: No rules
May 14 18:02:49.010284 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:49.010557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:49.011218 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:02:49.011550 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:02:49.013891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:49.014200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:49.016605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:49.017385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:49.023127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:02:49.035058 systemd[1]: Finished ensure-sysext.service.
May 14 18:02:49.036739 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:02:49.044911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:49.047784 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:02:49.049233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:02:49.118145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:02:49.122240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:02:49.124937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:02:49.129321 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:02:49.130854 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:02:49.130924 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:02:49.138824 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:02:49.145279 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:02:49.146660 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:02:49.146704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:02:49.147107 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:02:49.148832 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:02:49.149194 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:02:49.163655 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:02:49.232499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:02:49.232882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:02:49.235021 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:02:49.242291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:02:49.244635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:02:49.244937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:02:49.250319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:02:49.250426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:02:49.263496 augenrules[1489]: /sbin/augenrules: No change
May 14 18:02:49.277258 augenrules[1532]: No rules
May 14 18:02:49.305863 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:02:49.306204 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:02:49.330109 kernel: mousedev: PS/2 mouse device common for all mice
May 14 18:02:49.334170 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 18:02:49.342110 kernel: ACPI: button: Power Button [PWRF]
May 14 18:02:49.353529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:02:49.357758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:02:49.365443 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 18:02:49.365736 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 18:02:49.365897 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 18:02:49.392072 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:02:49.401546 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:02:49.402981 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:02:49.447694 systemd-networkd[1504]: lo: Link UP
May 14 18:02:49.447707 systemd-networkd[1504]: lo: Gained carrier
May 14 18:02:49.449600 systemd-networkd[1504]: Enumeration completed
May 14 18:02:49.449702 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:02:49.452608 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:02:49.453131 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:02:49.453143 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:02:49.455435 systemd-networkd[1504]: eth0: Link UP
May 14 18:02:49.455644 systemd-networkd[1504]: eth0: Gained carrier
May 14 18:02:49.455658 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:02:49.487463 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:02:49.498105 systemd-resolved[1419]: Positive Trust Anchors:
May 14 18:02:49.498124 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:02:49.498164 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:02:49.505612 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:02:49.506439 systemd-resolved[1419]: Defaulting to hostname 'linux'.
May 14 18:02:49.507043 systemd-timesyncd[1505]: Network configuration changed, trying to establish connection.
May 14 18:02:50.255789 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 18:02:50.255921 systemd-timesyncd[1505]: Initial clock synchronization to Wed 2025-05-14 18:02:50.255603 UTC.
May 14 18:02:50.255921 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:02:50.256971 systemd-resolved[1419]: Clock change detected. Flushing caches.
May 14 18:02:50.257263 systemd[1]: Reached target network.target - Network.
May 14 18:02:50.258737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:02:50.260686 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:02:50.261986 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:02:50.263395 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:02:50.265802 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:02:50.267704 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:02:50.269240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:02:50.270730 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:02:50.272822 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:02:50.272998 systemd[1]: Reached target paths.target - Path Units.
May 14 18:02:50.274236 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:02:50.277262 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:02:50.284377 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:02:50.293199 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:02:50.295228 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:02:50.296617 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:02:50.306675 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:02:50.308722 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:02:50.312842 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:02:50.314401 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:02:50.330668 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:02:50.330879 kernel: kvm_amd: TSC scaling supported
May 14 18:02:50.330911 kernel: kvm_amd: Nested Virtualization enabled
May 14 18:02:50.330927 kernel: kvm_amd: Nested Paging enabled
May 14 18:02:50.330943 kernel: kvm_amd: LBR virtualization supported
May 14 18:02:50.330959 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 18:02:50.333553 kernel: kvm_amd: Virtual GIF supported
May 14 18:02:50.335107 systemd[1]: Reached target basic.target - Basic System.
May 14 18:02:50.337585 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:02:50.337929 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:02:50.343008 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:02:50.348177 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:02:50.353136 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:02:50.358984 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:02:50.361787 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:02:50.362921 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:02:50.365624 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:02:50.368899 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:02:50.371284 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:02:50.374858 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:02:50.378405 jq[1569]: false
May 14 18:02:50.379240 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:02:50.385011 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:02:50.387017 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:02:50.389922 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:02:50.393910 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:02:50.396673 extend-filesystems[1570]: Found loop3
May 14 18:02:50.396673 extend-filesystems[1570]: Found loop4
May 14 18:02:50.396673 extend-filesystems[1570]: Found loop5
May 14 18:02:50.396673 extend-filesystems[1570]: Found sr0
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda1
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda2
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda3
May 14 18:02:50.396673 extend-filesystems[1570]: Found usr
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda4
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda6
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda7
May 14 18:02:50.396673 extend-filesystems[1570]: Found vda9
May 14 18:02:50.396673 extend-filesystems[1570]: Checking size of /dev/vda9
May 14 18:02:50.431916 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 18:02:50.431945 kernel: EDAC MC: Ver: 3.0.0
May 14 18:02:50.431959 extend-filesystems[1570]: Resized partition /dev/vda9
May 14 18:02:50.397846 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:02:50.406371 oslogin_cache_refresh[1571]: Refreshing passwd entry cache
May 14 18:02:50.432996 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache
May 14 18:02:50.432996 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting
May 14 18:02:50.432996 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:02:50.432996 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache
May 14 18:02:50.433210 extend-filesystems[1587]: resize2fs 1.47.2 (1-Jan-2025)
May 14 18:02:50.435214 jq[1580]: true
May 14 18:02:50.416772 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:02:50.423734 oslogin_cache_refresh[1571]: Failure getting users, quitting
May 14 18:02:50.435627 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting
May 14 18:02:50.435627 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:02:50.422362 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:02:50.423766 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:02:50.422914 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:02:50.423834 oslogin_cache_refresh[1571]: Refreshing group entry cache
May 14 18:02:50.432882 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:02:50.433268 oslogin_cache_refresh[1571]: Failure getting groups, quitting
May 14 18:02:50.433148 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:02:50.433284 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:02:50.438368 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:02:50.441208 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:02:50.445510 systemd[1]: motdgen.service: Deactivated successfully.
May 14 18:02:50.445952 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 18:02:50.455836 jq[1594]: true
May 14 18:02:50.460021 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 18:02:50.474457 update_engine[1578]: I20250514 18:02:50.474180 1578 main.cc:92] Flatcar Update Engine starting
May 14 18:02:50.481553 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 18:02:50.563306 tar[1591]: linux-amd64/LICENSE
May 14 18:02:50.563306 tar[1591]: linux-amd64/helm
May 14 18:02:50.563790 update_engine[1578]: I20250514 18:02:50.546489 1578 update_check_scheduler.cc:74] Next update check in 10m37s
May 14 18:02:50.540692 dbus-daemon[1567]: [system] SELinux support is enabled
May 14 18:02:50.514641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:02:50.540850 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 18:02:50.544570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 18:02:50.544594 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 18:02:50.545957 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 18:02:50.545972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 18:02:50.547972 systemd[1]: Started update-engine.service - Update Engine. May 14 18:02:50.550658 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 18:02:50.563750 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) May 14 18:02:50.563772 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 18:02:50.567356 systemd-logind[1577]: New seat seat0. May 14 18:02:50.568489 systemd[1]: Started systemd-logind.service - User Login Management. May 14 18:02:50.646628 extend-filesystems[1587]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 18:02:50.646628 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 18:02:50.646628 extend-filesystems[1587]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 18:02:50.647297 extend-filesystems[1570]: Resized filesystem in /dev/vda9 May 14 18:02:50.650110 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:02:50.650450 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 18:02:50.664889 bash[1626]: Updated "/home/core/.ssh/authorized_keys" May 14 18:02:50.687435 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:02:50.690961 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 18:02:50.695505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 18:02:50.701245 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:02:50.973860 containerd[1598]: time="2025-05-14T18:02:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:02:50.976861 containerd[1598]: time="2025-05-14T18:02:50.976813936Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:02:51.009934 containerd[1598]: time="2025-05-14T18:02:51.009798159Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.062µs" May 14 18:02:51.009934 containerd[1598]: time="2025-05-14T18:02:51.009875224Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:02:51.009934 containerd[1598]: time="2025-05-14T18:02:51.009909388Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:02:51.010245 containerd[1598]: time="2025-05-14T18:02:51.010205273Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:02:51.010245 containerd[1598]: time="2025-05-14T18:02:51.010238425Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:02:51.010364 containerd[1598]: time="2025-05-14T18:02:51.010287908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:02:51.010417 containerd[1598]: time="2025-05-14T18:02:51.010370543Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:02:51.010417 containerd[1598]: time="2025-05-14T18:02:51.010396712Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:02:51.010925 containerd[1598]: time="2025-05-14T18:02:51.010853048Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:02:51.010925 containerd[1598]: time="2025-05-14T18:02:51.010895708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:02:51.010925 containerd[1598]: time="2025-05-14T18:02:51.010919663Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:02:51.011062 containerd[1598]: time="2025-05-14T18:02:51.010941414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:02:51.011126 containerd[1598]: time="2025-05-14T18:02:51.011088210Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:02:51.011514 containerd[1598]: time="2025-05-14T18:02:51.011462031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:02:51.011622 containerd[1598]: time="2025-05-14T18:02:51.011515071Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:02:51.011622 containerd[1598]: time="2025-05-14T18:02:51.011564443Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:02:51.011622 containerd[1598]: time="2025-05-14T18:02:51.011614247Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:02:51.012240 containerd[1598]: time="2025-05-14T18:02:51.012123432Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:02:51.012567 containerd[1598]: time="2025-05-14T18:02:51.012483558Z" level=info msg="metadata content store policy set" policy=shared May 14 18:02:51.036881 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:02:51.064742 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:02:51.069331 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:02:51.107309 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:02:51.107674 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:02:51.109471 containerd[1598]: time="2025-05-14T18:02:51.109409673Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:02:51.109555 containerd[1598]: time="2025-05-14T18:02:51.109499932Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:02:51.109555 containerd[1598]: time="2025-05-14T18:02:51.109518768Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:02:51.109555 containerd[1598]: time="2025-05-14T18:02:51.109545307Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109559294Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109570454Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109583579Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109596373Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109610319Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109620659Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:02:51.109632 containerd[1598]: time="2025-05-14T18:02:51.109630828Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:02:51.109824 containerd[1598]: time="2025-05-14T18:02:51.109674800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109892138Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109921994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109936992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109949856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109960436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109971878Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:02:51.109979 containerd[1598]: time="2025-05-14T18:02:51.109986044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.109996664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.110008036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.110019046Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.110029796Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.110111540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.110182162Z" level=info msg="Start snapshots syncer" May 14 18:02:51.110240 containerd[1598]: time="2025-05-14T18:02:51.110201819Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:02:51.110511 containerd[1598]: time="2025-05-14T18:02:51.110466135Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:02:51.110873 containerd[1598]: time="2025-05-14T18:02:51.110517321Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:02:51.112106 containerd[1598]: time="2025-05-14T18:02:51.112076236Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:02:51.112226 containerd[1598]: time="2025-05-14T18:02:51.112194909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:02:51.112266 containerd[1598]: time="2025-05-14T18:02:51.112232359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:02:51.112266 containerd[1598]: time="2025-05-14T18:02:51.112252237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:02:51.112315 containerd[1598]: time="2025-05-14T18:02:51.112267255Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:02:51.112315 containerd[1598]: time="2025-05-14T18:02:51.112288304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:02:51.112315 containerd[1598]: time="2025-05-14T18:02:51.112299034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:02:51.112315 containerd[1598]: time="2025-05-14T18:02:51.112310025Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:02:51.112417 containerd[1598]: time="2025-05-14T18:02:51.112334531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 18:02:51.112417 containerd[1598]: time="2025-05-14T18:02:51.112344881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:02:51.112417 containerd[1598]: time="2025-05-14T18:02:51.112354769Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:02:51.113237 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113212358Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113269475Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113280216Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113294082Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113310963Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113320722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113346560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113379913Z" level=info msg="runtime interface created" May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113385313Z" level=info msg="created NRI interface" May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113398257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113417273Z" level=info msg="Connect containerd service" May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.113475672Z" level=info 
msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:02:51.114748 containerd[1598]: time="2025-05-14T18:02:51.114393645Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:02:51.199222 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:02:51.205492 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:02:51.208674 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 18:02:51.211778 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:02:51.212217 systemd-networkd[1504]: eth0: Gained IPv6LL May 14 18:02:51.219030 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:02:51.222193 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:02:51.237408 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 18:02:51.255247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:02:51.259646 tar[1591]: linux-amd64/README.md May 14 18:02:51.262922 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:02:51.282819 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:02:51.300751 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 18:02:51.301025 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 18:02:51.305552 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:02:51.345990 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 14 18:02:51.379661 containerd[1598]: time="2025-05-14T18:02:51.379594305Z" level=info msg="Start subscribing containerd event" May 14 18:02:51.379817 containerd[1598]: time="2025-05-14T18:02:51.379699673Z" level=info msg="Start recovering state" May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379847791Z" level=info msg="Start event monitor" May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379872337Z" level=info msg="Start cni network conf syncer for default" May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379886633Z" level=info msg="Start streaming server" May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379916670Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379925947Z" level=info msg="runtime interface starting up..." May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379940835Z" level=info msg="starting plugins..." May 14 18:02:51.379974 containerd[1598]: time="2025-05-14T18:02:51.379968387Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:02:51.380162 containerd[1598]: time="2025-05-14T18:02:51.380128046Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:02:51.380552 containerd[1598]: time="2025-05-14T18:02:51.380185174Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:02:51.380552 containerd[1598]: time="2025-05-14T18:02:51.380333031Z" level=info msg="containerd successfully booted in 0.409502s" May 14 18:02:51.380479 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:02:52.607953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:02:52.610402 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:02:52.611816 systemd[1]: Startup finished in 3.143s (kernel) + 6.858s (initrd) + 5.410s (userspace) = 15.413s. 
May 14 18:02:52.617943 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:02:53.408875 kubelet[1706]: E0514 18:02:53.408808 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:02:53.413285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:02:53.413519 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:02:53.414014 systemd[1]: kubelet.service: Consumed 1.921s CPU time, 253.1M memory peak. May 14 18:02:53.466746 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:02:53.468239 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:57252.service - OpenSSH per-connection server daemon (10.0.0.1:57252). May 14 18:02:53.528221 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 57252 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:53.530505 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:53.598854 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:02:53.600193 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:02:53.607150 systemd-logind[1577]: New session 1 of user core. May 14 18:02:53.636691 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:02:53.640185 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 14 18:02:53.657395 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:02:53.659959 systemd-logind[1577]: New session c1 of user core. May 14 18:02:53.834368 systemd[1723]: Queued start job for default target default.target. May 14 18:02:53.855168 systemd[1723]: Created slice app.slice - User Application Slice. May 14 18:02:53.855209 systemd[1723]: Reached target paths.target - Paths. May 14 18:02:53.855255 systemd[1723]: Reached target timers.target - Timers. May 14 18:02:53.857119 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:02:53.870154 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:02:53.870281 systemd[1723]: Reached target sockets.target - Sockets. May 14 18:02:53.870325 systemd[1723]: Reached target basic.target - Basic System. May 14 18:02:53.870363 systemd[1723]: Reached target default.target - Main User Target. May 14 18:02:53.870393 systemd[1723]: Startup finished in 203ms. May 14 18:02:53.870878 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:02:53.872737 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:02:53.944254 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:57268.service - OpenSSH per-connection server daemon (10.0.0.1:57268). May 14 18:02:54.006127 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 57268 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:54.007805 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:54.012641 systemd-logind[1577]: New session 2 of user core. May 14 18:02:54.029644 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 14 18:02:54.086485 sshd[1736]: Connection closed by 10.0.0.1 port 57268 May 14 18:02:54.086890 sshd-session[1734]: pam_unix(sshd:session): session closed for user core May 14 18:02:54.096785 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:57268.service: Deactivated successfully. May 14 18:02:54.098278 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:02:54.099046 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. May 14 18:02:54.101742 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:57270.service - OpenSSH per-connection server daemon (10.0.0.1:57270). May 14 18:02:54.102253 systemd-logind[1577]: Removed session 2. May 14 18:02:54.157401 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 57270 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:54.159263 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:54.163762 systemd-logind[1577]: New session 3 of user core. May 14 18:02:54.179663 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:02:54.230026 sshd[1745]: Connection closed by 10.0.0.1 port 57270 May 14 18:02:54.230324 sshd-session[1742]: pam_unix(sshd:session): session closed for user core May 14 18:02:54.244997 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:57270.service: Deactivated successfully. May 14 18:02:54.247054 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:02:54.247854 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. May 14 18:02:54.250966 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:57278.service - OpenSSH per-connection server daemon (10.0.0.1:57278). May 14 18:02:54.251496 systemd-logind[1577]: Removed session 3. 
May 14 18:02:54.305622 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 57278 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:54.307496 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:54.313052 systemd-logind[1577]: New session 4 of user core. May 14 18:02:54.322797 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:02:54.379181 sshd[1753]: Connection closed by 10.0.0.1 port 57278 May 14 18:02:54.379468 sshd-session[1751]: pam_unix(sshd:session): session closed for user core May 14 18:02:54.391928 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:57278.service: Deactivated successfully. May 14 18:02:54.394010 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:02:54.394893 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. May 14 18:02:54.398480 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:57280.service - OpenSSH per-connection server daemon (10.0.0.1:57280). May 14 18:02:54.399205 systemd-logind[1577]: Removed session 4. May 14 18:02:54.465852 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 57280 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:54.467248 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:54.471806 systemd-logind[1577]: New session 5 of user core. May 14 18:02:54.485693 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 18:02:54.547630 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:02:54.548036 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:54.566817 sudo[1762]: pam_unix(sudo:session): session closed for user root May 14 18:02:54.568683 sshd[1761]: Connection closed by 10.0.0.1 port 57280 May 14 18:02:54.569041 sshd-session[1759]: pam_unix(sshd:session): session closed for user core May 14 18:02:54.578191 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:57280.service: Deactivated successfully. May 14 18:02:54.580044 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:02:54.580911 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. May 14 18:02:54.583954 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:57284.service - OpenSSH per-connection server daemon (10.0.0.1:57284). May 14 18:02:54.584728 systemd-logind[1577]: Removed session 5. May 14 18:02:54.645294 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 57284 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:54.647113 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:54.651714 systemd-logind[1577]: New session 6 of user core. May 14 18:02:54.661691 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 18:02:54.716205 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:02:54.716513 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:54.724272 sudo[1772]: pam_unix(sudo:session): session closed for user root May 14 18:02:54.732078 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:02:54.732473 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:54.743852 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:02:54.798598 augenrules[1794]: No rules May 14 18:02:54.800274 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:02:54.800565 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:02:54.801953 sudo[1771]: pam_unix(sudo:session): session closed for user root May 14 18:02:54.803887 sshd[1770]: Connection closed by 10.0.0.1 port 57284 May 14 18:02:54.804259 sshd-session[1768]: pam_unix(sshd:session): session closed for user core May 14 18:02:54.818261 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:57284.service: Deactivated successfully. May 14 18:02:54.820564 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:02:54.821395 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. May 14 18:02:54.825074 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:57300.service - OpenSSH per-connection server daemon (10.0.0.1:57300). May 14 18:02:54.825906 systemd-logind[1577]: Removed session 6. May 14 18:02:54.896384 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 57300 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:02:54.898175 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:02:54.903155 systemd-logind[1577]: New session 7 of user core. 
May 14 18:02:54.917700 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:02:54.971267 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:02:54.971603 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:02:55.457372 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:02:55.479957 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:02:55.876578 dockerd[1826]: time="2025-05-14T18:02:55.876371084Z" level=info msg="Starting up" May 14 18:02:55.878030 dockerd[1826]: time="2025-05-14T18:02:55.877994430Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:02:56.327255 dockerd[1826]: time="2025-05-14T18:02:56.327178762Z" level=info msg="Loading containers: start." May 14 18:02:56.338571 kernel: Initializing XFRM netlink socket May 14 18:02:56.624021 systemd-networkd[1504]: docker0: Link UP May 14 18:02:56.633654 dockerd[1826]: time="2025-05-14T18:02:56.633576946Z" level=info msg="Loading containers: done." May 14 18:02:56.648866 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck362478982-merged.mount: Deactivated successfully. 
May 14 18:02:56.652906 dockerd[1826]: time="2025-05-14T18:02:56.652836626Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:02:56.652991 dockerd[1826]: time="2025-05-14T18:02:56.652977390Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:02:56.653183 dockerd[1826]: time="2025-05-14T18:02:56.653159412Z" level=info msg="Initializing buildkit" May 14 18:02:56.688711 dockerd[1826]: time="2025-05-14T18:02:56.688635921Z" level=info msg="Completed buildkit initialization" May 14 18:02:56.696688 dockerd[1826]: time="2025-05-14T18:02:56.696573796Z" level=info msg="Daemon has completed initialization" May 14 18:02:56.696834 dockerd[1826]: time="2025-05-14T18:02:56.696704681Z" level=info msg="API listen on /run/docker.sock" May 14 18:02:56.696926 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:02:57.748557 containerd[1598]: time="2025-05-14T18:02:57.748488278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 18:02:59.477331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277793301.mount: Deactivated successfully. 
May 14 18:03:00.998256 containerd[1598]: time="2025-05-14T18:03:00.998165844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:00.999685 containerd[1598]: time="2025-05-14T18:03:00.999638558Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 14 18:03:01.001400 containerd[1598]: time="2025-05-14T18:03:01.001359537Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:01.004304 containerd[1598]: time="2025-05-14T18:03:01.004242447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:01.005412 containerd[1598]: time="2025-05-14T18:03:01.005367187Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 3.256805372s" May 14 18:03:01.005412 containerd[1598]: time="2025-05-14T18:03:01.005405970Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 14 18:03:01.006421 containerd[1598]: time="2025-05-14T18:03:01.006378134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 18:03:03.027814 containerd[1598]: time="2025-05-14T18:03:03.027738712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:03.028599 containerd[1598]: time="2025-05-14T18:03:03.028504038Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 14 18:03:03.029933 containerd[1598]: time="2025-05-14T18:03:03.029850445Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:03.032465 containerd[1598]: time="2025-05-14T18:03:03.032420117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:03.033379 containerd[1598]: time="2025-05-14T18:03:03.033350122Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.026931251s" May 14 18:03:03.033379 containerd[1598]: time="2025-05-14T18:03:03.033377563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 14 18:03:03.033956 containerd[1598]: time="2025-05-14T18:03:03.033928337Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 18:03:03.436028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:03:03.438028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:03:03.687491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 18:03:03.691776 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:03:03.890602 kubelet[2103]: E0514 18:03:03.890508 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:03:03.897017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:03:03.897226 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:03:03.897634 systemd[1]: kubelet.service: Consumed 278ms CPU time, 105.2M memory peak. May 14 18:03:06.819187 containerd[1598]: time="2025-05-14T18:03:06.819091263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:06.853764 containerd[1598]: time="2025-05-14T18:03:06.853691949Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 14 18:03:06.858165 containerd[1598]: time="2025-05-14T18:03:06.858121090Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:06.860872 containerd[1598]: time="2025-05-14T18:03:06.860830454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:06.861740 containerd[1598]: time="2025-05-14T18:03:06.861707169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 3.827744598s" May 14 18:03:06.861789 containerd[1598]: time="2025-05-14T18:03:06.861741153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 14 18:03:06.862343 containerd[1598]: time="2025-05-14T18:03:06.862291556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 18:03:08.927690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340561657.mount: Deactivated successfully. May 14 18:03:09.284156 containerd[1598]: time="2025-05-14T18:03:09.284081651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:09.285197 containerd[1598]: time="2025-05-14T18:03:09.285155496Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 14 18:03:09.286808 containerd[1598]: time="2025-05-14T18:03:09.286736603Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:09.290599 containerd[1598]: time="2025-05-14T18:03:09.288990352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:09.290599 containerd[1598]: time="2025-05-14T18:03:09.290058827Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.427726855s" May 14 18:03:09.290599 containerd[1598]: time="2025-05-14T18:03:09.290089955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 14 18:03:09.291297 containerd[1598]: time="2025-05-14T18:03:09.291140466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 18:03:10.216184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303154336.mount: Deactivated successfully. May 14 18:03:11.451195 containerd[1598]: time="2025-05-14T18:03:11.451109804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:11.451892 containerd[1598]: time="2025-05-14T18:03:11.451846186Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 14 18:03:11.453363 containerd[1598]: time="2025-05-14T18:03:11.453317867Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:11.455942 containerd[1598]: time="2025-05-14T18:03:11.455902096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:11.456750 containerd[1598]: time="2025-05-14T18:03:11.456687280Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.16539063s" May 14 18:03:11.456750 containerd[1598]: time="2025-05-14T18:03:11.456732705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 14 18:03:11.457351 containerd[1598]: time="2025-05-14T18:03:11.457225620Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 18:03:12.188735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355136996.mount: Deactivated successfully. May 14 18:03:12.195042 containerd[1598]: time="2025-05-14T18:03:12.194983460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:03:12.195833 containerd[1598]: time="2025-05-14T18:03:12.195774073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 18:03:12.197404 containerd[1598]: time="2025-05-14T18:03:12.197344911Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:03:12.200554 containerd[1598]: time="2025-05-14T18:03:12.200306428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:03:12.201868 containerd[1598]: time="2025-05-14T18:03:12.201823765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 744.507865ms" May 14 18:03:12.201928 containerd[1598]: time="2025-05-14T18:03:12.201874690Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 18:03:12.202919 containerd[1598]: time="2025-05-14T18:03:12.202719115Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 18:03:12.759153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143337340.mount: Deactivated successfully. May 14 18:03:13.935870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 18:03:13.937479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:03:14.115421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:03:14.119280 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:03:14.160447 kubelet[2223]: E0514 18:03:14.160292 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:03:14.165191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:03:14.165435 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:03:14.165832 systemd[1]: kubelet.service: Consumed 209ms CPU time, 102.5M memory peak. 
May 14 18:03:15.807976 containerd[1598]: time="2025-05-14T18:03:15.807918052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:15.808712 containerd[1598]: time="2025-05-14T18:03:15.808671456Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 14 18:03:15.809855 containerd[1598]: time="2025-05-14T18:03:15.809819620Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:15.812545 containerd[1598]: time="2025-05-14T18:03:15.812496654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:15.813719 containerd[1598]: time="2025-05-14T18:03:15.813694050Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.610939289s" May 14 18:03:15.813788 containerd[1598]: time="2025-05-14T18:03:15.813719117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 18:03:18.250283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:03:18.250460 systemd[1]: kubelet.service: Consumed 209ms CPU time, 102.5M memory peak. May 14 18:03:18.252681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:03:18.290454 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)... 
May 14 18:03:18.290471 systemd[1]: Reloading... May 14 18:03:18.377588 zram_generator::config[2324]: No configuration found. May 14 18:03:18.632362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:03:18.749542 systemd[1]: Reloading finished in 458 ms. May 14 18:03:18.838687 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:03:18.838797 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:03:18.839117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:03:18.839166 systemd[1]: kubelet.service: Consumed 156ms CPU time, 91.8M memory peak. May 14 18:03:18.840913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:03:19.008524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:03:19.013252 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:03:19.055109 kubelet[2375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:03:19.055109 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 18:03:19.055109 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 18:03:19.055631 kubelet[2375]: I0514 18:03:19.055195 2375 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:03:19.329661 kubelet[2375]: I0514 18:03:19.329559 2375 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 18:03:19.329661 kubelet[2375]: I0514 18:03:19.329586 2375 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:03:19.329912 kubelet[2375]: I0514 18:03:19.329870 2375 server.go:954] "Client rotation is on, will bootstrap in background" May 14 18:03:19.358491 kubelet[2375]: E0514 18:03:19.358444 2375 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 14 18:03:19.359382 kubelet[2375]: I0514 18:03:19.359353 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:03:19.371347 kubelet[2375]: I0514 18:03:19.371309 2375 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:03:19.377044 kubelet[2375]: I0514 18:03:19.377007 2375 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:03:19.377327 kubelet[2375]: I0514 18:03:19.377290 2375 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:03:19.377523 kubelet[2375]: I0514 18:03:19.377320 2375 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:03:19.377672 kubelet[2375]: I0514 18:03:19.377543 2375 topology_manager.go:138] "Creating topology manager with none policy" 
May 14 18:03:19.377672 kubelet[2375]: I0514 18:03:19.377553 2375 container_manager_linux.go:304] "Creating device plugin manager" May 14 18:03:19.377730 kubelet[2375]: I0514 18:03:19.377713 2375 state_mem.go:36] "Initialized new in-memory state store" May 14 18:03:19.381061 kubelet[2375]: I0514 18:03:19.381036 2375 kubelet.go:446] "Attempting to sync node with API server" May 14 18:03:19.381061 kubelet[2375]: I0514 18:03:19.381054 2375 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:03:19.381146 kubelet[2375]: I0514 18:03:19.381081 2375 kubelet.go:352] "Adding apiserver pod source" May 14 18:03:19.381146 kubelet[2375]: I0514 18:03:19.381104 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:03:19.386055 kubelet[2375]: I0514 18:03:19.386009 2375 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:03:19.386386 kubelet[2375]: I0514 18:03:19.386361 2375 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:03:19.387709 kubelet[2375]: W0514 18:03:19.387572 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 18:03:19.387795 kubelet[2375]: W0514 18:03:19.387740 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 14 18:03:19.387839 kubelet[2375]: E0514 18:03:19.387814 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 14 18:03:19.387884 kubelet[2375]: W0514 18:03:19.387863 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 14 18:03:19.387916 kubelet[2375]: E0514 18:03:19.387895 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 14 18:03:19.389953 kubelet[2375]: I0514 18:03:19.389921 2375 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 18:03:19.390001 kubelet[2375]: I0514 18:03:19.389963 2375 server.go:1287] "Started kubelet" May 14 18:03:19.390199 kubelet[2375]: I0514 18:03:19.390141 2375 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:03:19.390321 kubelet[2375]: I0514 18:03:19.390189 2375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:03:19.391201 kubelet[2375]: I0514 18:03:19.390502 
2375 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:03:19.391312 kubelet[2375]: I0514 18:03:19.391299 2375 server.go:490] "Adding debug handlers to kubelet server" May 14 18:03:19.395159 kubelet[2375]: I0514 18:03:19.395123 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:03:19.395886 kubelet[2375]: I0514 18:03:19.395867 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:03:19.396787 kubelet[2375]: I0514 18:03:19.396758 2375 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 18:03:19.396946 kubelet[2375]: E0514 18:03:19.396928 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:03:19.397138 kubelet[2375]: I0514 18:03:19.397091 2375 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:03:19.397183 kubelet[2375]: I0514 18:03:19.397141 2375 reconciler.go:26] "Reconciler: start to sync state" May 14 18:03:19.397331 kubelet[2375]: E0514 18:03:19.397221 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" May 14 18:03:19.397331 kubelet[2375]: E0514 18:03:19.395705 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f76cc17ee3dbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:03:19.389937084 +0000 UTC m=+0.369435921,LastTimestamp:2025-05-14 18:03:19.389937084 +0000 UTC m=+0.369435921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 18:03:19.398077 kubelet[2375]: W0514 18:03:19.397996 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 14 18:03:19.398077 kubelet[2375]: E0514 18:03:19.398046 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 14 18:03:19.398543 kubelet[2375]: I0514 18:03:19.398403 2375 factory.go:221] Registration of the systemd container factory successfully May 14 18:03:19.398543 kubelet[2375]: I0514 18:03:19.398490 2375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:03:19.399016 kubelet[2375]: E0514 18:03:19.398987 2375 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:03:19.399381 kubelet[2375]: I0514 18:03:19.399358 2375 factory.go:221] Registration of the containerd container factory successfully May 14 18:03:19.415444 kubelet[2375]: I0514 18:03:19.415355 2375 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 14 18:03:19.416742 kubelet[2375]: I0514 18:03:19.416693 2375 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 18:03:19.416742 kubelet[2375]: I0514 18:03:19.416717 2375 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 18:03:19.416742 kubelet[2375]: I0514 18:03:19.416738 2375 state_mem.go:36] "Initialized new in-memory state store" May 14 18:03:19.423346 kubelet[2375]: I0514 18:03:19.423310 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:03:19.423413 kubelet[2375]: I0514 18:03:19.423357 2375 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 18:03:19.423413 kubelet[2375]: I0514 18:03:19.423381 2375 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 18:03:19.423413 kubelet[2375]: I0514 18:03:19.423390 2375 kubelet.go:2388] "Starting kubelet main sync loop" May 14 18:03:19.423475 kubelet[2375]: E0514 18:03:19.423452 2375 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:03:19.423970 kubelet[2375]: W0514 18:03:19.423945 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 14 18:03:19.424133 kubelet[2375]: E0514 18:03:19.424064 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 14 18:03:19.497501 kubelet[2375]: E0514 18:03:19.497453 2375 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"localhost\" not found" May 14 18:03:19.523671 kubelet[2375]: E0514 18:03:19.523637 2375 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 18:03:19.598190 kubelet[2375]: E0514 18:03:19.597960 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:03:19.598584 kubelet[2375]: E0514 18:03:19.598517 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" May 14 18:03:19.682320 kubelet[2375]: I0514 18:03:19.682254 2375 policy_none.go:49] "None policy: Start" May 14 18:03:19.682320 kubelet[2375]: I0514 18:03:19.682297 2375 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 18:03:19.682320 kubelet[2375]: I0514 18:03:19.682313 2375 state_mem.go:35] "Initializing new in-memory state store" May 14 18:03:19.695243 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:03:19.698183 kubelet[2375]: E0514 18:03:19.698144 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:03:19.711363 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:03:19.715331 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 18:03:19.724388 kubelet[2375]: E0514 18:03:19.724327 2375 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 14 18:03:19.725848 kubelet[2375]: I0514 18:03:19.725792 2375 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 18:03:19.726114 kubelet[2375]: I0514 18:03:19.726056 2375 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 18:03:19.726114 kubelet[2375]: I0514 18:03:19.726087 2375 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:03:19.726375 kubelet[2375]: I0514 18:03:19.726346 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:03:19.727357 kubelet[2375]: E0514 18:03:19.727329 2375 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 14 18:03:19.727416 kubelet[2375]: E0514 18:03:19.727382 2375 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 14 18:03:19.829321 kubelet[2375]: I0514 18:03:19.829281 2375 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 18:03:19.829923 kubelet[2375]: E0514 18:03:19.829870 2375 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 14 18:03:19.999957 kubelet[2375]: E0514 18:03:19.999901 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
May 14 18:03:20.032342 kubelet[2375]: I0514 18:03:20.032292 2375
kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 18:03:20.032855 kubelet[2375]: E0514 18:03:20.032792 2375 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 14 18:03:20.134221 systemd[1]: Created slice kubepods-burstable-pod4d3cde8908ddda7fab7d08406caa1a23.slice - libcontainer container kubepods-burstable-pod4d3cde8908ddda7fab7d08406caa1a23.slice.
May 14 18:03:20.152227 kubelet[2375]: E0514 18:03:20.152175 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:20.155246 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 14 18:03:20.174108 kubelet[2375]: E0514 18:03:20.174041 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:20.176870 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 14 18:03:20.178986 kubelet[2375]: E0514 18:03:20.178954 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:20.201449 kubelet[2375]: I0514 18:03:20.201390 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d3cde8908ddda7fab7d08406caa1a23-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3cde8908ddda7fab7d08406caa1a23\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:03:20.201449 kubelet[2375]: I0514 18:03:20.201450 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:03:20.201752 kubelet[2375]: I0514 18:03:20.201487 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:03:20.201752 kubelet[2375]: I0514 18:03:20.201517 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:03:20.201752 kubelet[2375]: I0514 18:03:20.201565 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName:
\"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 14 18:03:20.201752 kubelet[2375]: I0514 18:03:20.201625 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d3cde8908ddda7fab7d08406caa1a23-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3cde8908ddda7fab7d08406caa1a23\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:03:20.201752 kubelet[2375]: I0514 18:03:20.201650 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d3cde8908ddda7fab7d08406caa1a23-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d3cde8908ddda7fab7d08406caa1a23\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:03:20.201948 kubelet[2375]: I0514 18:03:20.201701 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:03:20.201948 kubelet[2375]: I0514 18:03:20.201748 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:03:20.249092 kubelet[2375]: W0514 18:03:20.249038 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get
"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 14 18:03:20.249249 kubelet[2375]: E0514 18:03:20.249101 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
May 14 18:03:20.434827 kubelet[2375]: I0514 18:03:20.434691 2375 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 18:03:20.435102 kubelet[2375]: E0514 18:03:20.435052 2375 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 14 18:03:20.453659 kubelet[2375]: E0514 18:03:20.453608 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:20.454462 containerd[1598]: time="2025-05-14T18:03:20.454400523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d3cde8908ddda7fab7d08406caa1a23,Namespace:kube-system,Attempt:0,}"
May 14 18:03:20.474828 kubelet[2375]: E0514 18:03:20.474781 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:20.475496 containerd[1598]: time="2025-05-14T18:03:20.475412551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 14 18:03:20.479755 kubelet[2375]: E0514 18:03:20.479719 2375 dns.go:153] "Nameserver limits exceeded"
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:20.480264 containerd[1598]: time="2025-05-14T18:03:20.480231123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 14 18:03:20.570793 kubelet[2375]: W0514 18:03:20.570396 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 14 18:03:20.570793 kubelet[2375]: E0514 18:03:20.570780 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
May 14 18:03:20.584726 kubelet[2375]: W0514 18:03:20.584675 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 14 18:03:20.584726 kubelet[2375]: E0514 18:03:20.584724 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
May 14 18:03:20.668195 kubelet[2375]: W0514 18:03:20.668117 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get
"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 14 18:03:20.668322 kubelet[2375]: E0514 18:03:20.668208 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
May 14 18:03:20.801255 kubelet[2375]: E0514 18:03:20.801186 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s"
May 14 18:03:20.819677 containerd[1598]: time="2025-05-14T18:03:20.818150875Z" level=info msg="connecting to shim 90d54720f1fa4e6198438c627e20fc96e5db00b0f1fbb86960d91527bbe7ef35" address="unix:///run/containerd/s/83315d1d99b696ed9e92a5bc543a75cd3ff11ac2877f3afeb927d3c76b9bcdba" namespace=k8s.io protocol=ttrpc version=3
May 14 18:03:20.840569 containerd[1598]: time="2025-05-14T18:03:20.840270771Z" level=info msg="connecting to shim 5c629efc2d2cb68c1f176ad850112ea9cfa5e03d7a8ac248f57d8e95a587c829" address="unix:///run/containerd/s/c002651079f0e53dc580c6ed19ee0c6ae4fd889af8bb3d4374df3545de6f7823" namespace=k8s.io protocol=ttrpc version=3
May 14 18:03:20.841609 containerd[1598]: time="2025-05-14T18:03:20.841507361Z" level=info msg="connecting to shim 8b060da339dce0850e453bdf4da99042f4756d8098a482aead2aa0025764dbcb" address="unix:///run/containerd/s/fa5b3079d247290fb6e0b572d9d802bb6c51eb0522ec851818323145d0f7ee7c" namespace=k8s.io protocol=ttrpc version=3
May 14 18:03:20.866749 systemd[1]: Started cri-containerd-90d54720f1fa4e6198438c627e20fc96e5db00b0f1fbb86960d91527bbe7ef35.scope - libcontainer container
90d54720f1fa4e6198438c627e20fc96e5db00b0f1fbb86960d91527bbe7ef35.
May 14 18:03:20.870881 systemd[1]: Started cri-containerd-5c629efc2d2cb68c1f176ad850112ea9cfa5e03d7a8ac248f57d8e95a587c829.scope - libcontainer container 5c629efc2d2cb68c1f176ad850112ea9cfa5e03d7a8ac248f57d8e95a587c829.
May 14 18:03:20.877251 systemd[1]: Started cri-containerd-8b060da339dce0850e453bdf4da99042f4756d8098a482aead2aa0025764dbcb.scope - libcontainer container 8b060da339dce0850e453bdf4da99042f4756d8098a482aead2aa0025764dbcb.
May 14 18:03:20.964647 containerd[1598]: time="2025-05-14T18:03:20.964600377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c629efc2d2cb68c1f176ad850112ea9cfa5e03d7a8ac248f57d8e95a587c829\""
May 14 18:03:20.965822 kubelet[2375]: E0514 18:03:20.965783 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:20.968817 containerd[1598]: time="2025-05-14T18:03:20.968775731Z" level=info msg="CreateContainer within sandbox \"5c629efc2d2cb68c1f176ad850112ea9cfa5e03d7a8ac248f57d8e95a587c829\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 18:03:20.973145 containerd[1598]: time="2025-05-14T18:03:20.973085158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b060da339dce0850e453bdf4da99042f4756d8098a482aead2aa0025764dbcb\""
May 14 18:03:20.973872 kubelet[2375]: E0514 18:03:20.973844 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:20.974468 containerd[1598]: time="2025-05-14T18:03:20.974412919Z" level=info
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d3cde8908ddda7fab7d08406caa1a23,Namespace:kube-system,Attempt:0,} returns sandbox id \"90d54720f1fa4e6198438c627e20fc96e5db00b0f1fbb86960d91527bbe7ef35\""
May 14 18:03:20.975491 kubelet[2375]: E0514 18:03:20.975437 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:20.976271 containerd[1598]: time="2025-05-14T18:03:20.976226222Z" level=info msg="CreateContainer within sandbox \"8b060da339dce0850e453bdf4da99042f4756d8098a482aead2aa0025764dbcb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 18:03:20.978194 containerd[1598]: time="2025-05-14T18:03:20.978153548Z" level=info msg="CreateContainer within sandbox \"90d54720f1fa4e6198438c627e20fc96e5db00b0f1fbb86960d91527bbe7ef35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 18:03:20.991036 containerd[1598]: time="2025-05-14T18:03:20.990970478Z" level=info msg="Container 8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:20.996017 containerd[1598]: time="2025-05-14T18:03:20.995949351Z" level=info msg="Container e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:21.004513 containerd[1598]: time="2025-05-14T18:03:21.004452697Z" level=info msg="Container 8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:21.012081 containerd[1598]: time="2025-05-14T18:03:21.012018193Z" level=info msg="CreateContainer within sandbox \"8b060da339dce0850e453bdf4da99042f4756d8098a482aead2aa0025764dbcb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f\""
May 14
18:03:21.012836 containerd[1598]: time="2025-05-14T18:03:21.012793458Z" level=info msg="StartContainer for \"e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f\""
May 14 18:03:21.013816 containerd[1598]: time="2025-05-14T18:03:21.013795147Z" level=info msg="connecting to shim e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f" address="unix:///run/containerd/s/fa5b3079d247290fb6e0b572d9d802bb6c51eb0522ec851818323145d0f7ee7c" protocol=ttrpc version=3
May 14 18:03:21.014666 containerd[1598]: time="2025-05-14T18:03:21.014622339Z" level=info msg="CreateContainer within sandbox \"5c629efc2d2cb68c1f176ad850112ea9cfa5e03d7a8ac248f57d8e95a587c829\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac\""
May 14 18:03:21.014998 containerd[1598]: time="2025-05-14T18:03:21.014955705Z" level=info msg="StartContainer for \"8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac\""
May 14 18:03:21.016009 containerd[1598]: time="2025-05-14T18:03:21.015984676Z" level=info msg="connecting to shim 8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac" address="unix:///run/containerd/s/c002651079f0e53dc580c6ed19ee0c6ae4fd889af8bb3d4374df3545de6f7823" protocol=ttrpc version=3
May 14 18:03:21.018413 containerd[1598]: time="2025-05-14T18:03:21.018383517Z" level=info msg="CreateContainer within sandbox \"90d54720f1fa4e6198438c627e20fc96e5db00b0f1fbb86960d91527bbe7ef35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3\""
May 14 18:03:21.019298 containerd[1598]: time="2025-05-14T18:03:21.019242278Z" level=info msg="StartContainer for \"8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3\""
May 14 18:03:21.020415 containerd[1598]: time="2025-05-14T18:03:21.020374102Z" level=info msg="connecting to shim
8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3" address="unix:///run/containerd/s/83315d1d99b696ed9e92a5bc543a75cd3ff11ac2877f3afeb927d3c76b9bcdba" protocol=ttrpc version=3
May 14 18:03:21.035689 systemd[1]: Started cri-containerd-8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac.scope - libcontainer container 8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac.
May 14 18:03:21.039938 systemd[1]: Started cri-containerd-e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f.scope - libcontainer container e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f.
May 14 18:03:21.046387 systemd[1]: Started cri-containerd-8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3.scope - libcontainer container 8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3.
May 14 18:03:21.102774 containerd[1598]: time="2025-05-14T18:03:21.102586498Z" level=info msg="StartContainer for \"8deb796086550ca5d16ebec763799d1f9a3aa9febdbdad936f7a18977a48acac\" returns successfully"
May 14 18:03:21.148492 containerd[1598]: time="2025-05-14T18:03:21.148439177Z" level=info msg="StartContainer for \"e50109c1e7695490fc83148498936481d4bf7b5f21a26ba812059a93a570914f\" returns successfully"
May 14 18:03:21.151543 containerd[1598]: time="2025-05-14T18:03:21.151490623Z" level=info msg="StartContainer for \"8195c181e3b4056b4830d3a2f34e6b3714ac4cc7a4fa6425acf04e78d6cdcca3\" returns successfully"
May 14 18:03:21.239146 kubelet[2375]: I0514 18:03:21.239104 2375 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 18:03:21.434008 kubelet[2375]: E0514 18:03:21.433888 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:21.434131 kubelet[2375]: E0514 18:03:21.434054 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted,
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:21.435045 kubelet[2375]: E0514 18:03:21.435008 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:21.435168 kubelet[2375]: E0514 18:03:21.435146 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:21.438358 kubelet[2375]: E0514 18:03:21.438139 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:21.438358 kubelet[2375]: E0514 18:03:21.438271 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:22.446575 kubelet[2375]: E0514 18:03:22.444920 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:22.446575 kubelet[2375]: E0514 18:03:22.445062 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:22.447877 kubelet[2375]: E0514 18:03:22.447828 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 18:03:22.448244 kubelet[2375]: E0514 18:03:22.448108 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:22.475617 kubelet[2375]: E0514 18:03:22.475003 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to
the node lease" err="nodes \"localhost\" not found" node="localhost"
May 14 18:03:22.643699 kubelet[2375]: I0514 18:03:22.643641 2375 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 14 18:03:22.643699 kubelet[2375]: E0514 18:03:22.643683 2375 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 14 18:03:22.669909 kubelet[2375]: E0514 18:03:22.669864 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:22.770876 kubelet[2375]: E0514 18:03:22.770698 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:22.870848 kubelet[2375]: E0514 18:03:22.870789 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:22.971685 kubelet[2375]: E0514 18:03:22.971603 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.072417 kubelet[2375]: E0514 18:03:23.072256 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.172970 kubelet[2375]: E0514 18:03:23.172909 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.273701 kubelet[2375]: E0514 18:03:23.273631 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.375056 kubelet[2375]: E0514 18:03:23.374843 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.443480 kubelet[2375]: E0514 18:03:23.443431 2375 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found"
node="localhost"
May 14 18:03:23.443648 kubelet[2375]: E0514 18:03:23.443579 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:23.475742 kubelet[2375]: E0514 18:03:23.475685 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.576648 kubelet[2375]: E0514 18:03:23.576596 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.677452 kubelet[2375]: E0514 18:03:23.677281 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.777450 kubelet[2375]: E0514 18:03:23.777391 2375 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:03:23.897826 kubelet[2375]: I0514 18:03:23.897768 2375 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 14 18:03:23.929560 kubelet[2375]: I0514 18:03:23.929441 2375 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 14 18:03:23.958551 kubelet[2375]: I0514 18:03:23.958504 2375 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 14 18:03:24.390659 kubelet[2375]: I0514 18:03:24.390572 2375 apiserver.go:52] "Watching apiserver"
May 14 18:03:24.393002 kubelet[2375]: E0514 18:03:24.392942 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:24.393262 kubelet[2375]: E0514 18:03:24.393128 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is:
1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:24.397898 kubelet[2375]: I0514 18:03:24.397876 2375 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 18:03:24.443924 kubelet[2375]: E0514 18:03:24.443887 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:24.962023 systemd[1]: Reload requested from client PID 2653 ('systemctl') (unit session-7.scope)...
May 14 18:03:24.962048 systemd[1]: Reloading...
May 14 18:03:25.044571 zram_generator::config[2699]: No configuration found.
May 14 18:03:25.154418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:03:25.300455 systemd[1]: Reloading finished in 338 ms.
May 14 18:03:25.338937 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:03:25.362855 systemd[1]: kubelet.service: Deactivated successfully.
May 14 18:03:25.363202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:03:25.363261 systemd[1]: kubelet.service: Consumed 878ms CPU time, 124.3M memory peak.
May 14 18:03:25.365283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:03:25.583788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:03:25.593882 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 18:03:25.643406 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:03:25.643406 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 18:03:25.643406 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:03:25.643811 kubelet[2741]: I0514 18:03:25.643547 2741 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 18:03:25.650193 kubelet[2741]: I0514 18:03:25.650148 2741 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 18:03:25.650193 kubelet[2741]: I0514 18:03:25.650171 2741 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 18:03:25.650487 kubelet[2741]: I0514 18:03:25.650460 2741 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 18:03:25.651755 kubelet[2741]: I0514 18:03:25.651720 2741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 18:03:25.655920 kubelet[2741]: I0514 18:03:25.655812 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:03:25.660726 kubelet[2741]: I0514 18:03:25.660693 2741 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 18:03:25.665667 kubelet[2741]: I0514 18:03:25.665624 2741 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
May 14 18:03:25.665898 kubelet[2741]: I0514 18:03:25.665858 2741 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 18:03:25.666098 kubelet[2741]: I0514 18:03:25.665895 2741 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 18:03:25.666171 kubelet[2741]: I0514 18:03:25.666097 2741 topology_manager.go:138] "Creating topology manager with none policy"
May 14 18:03:25.666171 kubelet[2741]: I0514 18:03:25.666108 2741 container_manager_linux.go:304] "Creating device plugin manager" May 14 18:03:25.666171 kubelet[2741]: I0514 18:03:25.666152 2741 state_mem.go:36] "Initialized new in-memory state store" May 14 18:03:25.666323 kubelet[2741]: I0514 18:03:25.666318 2741 kubelet.go:446] "Attempting to sync node with API server" May 14 18:03:25.666344 kubelet[2741]: I0514 18:03:25.666330 2741 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:03:25.667100 kubelet[2741]: I0514 18:03:25.666783 2741 kubelet.go:352] "Adding apiserver pod source" May 14 18:03:25.667100 kubelet[2741]: I0514 18:03:25.666800 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:03:25.667427 kubelet[2741]: I0514 18:03:25.667413 2741 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:03:25.667822 kubelet[2741]: I0514 18:03:25.667803 2741 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:03:25.668217 kubelet[2741]: I0514 18:03:25.668192 2741 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 18:03:25.668217 kubelet[2741]: I0514 18:03:25.668219 2741 server.go:1287] "Started kubelet" May 14 18:03:25.669747 kubelet[2741]: I0514 18:03:25.669727 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:03:25.671546 kubelet[2741]: I0514 18:03:25.669964 2741 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:03:25.671546 kubelet[2741]: I0514 18:03:25.671091 2741 server.go:490] "Adding debug handlers to kubelet server" May 14 18:03:25.671641 kubelet[2741]: I0514 18:03:25.671625 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:03:25.674651 kubelet[2741]: I0514 18:03:25.674446 2741 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:03:25.675511 kubelet[2741]: I0514 18:03:25.675497 2741 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 18:03:25.676815 kubelet[2741]: E0514 18:03:25.676799 2741 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:03:25.678207 kubelet[2741]: I0514 18:03:25.678195 2741 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:03:25.678267 kubelet[2741]: I0514 18:03:25.675560 2741 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:03:25.678481 kubelet[2741]: I0514 18:03:25.678469 2741 reconciler.go:26] "Reconciler: start to sync state" May 14 18:03:25.683402 kubelet[2741]: E0514 18:03:25.683375 2741 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:03:25.685320 kubelet[2741]: I0514 18:03:25.685285 2741 factory.go:221] Registration of the containerd container factory successfully May 14 18:03:25.685428 kubelet[2741]: I0514 18:03:25.685412 2741 factory.go:221] Registration of the systemd container factory successfully May 14 18:03:25.685652 kubelet[2741]: I0514 18:03:25.685629 2741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:03:25.708093 kubelet[2741]: I0514 18:03:25.708038 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:03:25.713990 kubelet[2741]: I0514 18:03:25.713939 2741 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:03:25.714110 kubelet[2741]: I0514 18:03:25.714005 2741 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 18:03:25.714110 kubelet[2741]: I0514 18:03:25.714037 2741 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 18:03:25.714110 kubelet[2741]: I0514 18:03:25.714057 2741 kubelet.go:2388] "Starting kubelet main sync loop" May 14 18:03:25.714197 kubelet[2741]: E0514 18:03:25.714142 2741 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:03:25.734057 kubelet[2741]: I0514 18:03:25.734011 2741 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 18:03:25.734057 kubelet[2741]: I0514 18:03:25.734029 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 18:03:25.734057 kubelet[2741]: I0514 18:03:25.734061 2741 state_mem.go:36] "Initialized new in-memory state store" May 14 18:03:25.734300 kubelet[2741]: I0514 18:03:25.734254 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:03:25.734300 kubelet[2741]: I0514 18:03:25.734281 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:03:25.734300 kubelet[2741]: I0514 18:03:25.734309 2741 policy_none.go:49] "None policy: Start" May 14 18:03:25.734511 kubelet[2741]: I0514 18:03:25.734320 2741 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 18:03:25.734511 kubelet[2741]: I0514 18:03:25.734335 2741 state_mem.go:35] "Initializing new in-memory state store" May 14 18:03:25.734511 kubelet[2741]: I0514 18:03:25.734458 2741 state_mem.go:75] "Updated machine memory state" May 14 18:03:25.740328 kubelet[2741]: I0514 18:03:25.740287 2741 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:03:25.740551 kubelet[2741]: I0514 
18:03:25.740496 2741 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:03:25.740551 kubelet[2741]: I0514 18:03:25.740511 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:03:25.741405 kubelet[2741]: I0514 18:03:25.740743 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:03:25.741461 kubelet[2741]: E0514 18:03:25.741434 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 18:03:25.815813 kubelet[2741]: I0514 18:03:25.815764 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 14 18:03:25.816314 kubelet[2741]: I0514 18:03:25.816248 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 18:03:25.816314 kubelet[2741]: I0514 18:03:25.816283 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 18:03:25.824808 kubelet[2741]: E0514 18:03:25.824769 2741 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 18:03:25.825285 kubelet[2741]: E0514 18:03:25.825236 2741 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:03:25.825440 kubelet[2741]: E0514 18:03:25.825381 2741 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 18:03:25.846794 kubelet[2741]: I0514 18:03:25.846688 2741 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 18:03:25.855965 kubelet[2741]: I0514 18:03:25.855933 2741 kubelet_node_status.go:125] "Node was 
previously registered" node="localhost" May 14 18:03:25.856117 kubelet[2741]: I0514 18:03:25.856029 2741 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 14 18:03:25.963614 sudo[2777]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 18:03:25.963980 sudo[2777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 18:03:25.980130 kubelet[2741]: I0514 18:03:25.980079 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d3cde8908ddda7fab7d08406caa1a23-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3cde8908ddda7fab7d08406caa1a23\") " pod="kube-system/kube-apiserver-localhost" May 14 18:03:25.980130 kubelet[2741]: I0514 18:03:25.980134 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:03:25.980479 kubelet[2741]: I0514 18:03:25.980167 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:03:25.980479 kubelet[2741]: I0514 18:03:25.980202 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 18:03:25.980479 kubelet[2741]: I0514 18:03:25.980228 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 18:03:25.980479 kubelet[2741]: I0514 18:03:25.980257 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d3cde8908ddda7fab7d08406caa1a23-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d3cde8908ddda7fab7d08406caa1a23\") " pod="kube-system/kube-apiserver-localhost" May 14 18:03:25.980479 kubelet[2741]: I0514 18:03:25.980287 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d3cde8908ddda7fab7d08406caa1a23-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d3cde8908ddda7fab7d08406caa1a23\") " pod="kube-system/kube-apiserver-localhost" May 14 18:03:25.980677 kubelet[2741]: I0514 18:03:25.980312 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:03:25.980677 kubelet[2741]: I0514 18:03:25.980333 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 18:03:26.126869 kubelet[2741]: E0514 18:03:26.125850 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:26.126869 kubelet[2741]: E0514 18:03:26.125899 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:26.126869 kubelet[2741]: E0514 18:03:26.126104 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:26.484643 sudo[2777]: pam_unix(sudo:session): session closed for user root May 14 18:03:26.667590 kubelet[2741]: I0514 18:03:26.667512 2741 apiserver.go:52] "Watching apiserver" May 14 18:03:26.678650 kubelet[2741]: I0514 18:03:26.678614 2741 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 18:03:26.727262 kubelet[2741]: I0514 18:03:26.727220 2741 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 18:03:26.727575 kubelet[2741]: E0514 18:03:26.727554 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:26.728705 kubelet[2741]: E0514 18:03:26.728660 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:26.733466 kubelet[2741]: E0514 18:03:26.733367 2741 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 18:03:26.733566 kubelet[2741]: 
E0514 18:03:26.733516 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:26.749222 kubelet[2741]: I0514 18:03:26.748799 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.748772355 podStartE2EDuration="3.748772355s" podCreationTimestamp="2025-05-14 18:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:26.747765961 +0000 UTC m=+1.149937120" watchObservedRunningTime="2025-05-14 18:03:26.748772355 +0000 UTC m=+1.150943504" May 14 18:03:26.763872 kubelet[2741]: I0514 18:03:26.763259 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.763241452 podStartE2EDuration="3.763241452s" podCreationTimestamp="2025-05-14 18:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:26.755043577 +0000 UTC m=+1.157214726" watchObservedRunningTime="2025-05-14 18:03:26.763241452 +0000 UTC m=+1.165412591" May 14 18:03:26.763872 kubelet[2741]: I0514 18:03:26.763323 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.763319751 podStartE2EDuration="3.763319751s" podCreationTimestamp="2025-05-14 18:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:26.763201425 +0000 UTC m=+1.165372584" watchObservedRunningTime="2025-05-14 18:03:26.763319751 +0000 UTC m=+1.165490890" May 14 18:03:27.729200 kubelet[2741]: E0514 18:03:27.729165 2741 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:27.729743 kubelet[2741]: E0514 18:03:27.729404 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:27.974448 sudo[1806]: pam_unix(sudo:session): session closed for user root May 14 18:03:27.976615 sshd[1805]: Connection closed by 10.0.0.1 port 57300 May 14 18:03:27.977208 sshd-session[1803]: pam_unix(sshd:session): session closed for user core May 14 18:03:27.981746 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:57300.service: Deactivated successfully. May 14 18:03:27.983998 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:03:27.984250 systemd[1]: session-7.scope: Consumed 5.066s CPU time, 265.1M memory peak. May 14 18:03:27.985651 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. May 14 18:03:27.987505 systemd-logind[1577]: Removed session 7. May 14 18:03:28.498297 kubelet[2741]: E0514 18:03:28.498250 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:28.730658 kubelet[2741]: E0514 18:03:28.730628 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:31.602372 kubelet[2741]: I0514 18:03:31.602334 2741 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:03:31.602811 containerd[1598]: time="2025-05-14T18:03:31.602673580Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 18:03:31.603049 kubelet[2741]: I0514 18:03:31.602851 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:03:32.486599 systemd[1]: Created slice kubepods-besteffort-podc4b7578d_fb69_410f_8fd2_83918d16a89f.slice - libcontainer container kubepods-besteffort-podc4b7578d_fb69_410f_8fd2_83918d16a89f.slice. May 14 18:03:32.493554 kubelet[2741]: E0514 18:03:32.491352 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:32.517236 systemd[1]: Created slice kubepods-burstable-podfb6ba410_9588_4f10_acae_5d598473137a.slice - libcontainer container kubepods-burstable-podfb6ba410_9588_4f10_acae_5d598473137a.slice. May 14 18:03:32.521422 kubelet[2741]: I0514 18:03:32.521360 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8ft\" (UniqueName: \"kubernetes.io/projected/c4b7578d-fb69-410f-8fd2-83918d16a89f-kube-api-access-4j8ft\") pod \"kube-proxy-bwssm\" (UID: \"c4b7578d-fb69-410f-8fd2-83918d16a89f\") " pod="kube-system/kube-proxy-bwssm" May 14 18:03:32.521608 kubelet[2741]: I0514 18:03:32.521424 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx57d\" (UniqueName: \"kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-kube-api-access-fx57d\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521608 kubelet[2741]: I0514 18:03:32.521469 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4b7578d-fb69-410f-8fd2-83918d16a89f-xtables-lock\") pod \"kube-proxy-bwssm\" (UID: \"c4b7578d-fb69-410f-8fd2-83918d16a89f\") " pod="kube-system/kube-proxy-bwssm" May 14 18:03:32.521608 
kubelet[2741]: I0514 18:03:32.521490 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-run\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521608 kubelet[2741]: I0514 18:03:32.521550 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-hostproc\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521608 kubelet[2741]: I0514 18:03:32.521577 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb6ba410-9588-4f10-acae-5d598473137a-cilium-config-path\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521608 kubelet[2741]: I0514 18:03:32.521599 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-lib-modules\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521755 kubelet[2741]: I0514 18:03:32.521623 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb6ba410-9588-4f10-acae-5d598473137a-clustermesh-secrets\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521755 kubelet[2741]: I0514 18:03:32.521641 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-etc-cni-netd\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521755 kubelet[2741]: I0514 18:03:32.521657 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4b7578d-fb69-410f-8fd2-83918d16a89f-kube-proxy\") pod \"kube-proxy-bwssm\" (UID: \"c4b7578d-fb69-410f-8fd2-83918d16a89f\") " pod="kube-system/kube-proxy-bwssm" May 14 18:03:32.521755 kubelet[2741]: I0514 18:03:32.521681 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-cgroup\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521755 kubelet[2741]: I0514 18:03:32.521695 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-hubble-tls\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521755 kubelet[2741]: I0514 18:03:32.521712 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cni-path\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521892 kubelet[2741]: I0514 18:03:32.521727 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-xtables-lock\") pod \"cilium-zcpjg\" (UID: 
\"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521892 kubelet[2741]: I0514 18:03:32.521741 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-kernel\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521892 kubelet[2741]: I0514 18:03:32.521761 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-net\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.521892 kubelet[2741]: I0514 18:03:32.521783 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4b7578d-fb69-410f-8fd2-83918d16a89f-lib-modules\") pod \"kube-proxy-bwssm\" (UID: \"c4b7578d-fb69-410f-8fd2-83918d16a89f\") " pod="kube-system/kube-proxy-bwssm" May 14 18:03:32.521892 kubelet[2741]: I0514 18:03:32.521802 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-bpf-maps\") pod \"cilium-zcpjg\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " pod="kube-system/cilium-zcpjg" May 14 18:03:32.568612 systemd[1]: Created slice kubepods-besteffort-poda13ab8f9_5a6b_455e_b246_20d18ce2987a.slice - libcontainer container kubepods-besteffort-poda13ab8f9_5a6b_455e_b246_20d18ce2987a.slice. 
May 14 18:03:32.624186 kubelet[2741]: I0514 18:03:32.622907 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49d7\" (UniqueName: \"kubernetes.io/projected/a13ab8f9-5a6b-455e-b246-20d18ce2987a-kube-api-access-n49d7\") pod \"cilium-operator-6c4d7847fc-txt9x\" (UID: \"a13ab8f9-5a6b-455e-b246-20d18ce2987a\") " pod="kube-system/cilium-operator-6c4d7847fc-txt9x" May 14 18:03:32.624186 kubelet[2741]: I0514 18:03:32.622962 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a13ab8f9-5a6b-455e-b246-20d18ce2987a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-txt9x\" (UID: \"a13ab8f9-5a6b-455e-b246-20d18ce2987a\") " pod="kube-system/cilium-operator-6c4d7847fc-txt9x" May 14 18:03:32.738149 kubelet[2741]: E0514 18:03:32.738037 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:32.802872 kubelet[2741]: E0514 18:03:32.802831 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:32.803719 containerd[1598]: time="2025-05-14T18:03:32.803670756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwssm,Uid:c4b7578d-fb69-410f-8fd2-83918d16a89f,Namespace:kube-system,Attempt:0,}" May 14 18:03:32.822145 kubelet[2741]: E0514 18:03:32.822087 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:32.822714 containerd[1598]: time="2025-05-14T18:03:32.822674569Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-zcpjg,Uid:fb6ba410-9588-4f10-acae-5d598473137a,Namespace:kube-system,Attempt:0,}" May 14 18:03:32.829028 containerd[1598]: time="2025-05-14T18:03:32.828888944Z" level=info msg="connecting to shim 3477aff2320de9acf9775277b83c18de4056a6c58be814324d2b97b551c3f0a7" address="unix:///run/containerd/s/cb16fbf2e7f5650535c156b9c2734c99cbfd7391ca2b5f0c848ea5f6df78e7f9" namespace=k8s.io protocol=ttrpc version=3 May 14 18:03:32.846507 containerd[1598]: time="2025-05-14T18:03:32.846438955Z" level=info msg="connecting to shim 17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c" address="unix:///run/containerd/s/134182fcb278e11e2f291f9a8b955c49d7975f47cc5121b31f4fd58c826ba612" namespace=k8s.io protocol=ttrpc version=3 May 14 18:03:32.865850 systemd[1]: Started cri-containerd-3477aff2320de9acf9775277b83c18de4056a6c58be814324d2b97b551c3f0a7.scope - libcontainer container 3477aff2320de9acf9775277b83c18de4056a6c58be814324d2b97b551c3f0a7. May 14 18:03:32.870319 systemd[1]: Started cri-containerd-17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c.scope - libcontainer container 17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c. 
May 14 18:03:32.873792 kubelet[2741]: E0514 18:03:32.873731 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:32.876016 containerd[1598]: time="2025-05-14T18:03:32.875963234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-txt9x,Uid:a13ab8f9-5a6b-455e-b246-20d18ce2987a,Namespace:kube-system,Attempt:0,}"
May 14 18:03:32.909086 containerd[1598]: time="2025-05-14T18:03:32.909033600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zcpjg,Uid:fb6ba410-9588-4f10-acae-5d598473137a,Namespace:kube-system,Attempt:0,} returns sandbox id \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\""
May 14 18:03:32.910032 kubelet[2741]: E0514 18:03:32.909998 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:32.914541 containerd[1598]: time="2025-05-14T18:03:32.914475357Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 18:03:32.924880 containerd[1598]: time="2025-05-14T18:03:32.924456429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwssm,Uid:c4b7578d-fb69-410f-8fd2-83918d16a89f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3477aff2320de9acf9775277b83c18de4056a6c58be814324d2b97b551c3f0a7\""
May 14 18:03:32.926566 containerd[1598]: time="2025-05-14T18:03:32.926483781Z" level=info msg="connecting to shim e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d" address="unix:///run/containerd/s/9cb7430341ead519f2854e51f54a2e13dc556d9cd8f951f36b743c6c30915ec5" namespace=k8s.io protocol=ttrpc version=3
May 14 18:03:32.926930 kubelet[2741]: E0514 18:03:32.925914 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:32.929858 containerd[1598]: time="2025-05-14T18:03:32.929816762Z" level=info msg="CreateContainer within sandbox \"3477aff2320de9acf9775277b83c18de4056a6c58be814324d2b97b551c3f0a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 18:03:32.942892 containerd[1598]: time="2025-05-14T18:03:32.942832970Z" level=info msg="Container 18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:32.960745 systemd[1]: Started cri-containerd-e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d.scope - libcontainer container e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d.
May 14 18:03:32.960915 containerd[1598]: time="2025-05-14T18:03:32.960822856Z" level=info msg="CreateContainer within sandbox \"3477aff2320de9acf9775277b83c18de4056a6c58be814324d2b97b551c3f0a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382\""
May 14 18:03:32.961576 containerd[1598]: time="2025-05-14T18:03:32.961523767Z" level=info msg="StartContainer for \"18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382\""
May 14 18:03:32.965312 containerd[1598]: time="2025-05-14T18:03:32.965258974Z" level=info msg="connecting to shim 18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382" address="unix:///run/containerd/s/cb16fbf2e7f5650535c156b9c2734c99cbfd7391ca2b5f0c848ea5f6df78e7f9" protocol=ttrpc version=3
May 14 18:03:32.995700 systemd[1]: Started cri-containerd-18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382.scope - libcontainer container 18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382.
May 14 18:03:33.019973 containerd[1598]: time="2025-05-14T18:03:33.019909429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-txt9x,Uid:a13ab8f9-5a6b-455e-b246-20d18ce2987a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\""
May 14 18:03:33.021052 kubelet[2741]: E0514 18:03:33.021009 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:33.094723 containerd[1598]: time="2025-05-14T18:03:33.094684645Z" level=info msg="StartContainer for \"18d10f66859955c57a22e6065aaeaa138c0f5d374b4314594eaa49eff8066382\" returns successfully"
May 14 18:03:33.745404 kubelet[2741]: E0514 18:03:33.745368 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:33.745404 kubelet[2741]: E0514 18:03:33.745462 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:33.787241 kubelet[2741]: I0514 18:03:33.787025 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bwssm" podStartSLOduration=1.787007505 podStartE2EDuration="1.787007505s" podCreationTimestamp="2025-05-14 18:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:33.786844185 +0000 UTC m=+8.189015314" watchObservedRunningTime="2025-05-14 18:03:33.787007505 +0000 UTC m=+8.189178644"
May 14 18:03:36.200601 update_engine[1578]: I20250514 18:03:36.200471 1578 update_attempter.cc:509] Updating boot flags...
May 14 18:03:37.367990 kubelet[2741]: E0514 18:03:37.367953 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:37.752677 kubelet[2741]: E0514 18:03:37.752640 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:38.503918 kubelet[2741]: E0514 18:03:38.503517 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:39.481191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176540545.mount: Deactivated successfully.
May 14 18:03:43.507071 containerd[1598]: time="2025-05-14T18:03:43.506956648Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:03:43.509478 containerd[1598]: time="2025-05-14T18:03:43.509439596Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 14 18:03:43.511283 containerd[1598]: time="2025-05-14T18:03:43.511251167Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:03:43.512951 containerd[1598]: time="2025-05-14T18:03:43.512888357Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.598364888s"
May 14 18:03:43.512951 containerd[1598]: time="2025-05-14T18:03:43.512929585Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 14 18:03:43.515977 containerd[1598]: time="2025-05-14T18:03:43.515922315Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 18:03:43.516076 containerd[1598]: time="2025-05-14T18:03:43.515995463Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:03:43.530319 containerd[1598]: time="2025-05-14T18:03:43.530261987Z" level=info msg="Container fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:43.545805 containerd[1598]: time="2025-05-14T18:03:43.545744466Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\""
May 14 18:03:43.546634 containerd[1598]: time="2025-05-14T18:03:43.546260941Z" level=info msg="StartContainer for \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\""
May 14 18:03:43.549606 containerd[1598]: time="2025-05-14T18:03:43.549560410Z" level=info msg="connecting to shim fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f" address="unix:///run/containerd/s/134182fcb278e11e2f291f9a8b955c49d7975f47cc5121b31f4fd58c826ba612" protocol=ttrpc version=3
May 14 18:03:43.605722 systemd[1]: Started cri-containerd-fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f.scope - libcontainer container fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f.
May 14 18:03:43.777512 systemd[1]: cri-containerd-fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f.scope: Deactivated successfully.
May 14 18:03:43.777989 systemd[1]: cri-containerd-fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f.scope: Consumed 29ms CPU time, 6.7M memory peak, 40K read from disk, 3.2M written to disk.
May 14 18:03:43.781699 containerd[1598]: time="2025-05-14T18:03:43.781646965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" id:\"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" pid:3176 exited_at:{seconds:1747245823 nanos:780960940}"
May 14 18:03:43.810705 containerd[1598]: time="2025-05-14T18:03:43.810645802Z" level=info msg="received exit event container_id:\"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" id:\"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" pid:3176 exited_at:{seconds:1747245823 nanos:780960940}"
May 14 18:03:43.811677 containerd[1598]: time="2025-05-14T18:03:43.811575858Z" level=info msg="StartContainer for \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" returns successfully"
May 14 18:03:43.837911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f-rootfs.mount: Deactivated successfully.
May 14 18:03:44.860755 kubelet[2741]: E0514 18:03:44.860675 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:45.726972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768571431.mount: Deactivated successfully.
May 14 18:03:45.865270 kubelet[2741]: E0514 18:03:45.865238 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:45.868828 containerd[1598]: time="2025-05-14T18:03:45.868780844Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:03:45.892934 containerd[1598]: time="2025-05-14T18:03:45.892852415Z" level=info msg="Container b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:45.894612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544827196.mount: Deactivated successfully.
May 14 18:03:45.901592 containerd[1598]: time="2025-05-14T18:03:45.901513183Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\""
May 14 18:03:45.902190 containerd[1598]: time="2025-05-14T18:03:45.902158641Z" level=info msg="StartContainer for \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\""
May 14 18:03:45.903478 containerd[1598]: time="2025-05-14T18:03:45.903440710Z" level=info msg="connecting to shim b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad" address="unix:///run/containerd/s/134182fcb278e11e2f291f9a8b955c49d7975f47cc5121b31f4fd58c826ba612" protocol=ttrpc version=3
May 14 18:03:45.934747 systemd[1]: Started cri-containerd-b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad.scope - libcontainer container b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad.
May 14 18:03:46.035181 containerd[1598]: time="2025-05-14T18:03:46.035052795Z" level=info msg="StartContainer for \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" returns successfully"
May 14 18:03:46.048309 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:03:46.048645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:03:46.052253 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 18:03:46.054380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:03:46.060664 systemd[1]: cri-containerd-b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad.scope: Deactivated successfully.
May 14 18:03:46.061340 containerd[1598]: time="2025-05-14T18:03:46.060759775Z" level=info msg="received exit event container_id:\"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" id:\"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" pid:3234 exited_at:{seconds:1747245826 nanos:60446875}"
May 14 18:03:46.061340 containerd[1598]: time="2025-05-14T18:03:46.061016850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" id:\"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" pid:3234 exited_at:{seconds:1747245826 nanos:60446875}"
May 14 18:03:46.061080 systemd[1]: cri-containerd-b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad.scope: Consumed 32ms CPU time, 7.7M memory peak, 256K read from disk, 2.2M written to disk.
May 14 18:03:46.091220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:03:46.238745 containerd[1598]: time="2025-05-14T18:03:46.238672326Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:03:46.239703 containerd[1598]: time="2025-05-14T18:03:46.239648457Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 14 18:03:46.241080 containerd[1598]: time="2025-05-14T18:03:46.241046464Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:03:46.242422 containerd[1598]: time="2025-05-14T18:03:46.242391419Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.726433818s"
May 14 18:03:46.242483 containerd[1598]: time="2025-05-14T18:03:46.242424903Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 14 18:03:46.244419 containerd[1598]: time="2025-05-14T18:03:46.244369771Z" level=info msg="CreateContainer within sandbox \"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 18:03:46.256736 containerd[1598]: time="2025-05-14T18:03:46.256672212Z" level=info msg="Container 64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:46.265928 containerd[1598]: time="2025-05-14T18:03:46.265876851Z" level=info msg="CreateContainer within sandbox \"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\""
May 14 18:03:46.266625 containerd[1598]: time="2025-05-14T18:03:46.266574556Z" level=info msg="StartContainer for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\""
May 14 18:03:46.267814 containerd[1598]: time="2025-05-14T18:03:46.267725857Z" level=info msg="connecting to shim 64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2" address="unix:///run/containerd/s/9cb7430341ead519f2854e51f54a2e13dc556d9cd8f951f36b743c6c30915ec5" protocol=ttrpc version=3
May 14 18:03:46.295802 systemd[1]: Started cri-containerd-64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2.scope - libcontainer container 64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2.
May 14 18:03:46.331728 containerd[1598]: time="2025-05-14T18:03:46.331675780Z" level=info msg="StartContainer for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" returns successfully"
May 14 18:03:46.727947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad-rootfs.mount: Deactivated successfully.
May 14 18:03:46.869269 kubelet[2741]: E0514 18:03:46.869221 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:46.873055 kubelet[2741]: E0514 18:03:46.873020 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:46.875130 containerd[1598]: time="2025-05-14T18:03:46.875048232Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:03:46.911930 containerd[1598]: time="2025-05-14T18:03:46.911372035Z" level=info msg="Container a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:46.937176 containerd[1598]: time="2025-05-14T18:03:46.937124351Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\""
May 14 18:03:46.937322 kubelet[2741]: I0514 18:03:46.937144 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-txt9x" podStartSLOduration=1.715854449 podStartE2EDuration="14.937122016s" podCreationTimestamp="2025-05-14 18:03:32 +0000 UTC" firstStartedPulling="2025-05-14 18:03:33.021857877 +0000 UTC m=+7.424029026" lastFinishedPulling="2025-05-14 18:03:46.243125454 +0000 UTC m=+20.645296593" observedRunningTime="2025-05-14 18:03:46.898717399 +0000 UTC m=+21.300888548" watchObservedRunningTime="2025-05-14 18:03:46.937122016 +0000 UTC m=+21.339293155"
May 14 18:03:46.939361 containerd[1598]: time="2025-05-14T18:03:46.939330481Z" level=info msg="StartContainer for \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\""
May 14 18:03:46.941097 containerd[1598]: time="2025-05-14T18:03:46.941066415Z" level=info msg="connecting to shim a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e" address="unix:///run/containerd/s/134182fcb278e11e2f291f9a8b955c49d7975f47cc5121b31f4fd58c826ba612" protocol=ttrpc version=3
May 14 18:03:46.982737 systemd[1]: Started cri-containerd-a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e.scope - libcontainer container a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e.
May 14 18:03:47.046255 systemd[1]: cri-containerd-a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e.scope: Deactivated successfully.
May 14 18:03:47.048972 containerd[1598]: time="2025-05-14T18:03:47.048900779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" id:\"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" pid:3319 exited_at:{seconds:1747245827 nanos:48393713}"
May 14 18:03:47.049097 containerd[1598]: time="2025-05-14T18:03:47.049011397Z" level=info msg="received exit event container_id:\"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" id:\"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" pid:3319 exited_at:{seconds:1747245827 nanos:48393713}"
May 14 18:03:47.049796 containerd[1598]: time="2025-05-14T18:03:47.049764817Z" level=info msg="StartContainer for \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" returns successfully"
May 14 18:03:47.723039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e-rootfs.mount: Deactivated successfully.
May 14 18:03:47.878692 kubelet[2741]: E0514 18:03:47.878445 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:47.878692 kubelet[2741]: E0514 18:03:47.878473 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:47.880010 containerd[1598]: time="2025-05-14T18:03:47.879962097Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:03:47.893667 containerd[1598]: time="2025-05-14T18:03:47.893596061Z" level=info msg="Container 5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:47.897149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950072814.mount: Deactivated successfully.
May 14 18:03:47.907996 containerd[1598]: time="2025-05-14T18:03:47.907931467Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\""
May 14 18:03:47.909144 containerd[1598]: time="2025-05-14T18:03:47.909079391Z" level=info msg="StartContainer for \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\""
May 14 18:03:47.910356 containerd[1598]: time="2025-05-14T18:03:47.910323457Z" level=info msg="connecting to shim 5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe" address="unix:///run/containerd/s/134182fcb278e11e2f291f9a8b955c49d7975f47cc5121b31f4fd58c826ba612" protocol=ttrpc version=3
May 14 18:03:47.934741 systemd[1]: Started cri-containerd-5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe.scope - libcontainer container 5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe.
May 14 18:03:47.963361 systemd[1]: cri-containerd-5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe.scope: Deactivated successfully.
May 14 18:03:47.964403 containerd[1598]: time="2025-05-14T18:03:47.964199278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" id:\"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" pid:3358 exited_at:{seconds:1747245827 nanos:963979053}"
May 14 18:03:47.967602 containerd[1598]: time="2025-05-14T18:03:47.967559072Z" level=info msg="received exit event container_id:\"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" id:\"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" pid:3358 exited_at:{seconds:1747245827 nanos:963979053}"
May 14 18:03:47.977442 containerd[1598]: time="2025-05-14T18:03:47.977300859Z" level=info msg="StartContainer for \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" returns successfully"
May 14 18:03:47.994176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe-rootfs.mount: Deactivated successfully.
May 14 18:03:48.883824 kubelet[2741]: E0514 18:03:48.883787 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:48.886684 containerd[1598]: time="2025-05-14T18:03:48.886628761Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:03:49.005074 containerd[1598]: time="2025-05-14T18:03:49.005007420Z" level=info msg="Container e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870: CDI devices from CRI Config.CDIDevices: []"
May 14 18:03:49.119426 containerd[1598]: time="2025-05-14T18:03:49.119359705Z" level=info msg="CreateContainer within sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\""
May 14 18:03:49.120170 containerd[1598]: time="2025-05-14T18:03:49.120002145Z" level=info msg="StartContainer for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\""
May 14 18:03:49.121224 containerd[1598]: time="2025-05-14T18:03:49.121183492Z" level=info msg="connecting to shim e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870" address="unix:///run/containerd/s/134182fcb278e11e2f291f9a8b955c49d7975f47cc5121b31f4fd58c826ba612" protocol=ttrpc version=3
May 14 18:03:49.145748 systemd[1]: Started cri-containerd-e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870.scope - libcontainer container e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870.
May 14 18:03:49.214020 containerd[1598]: time="2025-05-14T18:03:49.213963538Z" level=info msg="StartContainer for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" returns successfully"
May 14 18:03:49.288577 containerd[1598]: time="2025-05-14T18:03:49.288503869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" id:\"3c9a019c970e475d1721736b2f03d012801230248f50a61d73f9d281591ec4f2\" pid:3426 exited_at:{seconds:1747245829 nanos:287905251}"
May 14 18:03:49.299830 kubelet[2741]: I0514 18:03:49.299785 2741 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 14 18:03:49.343283 systemd[1]: Created slice kubepods-burstable-pod3325e581_3b17_4425_a7bd_1dd37784c6f5.slice - libcontainer container kubepods-burstable-pod3325e581_3b17_4425_a7bd_1dd37784c6f5.slice.
May 14 18:03:49.350670 systemd[1]: Created slice kubepods-burstable-pod3a57086d_2d63_4590_aad1_4953c44abb9c.slice - libcontainer container kubepods-burstable-pod3a57086d_2d63_4590_aad1_4953c44abb9c.slice.
May 14 18:03:49.436205 kubelet[2741]: I0514 18:03:49.436166 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a57086d-2d63-4590-aad1-4953c44abb9c-config-volume\") pod \"coredns-668d6bf9bc-sc2p7\" (UID: \"3a57086d-2d63-4590-aad1-4953c44abb9c\") " pod="kube-system/coredns-668d6bf9bc-sc2p7"
May 14 18:03:49.436361 kubelet[2741]: I0514 18:03:49.436228 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59nf\" (UniqueName: \"kubernetes.io/projected/3325e581-3b17-4425-a7bd-1dd37784c6f5-kube-api-access-h59nf\") pod \"coredns-668d6bf9bc-f6x42\" (UID: \"3325e581-3b17-4425-a7bd-1dd37784c6f5\") " pod="kube-system/coredns-668d6bf9bc-f6x42"
May 14 18:03:49.436361 kubelet[2741]: I0514 18:03:49.436270 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ptm\" (UniqueName: \"kubernetes.io/projected/3a57086d-2d63-4590-aad1-4953c44abb9c-kube-api-access-45ptm\") pod \"coredns-668d6bf9bc-sc2p7\" (UID: \"3a57086d-2d63-4590-aad1-4953c44abb9c\") " pod="kube-system/coredns-668d6bf9bc-sc2p7"
May 14 18:03:49.436361 kubelet[2741]: I0514 18:03:49.436307 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3325e581-3b17-4425-a7bd-1dd37784c6f5-config-volume\") pod \"coredns-668d6bf9bc-f6x42\" (UID: \"3325e581-3b17-4425-a7bd-1dd37784c6f5\") " pod="kube-system/coredns-668d6bf9bc-f6x42"
May 14 18:03:49.647661 kubelet[2741]: E0514 18:03:49.647611 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:49.648759 containerd[1598]: time="2025-05-14T18:03:49.648683203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f6x42,Uid:3325e581-3b17-4425-a7bd-1dd37784c6f5,Namespace:kube-system,Attempt:0,}"
May 14 18:03:49.655663 kubelet[2741]: E0514 18:03:49.655611 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:49.656372 containerd[1598]: time="2025-05-14T18:03:49.656317298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sc2p7,Uid:3a57086d-2d63-4590-aad1-4953c44abb9c,Namespace:kube-system,Attempt:0,}"
May 14 18:03:49.893228 kubelet[2741]: E0514 18:03:49.892840 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:50.893766 kubelet[2741]: E0514 18:03:50.893723 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:51.477039 systemd-networkd[1504]: cilium_host: Link UP
May 14 18:03:51.477481 systemd-networkd[1504]: cilium_net: Link UP
May 14 18:03:51.477966 systemd-networkd[1504]: cilium_net: Gained carrier
May 14 18:03:51.478148 systemd-networkd[1504]: cilium_host: Gained carrier
May 14 18:03:51.593287 systemd-networkd[1504]: cilium_vxlan: Link UP
May 14 18:03:51.593299 systemd-networkd[1504]: cilium_vxlan: Gained carrier
May 14 18:03:51.603729 systemd-networkd[1504]: cilium_net: Gained IPv6LL
May 14 18:03:51.831557 kernel: NET: Registered PF_ALG protocol family
May 14 18:03:52.460787 systemd-networkd[1504]: cilium_host: Gained IPv6LL
May 14 18:03:52.652226 systemd-networkd[1504]: lxc_health: Link UP
May 14 18:03:52.665107 systemd-networkd[1504]: lxc_health: Gained carrier
May 14 18:03:52.823989 kubelet[2741]: E0514 18:03:52.823845 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:52.863454 systemd-networkd[1504]: lxcfc99394cc6d2: Link UP
May 14 18:03:52.872567 kernel: eth0: renamed from tmpc97b0
May 14 18:03:52.872849 systemd-networkd[1504]: lxcfc99394cc6d2: Gained carrier
May 14 18:03:52.896657 systemd-networkd[1504]: lxc29a64db7ea17: Link UP
May 14 18:03:52.901779 kubelet[2741]: E0514 18:03:52.900972 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:52.904618 kernel: eth0: renamed from tmp28682
May 14 18:03:52.905438 systemd-networkd[1504]: lxc29a64db7ea17: Gained carrier
May 14 18:03:53.086414 kubelet[2741]: I0514 18:03:53.085891 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zcpjg" podStartSLOduration=10.484595083 podStartE2EDuration="21.085747977s" podCreationTimestamp="2025-05-14 18:03:32 +0000 UTC" firstStartedPulling="2025-05-14 18:03:32.912658436 +0000 UTC m=+7.314829575" lastFinishedPulling="2025-05-14 18:03:43.51381133 +0000 UTC m=+17.915982469" observedRunningTime="2025-05-14 18:03:49.948432886 +0000 UTC m=+24.350604025" watchObservedRunningTime="2025-05-14 18:03:53.085747977 +0000 UTC m=+27.487919136"
May 14 18:03:53.355780 systemd-networkd[1504]: cilium_vxlan: Gained IPv6LL
May 14 18:03:53.903228 kubelet[2741]: E0514 18:03:53.903172 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:03:54.318561 systemd-networkd[1504]: lxc29a64db7ea17: Gained IPv6LL
May 14 18:03:54.571699 systemd-networkd[1504]: lxcfc99394cc6d2: Gained IPv6LL
May 14 18:03:54.635725 systemd-networkd[1504]: lxc_health: Gained IPv6LL
May 14 18:03:55.296909 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:57046.service - OpenSSH per-connection server daemon (10.0.0.1:57046).
May 14 18:03:55.372962 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 57046 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:03:55.375806 sshd-session[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:03:55.384707 systemd-logind[1577]: New session 8 of user core.
May 14 18:03:55.395678 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 18:03:55.532294 sshd[3905]: Connection closed by 10.0.0.1 port 57046
May 14 18:03:55.532596 sshd-session[3901]: pam_unix(sshd:session): session closed for user core
May 14 18:03:55.537187 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:57046.service: Deactivated successfully.
May 14 18:03:55.539619 systemd[1]: session-8.scope: Deactivated successfully.
May 14 18:03:55.540380 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit.
May 14 18:03:55.541967 systemd-logind[1577]: Removed session 8.
May 14 18:03:56.612694 containerd[1598]: time="2025-05-14T18:03:56.612635006Z" level=info msg="connecting to shim c97b09aacfff4eb468b08d8fd7a7a2767767d45b180c9d1de5da83bfcbd1cfcc" address="unix:///run/containerd/s/4d6c41eb639965b1c99b83009cddf1427ced009d1cad57a02d5a25c4590be7f7" namespace=k8s.io protocol=ttrpc version=3
May 14 18:03:56.613485 containerd[1598]: time="2025-05-14T18:03:56.613457724Z" level=info msg="connecting to shim 28682d5211800f2f5e2c2db7a8e691d7dc39defc1954eb18fbf9eacad5e7b5a6" address="unix:///run/containerd/s/bebf75efb4440d0b3ff7719b8dd95de5a24e4abad6d909447579ccae79549a82" namespace=k8s.io protocol=ttrpc version=3
May 14 18:03:56.647771 systemd[1]: Started cri-containerd-28682d5211800f2f5e2c2db7a8e691d7dc39defc1954eb18fbf9eacad5e7b5a6.scope - libcontainer container 28682d5211800f2f5e2c2db7a8e691d7dc39defc1954eb18fbf9eacad5e7b5a6.
May 14 18:03:56.651302 systemd[1]: Started cri-containerd-c97b09aacfff4eb468b08d8fd7a7a2767767d45b180c9d1de5da83bfcbd1cfcc.scope - libcontainer container c97b09aacfff4eb468b08d8fd7a7a2767767d45b180c9d1de5da83bfcbd1cfcc. May 14 18:03:56.668060 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:03:56.670615 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:03:56.709483 containerd[1598]: time="2025-05-14T18:03:56.709429945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sc2p7,Uid:3a57086d-2d63-4590-aad1-4953c44abb9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"28682d5211800f2f5e2c2db7a8e691d7dc39defc1954eb18fbf9eacad5e7b5a6\"" May 14 18:03:56.710208 kubelet[2741]: E0514 18:03:56.710181 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:56.712392 containerd[1598]: time="2025-05-14T18:03:56.712355719Z" level=info msg="CreateContainer within sandbox \"28682d5211800f2f5e2c2db7a8e691d7dc39defc1954eb18fbf9eacad5e7b5a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:03:56.717341 containerd[1598]: time="2025-05-14T18:03:56.717308998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f6x42,Uid:3325e581-3b17-4425-a7bd-1dd37784c6f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c97b09aacfff4eb468b08d8fd7a7a2767767d45b180c9d1de5da83bfcbd1cfcc\"" May 14 18:03:56.717957 kubelet[2741]: E0514 18:03:56.717924 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:56.719323 containerd[1598]: time="2025-05-14T18:03:56.719290877Z" level=info 
msg="CreateContainer within sandbox \"c97b09aacfff4eb468b08d8fd7a7a2767767d45b180c9d1de5da83bfcbd1cfcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:03:56.737171 containerd[1598]: time="2025-05-14T18:03:56.737121393Z" level=info msg="Container 39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:56.760140 containerd[1598]: time="2025-05-14T18:03:56.760050792Z" level=info msg="Container f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee: CDI devices from CRI Config.CDIDevices: []" May 14 18:03:56.778067 containerd[1598]: time="2025-05-14T18:03:56.777983370Z" level=info msg="CreateContainer within sandbox \"28682d5211800f2f5e2c2db7a8e691d7dc39defc1954eb18fbf9eacad5e7b5a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76\"" May 14 18:03:56.778778 containerd[1598]: time="2025-05-14T18:03:56.778737369Z" level=info msg="StartContainer for \"39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76\"" May 14 18:03:56.779789 containerd[1598]: time="2025-05-14T18:03:56.779742951Z" level=info msg="connecting to shim 39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76" address="unix:///run/containerd/s/bebf75efb4440d0b3ff7719b8dd95de5a24e4abad6d909447579ccae79549a82" protocol=ttrpc version=3 May 14 18:03:56.790387 containerd[1598]: time="2025-05-14T18:03:56.790312135Z" level=info msg="CreateContainer within sandbox \"c97b09aacfff4eb468b08d8fd7a7a2767767d45b180c9d1de5da83bfcbd1cfcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee\"" May 14 18:03:56.792603 containerd[1598]: time="2025-05-14T18:03:56.792578770Z" level=info msg="StartContainer for \"f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee\"" May 14 18:03:56.794858 containerd[1598]: 
time="2025-05-14T18:03:56.793957623Z" level=info msg="connecting to shim f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee" address="unix:///run/containerd/s/4d6c41eb639965b1c99b83009cddf1427ced009d1cad57a02d5a25c4590be7f7" protocol=ttrpc version=3 May 14 18:03:56.806096 systemd[1]: Started cri-containerd-39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76.scope - libcontainer container 39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76. May 14 18:03:56.817711 systemd[1]: Started cri-containerd-f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee.scope - libcontainer container f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee. May 14 18:03:56.850438 containerd[1598]: time="2025-05-14T18:03:56.850393171Z" level=info msg="StartContainer for \"39c8c45345dea0f9e458837f53456d09215bb0f75ab83c9b222f6ece10582e76\" returns successfully" May 14 18:03:56.861041 containerd[1598]: time="2025-05-14T18:03:56.860993704Z" level=info msg="StartContainer for \"f93752a905b799abef8d3d9c03dd731f5086b5946f4fd9ff857ba36a738d00ee\" returns successfully" May 14 18:03:56.911602 kubelet[2741]: E0514 18:03:56.911438 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:56.913726 kubelet[2741]: E0514 18:03:56.913623 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:56.925120 kubelet[2741]: I0514 18:03:56.925007 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f6x42" podStartSLOduration=24.924872755 podStartE2EDuration="24.924872755s" podCreationTimestamp="2025-05-14 18:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-14 18:03:56.924745406 +0000 UTC m=+31.326916545" watchObservedRunningTime="2025-05-14 18:03:56.924872755 +0000 UTC m=+31.327043894" May 14 18:03:56.935783 kubelet[2741]: I0514 18:03:56.935719 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sc2p7" podStartSLOduration=24.935510729 podStartE2EDuration="24.935510729s" podCreationTimestamp="2025-05-14 18:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:03:56.934891003 +0000 UTC m=+31.337062152" watchObservedRunningTime="2025-05-14 18:03:56.935510729 +0000 UTC m=+31.337681868" May 14 18:03:57.915626 kubelet[2741]: E0514 18:03:57.915257 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:57.915626 kubelet[2741]: E0514 18:03:57.915329 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:58.936592 kubelet[2741]: E0514 18:03:58.936554 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:03:58.937020 kubelet[2741]: E0514 18:03:58.936674 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:00.547695 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:47000.service - OpenSSH per-connection server daemon (10.0.0.1:47000). 
May 14 18:04:00.608652 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 47000 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:00.610561 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:00.615243 systemd-logind[1577]: New session 9 of user core. May 14 18:04:00.625724 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:04:00.755216 sshd[4096]: Connection closed by 10.0.0.1 port 47000 May 14 18:04:00.755511 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 14 18:04:00.760129 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:47000.service: Deactivated successfully. May 14 18:04:00.762146 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:04:00.763159 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. May 14 18:04:00.764463 systemd-logind[1577]: Removed session 9. May 14 18:04:05.774028 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:47008.service - OpenSSH per-connection server daemon (10.0.0.1:47008). May 14 18:04:05.830960 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 47008 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:05.832608 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:05.837809 systemd-logind[1577]: New session 10 of user core. May 14 18:04:05.843760 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:04:05.969211 sshd[4116]: Connection closed by 10.0.0.1 port 47008 May 14 18:04:05.969616 sshd-session[4114]: pam_unix(sshd:session): session closed for user core May 14 18:04:05.974744 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:47008.service: Deactivated successfully. May 14 18:04:05.977063 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:04:05.978149 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. 
May 14 18:04:05.979505 systemd-logind[1577]: Removed session 10. May 14 18:04:10.985957 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:40082.service - OpenSSH per-connection server daemon (10.0.0.1:40082). May 14 18:04:11.046095 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 40082 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:11.047895 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:11.052232 systemd-logind[1577]: New session 11 of user core. May 14 18:04:11.066701 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:04:11.220662 sshd[4133]: Connection closed by 10.0.0.1 port 40082 May 14 18:04:11.220996 sshd-session[4131]: pam_unix(sshd:session): session closed for user core May 14 18:04:11.229405 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:40082.service: Deactivated successfully. May 14 18:04:11.231216 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:04:11.232089 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. May 14 18:04:11.235093 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:40090.service - OpenSSH per-connection server daemon (10.0.0.1:40090). May 14 18:04:11.235827 systemd-logind[1577]: Removed session 11. May 14 18:04:11.298766 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:11.300315 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:11.304694 systemd-logind[1577]: New session 12 of user core. May 14 18:04:11.315711 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 14 18:04:11.510697 sshd[4150]: Connection closed by 10.0.0.1 port 40090 May 14 18:04:11.511171 sshd-session[4148]: pam_unix(sshd:session): session closed for user core May 14 18:04:11.520919 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:40090.service: Deactivated successfully. May 14 18:04:11.523040 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:04:11.523931 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. May 14 18:04:11.527239 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:40102.service - OpenSSH per-connection server daemon (10.0.0.1:40102). May 14 18:04:11.528074 systemd-logind[1577]: Removed session 12. May 14 18:04:11.583966 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 40102 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:11.585839 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:11.591064 systemd-logind[1577]: New session 13 of user core. May 14 18:04:11.599992 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:04:11.714975 sshd[4163]: Connection closed by 10.0.0.1 port 40102 May 14 18:04:11.715393 sshd-session[4161]: pam_unix(sshd:session): session closed for user core May 14 18:04:11.719480 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. May 14 18:04:11.719926 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:40102.service: Deactivated successfully. May 14 18:04:11.722131 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:04:11.723825 systemd-logind[1577]: Removed session 13. May 14 18:04:16.734819 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:34540.service - OpenSSH per-connection server daemon (10.0.0.1:34540). 
May 14 18:04:16.781374 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 34540 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:16.783115 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:16.788095 systemd-logind[1577]: New session 14 of user core. May 14 18:04:16.803761 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:04:16.917664 sshd[4178]: Connection closed by 10.0.0.1 port 34540 May 14 18:04:16.917984 sshd-session[4176]: pam_unix(sshd:session): session closed for user core May 14 18:04:16.921197 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:34540.service: Deactivated successfully. May 14 18:04:16.923146 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:04:16.925175 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. May 14 18:04:16.926168 systemd-logind[1577]: Removed session 14. May 14 18:04:21.934836 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:34546.service - OpenSSH per-connection server daemon (10.0.0.1:34546). May 14 18:04:21.993598 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 34546 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:21.995231 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:22.000207 systemd-logind[1577]: New session 15 of user core. May 14 18:04:22.009744 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:04:22.124356 sshd[4194]: Connection closed by 10.0.0.1 port 34546 May 14 18:04:22.124675 sshd-session[4192]: pam_unix(sshd:session): session closed for user core May 14 18:04:22.133208 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:34546.service: Deactivated successfully. May 14 18:04:22.135146 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:04:22.136099 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. 
May 14 18:04:22.139457 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:34550.service - OpenSSH per-connection server daemon (10.0.0.1:34550). May 14 18:04:22.140388 systemd-logind[1577]: Removed session 15. May 14 18:04:22.196123 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 34550 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:22.197718 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:22.202883 systemd-logind[1577]: New session 16 of user core. May 14 18:04:22.212681 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:04:22.607591 sshd[4209]: Connection closed by 10.0.0.1 port 34550 May 14 18:04:22.608099 sshd-session[4207]: pam_unix(sshd:session): session closed for user core May 14 18:04:22.621234 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:34550.service: Deactivated successfully. May 14 18:04:22.623401 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:04:22.624468 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. May 14 18:04:22.627601 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:34556.service - OpenSSH per-connection server daemon (10.0.0.1:34556). May 14 18:04:22.628388 systemd-logind[1577]: Removed session 16. May 14 18:04:22.698654 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 34556 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:22.700180 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:22.705103 systemd-logind[1577]: New session 17 of user core. May 14 18:04:22.715691 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 14 18:04:23.617192 sshd[4223]: Connection closed by 10.0.0.1 port 34556 May 14 18:04:23.618284 sshd-session[4221]: pam_unix(sshd:session): session closed for user core May 14 18:04:23.629409 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:34556.service: Deactivated successfully. May 14 18:04:23.632602 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:04:23.635150 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. May 14 18:04:23.639962 systemd-logind[1577]: Removed session 17. May 14 18:04:23.641997 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:34566.service - OpenSSH per-connection server daemon (10.0.0.1:34566). May 14 18:04:23.698425 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 34566 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:23.700027 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:23.704679 systemd-logind[1577]: New session 18 of user core. May 14 18:04:23.713671 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 18:04:23.959932 sshd[4247]: Connection closed by 10.0.0.1 port 34566 May 14 18:04:23.960486 sshd-session[4245]: pam_unix(sshd:session): session closed for user core May 14 18:04:23.972846 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:34566.service: Deactivated successfully. May 14 18:04:23.974926 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:04:23.975909 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. May 14 18:04:23.979386 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:34574.service - OpenSSH per-connection server daemon (10.0.0.1:34574). May 14 18:04:23.980322 systemd-logind[1577]: Removed session 18. 
May 14 18:04:24.032810 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 34574 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:24.034523 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:24.040597 systemd-logind[1577]: New session 19 of user core. May 14 18:04:24.048650 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:04:24.161313 sshd[4260]: Connection closed by 10.0.0.1 port 34574 May 14 18:04:24.161635 sshd-session[4258]: pam_unix(sshd:session): session closed for user core May 14 18:04:24.166034 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:34574.service: Deactivated successfully. May 14 18:04:24.168295 systemd[1]: session-19.scope: Deactivated successfully. May 14 18:04:24.169214 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. May 14 18:04:24.170752 systemd-logind[1577]: Removed session 19. May 14 18:04:29.174237 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:36644.service - OpenSSH per-connection server daemon (10.0.0.1:36644). May 14 18:04:29.234362 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 36644 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:29.236313 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:29.241271 systemd-logind[1577]: New session 20 of user core. May 14 18:04:29.251800 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:04:29.368445 sshd[4280]: Connection closed by 10.0.0.1 port 36644 May 14 18:04:29.369044 sshd-session[4278]: pam_unix(sshd:session): session closed for user core May 14 18:04:29.374714 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:36644.service: Deactivated successfully. May 14 18:04:29.377015 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:04:29.378113 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. 
May 14 18:04:29.379685 systemd-logind[1577]: Removed session 20. May 14 18:04:34.382383 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:36652.service - OpenSSH per-connection server daemon (10.0.0.1:36652). May 14 18:04:34.440681 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 36652 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:34.442314 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:34.447322 systemd-logind[1577]: New session 21 of user core. May 14 18:04:34.454712 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 18:04:34.565869 sshd[4298]: Connection closed by 10.0.0.1 port 36652 May 14 18:04:34.566215 sshd-session[4296]: pam_unix(sshd:session): session closed for user core May 14 18:04:34.570239 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:36652.service: Deactivated successfully. May 14 18:04:34.572471 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:04:34.573353 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. May 14 18:04:34.574724 systemd-logind[1577]: Removed session 21. May 14 18:04:39.583892 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:45306.service - OpenSSH per-connection server daemon (10.0.0.1:45306). May 14 18:04:39.651353 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 45306 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:39.653137 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:39.658076 systemd-logind[1577]: New session 22 of user core. May 14 18:04:39.664648 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 14 18:04:39.774988 sshd[4314]: Connection closed by 10.0.0.1 port 45306 May 14 18:04:39.775273 sshd-session[4312]: pam_unix(sshd:session): session closed for user core May 14 18:04:39.779183 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:45306.service: Deactivated successfully. May 14 18:04:39.781221 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:04:39.781998 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. May 14 18:04:39.783117 systemd-logind[1577]: Removed session 22. May 14 18:04:40.714741 kubelet[2741]: E0514 18:04:40.714678 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:44.792151 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:45308.service - OpenSSH per-connection server daemon (10.0.0.1:45308). May 14 18:04:44.849196 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 45308 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:44.850756 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:44.854888 systemd-logind[1577]: New session 23 of user core. May 14 18:04:44.865666 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 18:04:44.975062 sshd[4330]: Connection closed by 10.0.0.1 port 45308 May 14 18:04:44.975367 sshd-session[4328]: pam_unix(sshd:session): session closed for user core May 14 18:04:44.989394 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:45308.service: Deactivated successfully. May 14 18:04:44.991215 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:04:44.992035 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. May 14 18:04:44.994919 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:45322.service - OpenSSH per-connection server daemon (10.0.0.1:45322). May 14 18:04:44.995787 systemd-logind[1577]: Removed session 23. 
May 14 18:04:45.053940 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 45322 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:04:45.055679 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:04:45.060667 systemd-logind[1577]: New session 24 of user core. May 14 18:04:45.068726 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:04:45.715552 kubelet[2741]: E0514 18:04:45.715479 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 18:04:46.414722 containerd[1598]: time="2025-05-14T18:04:46.414497164Z" level=info msg="StopContainer for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" with timeout 30 (s)" May 14 18:04:46.422544 containerd[1598]: time="2025-05-14T18:04:46.422480855Z" level=info msg="Stop container \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" with signal terminated" May 14 18:04:46.435852 systemd[1]: cri-containerd-64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2.scope: Deactivated successfully. 
May 14 18:04:46.437321 containerd[1598]: time="2025-05-14T18:04:46.436838777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" id:\"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" pid:3286 exited_at:{seconds:1747245886 nanos:436477989}" May 14 18:04:46.437321 containerd[1598]: time="2025-05-14T18:04:46.437025814Z" level=info msg="received exit event container_id:\"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" id:\"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" pid:3286 exited_at:{seconds:1747245886 nanos:436477989}" May 14 18:04:46.437471 systemd[1]: cri-containerd-64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2.scope: Consumed 356ms CPU time, 26.8M memory peak, 1.9M written to disk. May 14 18:04:46.445676 containerd[1598]: time="2025-05-14T18:04:46.445625331Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:04:46.447561 containerd[1598]: time="2025-05-14T18:04:46.447512711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" id:\"53ea94b765f8ae0876da7b60cdee7ee47a0d3ebafcc766e6db67f0ba73cebc74\" pid:4368 exited_at:{seconds:1747245886 nanos:447272824}" May 14 18:04:46.449742 containerd[1598]: time="2025-05-14T18:04:46.449701397Z" level=info msg="StopContainer for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" with timeout 2 (s)" May 14 18:04:46.450011 containerd[1598]: time="2025-05-14T18:04:46.449988024Z" level=info msg="Stop container \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" with signal terminated" May 14 18:04:46.458285 systemd-networkd[1504]: lxc_health: 
Link DOWN May 14 18:04:46.458295 systemd-networkd[1504]: lxc_health: Lost carrier May 14 18:04:46.467105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2-rootfs.mount: Deactivated successfully. May 14 18:04:46.477355 systemd[1]: cri-containerd-e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870.scope: Deactivated successfully. May 14 18:04:46.477923 systemd[1]: cri-containerd-e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870.scope: Consumed 7.266s CPU time, 124.1M memory peak, 212K read from disk, 13.4M written to disk. May 14 18:04:46.478505 containerd[1598]: time="2025-05-14T18:04:46.478442730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" id:\"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" pid:3394 exited_at:{seconds:1747245886 nanos:477830092}" May 14 18:04:46.478505 containerd[1598]: time="2025-05-14T18:04:46.478484850Z" level=info msg="received exit event container_id:\"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" id:\"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" pid:3394 exited_at:{seconds:1747245886 nanos:477830092}" May 14 18:04:46.482471 containerd[1598]: time="2025-05-14T18:04:46.482415620Z" level=info msg="StopContainer for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" returns successfully" May 14 18:04:46.483253 containerd[1598]: time="2025-05-14T18:04:46.483208493Z" level=info msg="StopPodSandbox for \"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\"" May 14 18:04:46.483306 containerd[1598]: time="2025-05-14T18:04:46.483290349Z" level=info msg="Container to stop \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:46.492216 systemd[1]: 
cri-containerd-e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d.scope: Deactivated successfully. May 14 18:04:46.493477 containerd[1598]: time="2025-05-14T18:04:46.493443179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" id:\"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" pid:2945 exit_status:137 exited_at:{seconds:1747245886 nanos:493173353}" May 14 18:04:46.502775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870-rootfs.mount: Deactivated successfully. May 14 18:04:46.514755 containerd[1598]: time="2025-05-14T18:04:46.514700995Z" level=info msg="StopContainer for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" returns successfully" May 14 18:04:46.515289 containerd[1598]: time="2025-05-14T18:04:46.515248590Z" level=info msg="StopPodSandbox for \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\"" May 14 18:04:46.515353 containerd[1598]: time="2025-05-14T18:04:46.515329914Z" level=info msg="Container to stop \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:46.515401 containerd[1598]: time="2025-05-14T18:04:46.515350574Z" level=info msg="Container to stop \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:46.515401 containerd[1598]: time="2025-05-14T18:04:46.515362216Z" level=info msg="Container to stop \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:46.515401 containerd[1598]: time="2025-05-14T18:04:46.515372857Z" level=info msg="Container to stop \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" 
must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:46.515401 containerd[1598]: time="2025-05-14T18:04:46.515383607Z" level=info msg="Container to stop \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 18:04:46.523439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d-rootfs.mount: Deactivated successfully. May 14 18:04:46.525082 systemd[1]: cri-containerd-17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c.scope: Deactivated successfully. May 14 18:04:46.529567 containerd[1598]: time="2025-05-14T18:04:46.529494568Z" level=info msg="shim disconnected" id=e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d namespace=k8s.io May 14 18:04:46.529960 containerd[1598]: time="2025-05-14T18:04:46.529936011Z" level=warning msg="cleaning up after shim disconnected" id=e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d namespace=k8s.io May 14 18:04:46.548411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c-rootfs.mount: Deactivated successfully. 
May 14 18:04:46.556245 containerd[1598]: time="2025-05-14T18:04:46.529955107Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:04:46.556353 containerd[1598]: time="2025-05-14T18:04:46.550634129Z" level=info msg="shim disconnected" id=17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c namespace=k8s.io May 14 18:04:46.556353 containerd[1598]: time="2025-05-14T18:04:46.556333843Z" level=warning msg="cleaning up after shim disconnected" id=17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c namespace=k8s.io May 14 18:04:46.556421 containerd[1598]: time="2025-05-14T18:04:46.556341237Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 18:04:46.588344 containerd[1598]: time="2025-05-14T18:04:46.588241988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" id:\"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" pid:2888 exit_status:137 exited_at:{seconds:1747245886 nanos:526288131}" May 14 18:04:46.590356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d-shm.mount: Deactivated successfully. May 14 18:04:46.590474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c-shm.mount: Deactivated successfully. 
May 14 18:04:46.602976 containerd[1598]: time="2025-05-14T18:04:46.602942524Z" level=info msg="received exit event sandbox_id:\"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" exit_status:137 exited_at:{seconds:1747245886 nanos:493173353}" May 14 18:04:46.603096 containerd[1598]: time="2025-05-14T18:04:46.603077671Z" level=info msg="received exit event sandbox_id:\"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" exit_status:137 exited_at:{seconds:1747245886 nanos:526288131}" May 14 18:04:46.608047 containerd[1598]: time="2025-05-14T18:04:46.608014310Z" level=info msg="TearDown network for sandbox \"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" successfully" May 14 18:04:46.608047 containerd[1598]: time="2025-05-14T18:04:46.608040710Z" level=info msg="StopPodSandbox for \"e9ac90334f9a57a78df2588577cfa074c3822ed89a8cb8893c7a5ee3001fad5d\" returns successfully" May 14 18:04:46.608124 containerd[1598]: time="2025-05-14T18:04:46.608037133Z" level=info msg="TearDown network for sandbox \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" successfully" May 14 18:04:46.608124 containerd[1598]: time="2025-05-14T18:04:46.608100684Z" level=info msg="StopPodSandbox for \"17f30cefc40890173c73f4a7928352048b3e4572deb5c3e36d5323fd781ae95c\" returns successfully" May 14 18:04:46.708144 kubelet[2741]: I0514 18:04:46.708076 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb6ba410-9588-4f10-acae-5d598473137a-cilium-config-path\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708144 kubelet[2741]: I0514 18:04:46.708127 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-hubble-tls\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: 
\"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708144 kubelet[2741]: I0514 18:04:46.708153 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a13ab8f9-5a6b-455e-b246-20d18ce2987a-cilium-config-path\") pod \"a13ab8f9-5a6b-455e-b246-20d18ce2987a\" (UID: \"a13ab8f9-5a6b-455e-b246-20d18ce2987a\") " May 14 18:04:46.708369 kubelet[2741]: I0514 18:04:46.708183 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-etc-cni-netd\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708369 kubelet[2741]: I0514 18:04:46.708206 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb6ba410-9588-4f10-acae-5d598473137a-clustermesh-secrets\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708369 kubelet[2741]: I0514 18:04:46.708220 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cni-path\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708369 kubelet[2741]: I0514 18:04:46.708235 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-kernel\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708369 kubelet[2741]: I0514 18:04:46.708259 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-bpf-maps\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708369 kubelet[2741]: I0514 18:04:46.708274 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-cgroup\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708514 kubelet[2741]: I0514 18:04:46.708286 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-xtables-lock\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708514 kubelet[2741]: I0514 18:04:46.708300 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx57d\" (UniqueName: \"kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-kube-api-access-fx57d\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708514 kubelet[2741]: I0514 18:04:46.708315 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-run\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708514 kubelet[2741]: I0514 18:04:46.708331 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n49d7\" (UniqueName: \"kubernetes.io/projected/a13ab8f9-5a6b-455e-b246-20d18ce2987a-kube-api-access-n49d7\") pod \"a13ab8f9-5a6b-455e-b246-20d18ce2987a\" (UID: \"a13ab8f9-5a6b-455e-b246-20d18ce2987a\") " May 14 18:04:46.708514 kubelet[2741]: I0514 18:04:46.708347 
2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-hostproc\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708514 kubelet[2741]: I0514 18:04:46.708363 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-lib-modules\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708717 kubelet[2741]: I0514 18:04:46.708376 2741 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-net\") pod \"fb6ba410-9588-4f10-acae-5d598473137a\" (UID: \"fb6ba410-9588-4f10-acae-5d598473137a\") " May 14 18:04:46.708717 kubelet[2741]: I0514 18:04:46.708453 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.709185 kubelet[2741]: I0514 18:04:46.708809 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.709185 kubelet[2741]: I0514 18:04:46.708880 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.711784 kubelet[2741]: I0514 18:04:46.711737 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb6ba410-9588-4f10-acae-5d598473137a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 18:04:46.711861 kubelet[2741]: I0514 18:04:46.711802 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.711861 kubelet[2741]: I0514 18:04:46.711830 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cni-path" (OuterVolumeSpecName: "cni-path") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.711861 kubelet[2741]: I0514 18:04:46.711853 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.711972 kubelet[2741]: I0514 18:04:46.711870 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.711972 kubelet[2741]: I0514 18:04:46.711938 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13ab8f9-5a6b-455e-b246-20d18ce2987a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a13ab8f9-5a6b-455e-b246-20d18ce2987a" (UID: "a13ab8f9-5a6b-455e-b246-20d18ce2987a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 18:04:46.712682 kubelet[2741]: I0514 18:04:46.712644 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6ba410-9588-4f10-acae-5d598473137a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 18:04:46.712735 kubelet[2741]: I0514 18:04:46.712694 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.712735 kubelet[2741]: I0514 18:04:46.712729 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-hostproc" (OuterVolumeSpecName: "hostproc") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.712788 kubelet[2741]: I0514 18:04:46.712753 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 18:04:46.713129 kubelet[2741]: I0514 18:04:46.713095 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 18:04:46.714429 kubelet[2741]: I0514 18:04:46.714399 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-kube-api-access-fx57d" (OuterVolumeSpecName: "kube-api-access-fx57d") pod "fb6ba410-9588-4f10-acae-5d598473137a" (UID: "fb6ba410-9588-4f10-acae-5d598473137a"). InnerVolumeSpecName "kube-api-access-fx57d". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 18:04:46.715292 kubelet[2741]: I0514 18:04:46.715254 2741 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13ab8f9-5a6b-455e-b246-20d18ce2987a-kube-api-access-n49d7" (OuterVolumeSpecName: "kube-api-access-n49d7") pod "a13ab8f9-5a6b-455e-b246-20d18ce2987a" (UID: "a13ab8f9-5a6b-455e-b246-20d18ce2987a"). InnerVolumeSpecName "kube-api-access-n49d7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 18:04:46.808643 kubelet[2741]: I0514 18:04:46.808587 2741 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.808643 kubelet[2741]: I0514 18:04:46.808622 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.808643 kubelet[2741]: I0514 18:04:46.808632 2741 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.808643 kubelet[2741]: I0514 18:04:46.808640 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fx57d\" (UniqueName: 
\"kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-kube-api-access-fx57d\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.808643 kubelet[2741]: I0514 18:04:46.808652 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.808643 kubelet[2741]: I0514 18:04:46.808660 2741 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808668 2741 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n49d7\" (UniqueName: \"kubernetes.io/projected/a13ab8f9-5a6b-455e-b246-20d18ce2987a-kube-api-access-n49d7\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808677 2741 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808687 2741 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808713 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb6ba410-9588-4f10-acae-5d598473137a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808724 2741 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb6ba410-9588-4f10-acae-5d598473137a-hubble-tls\") on node 
\"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808735 2741 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a13ab8f9-5a6b-455e-b246-20d18ce2987a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808746 2741 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809172 kubelet[2741]: I0514 18:04:46.808756 2741 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb6ba410-9588-4f10-acae-5d598473137a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809366 kubelet[2741]: I0514 18:04:46.808764 2741 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 18:04:46.809366 kubelet[2741]: I0514 18:04:46.808771 2741 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb6ba410-9588-4f10-acae-5d598473137a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 18:04:47.041506 kubelet[2741]: I0514 18:04:47.040165 2741 scope.go:117] "RemoveContainer" containerID="64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2" May 14 18:04:47.043924 containerd[1598]: time="2025-05-14T18:04:47.043882426Z" level=info msg="RemoveContainer for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\"" May 14 18:04:47.048575 containerd[1598]: time="2025-05-14T18:04:47.048460207Z" level=info msg="RemoveContainer for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" returns successfully" May 14 18:04:47.048499 systemd[1]: 
Removed slice kubepods-besteffort-poda13ab8f9_5a6b_455e_b246_20d18ce2987a.slice - libcontainer container kubepods-besteffort-poda13ab8f9_5a6b_455e_b246_20d18ce2987a.slice. May 14 18:04:47.050598 kubelet[2741]: I0514 18:04:47.048686 2741 scope.go:117] "RemoveContainer" containerID="64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2" May 14 18:04:47.048642 systemd[1]: kubepods-besteffort-poda13ab8f9_5a6b_455e_b246_20d18ce2987a.slice: Consumed 391ms CPU time, 27.1M memory peak, 1.9M written to disk. May 14 18:04:47.052416 systemd[1]: Removed slice kubepods-burstable-podfb6ba410_9588_4f10_acae_5d598473137a.slice - libcontainer container kubepods-burstable-podfb6ba410_9588_4f10_acae_5d598473137a.slice. May 14 18:04:47.052558 systemd[1]: kubepods-burstable-podfb6ba410_9588_4f10_acae_5d598473137a.slice: Consumed 7.393s CPU time, 124.4M memory peak, 508K read from disk, 19.1M written to disk. May 14 18:04:47.055904 containerd[1598]: time="2025-05-14T18:04:47.048891649Z" level=error msg="ContainerStatus for \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\": not found" May 14 18:04:47.059595 kubelet[2741]: E0514 18:04:47.059520 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\": not found" containerID="64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2" May 14 18:04:47.059717 kubelet[2741]: I0514 18:04:47.059621 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2"} err="failed to get container status \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"64ca63cd842ede95f78ff686f99b0de56a2ee65b43de1a3598f3203f4d9865a2\": not found" May 14 18:04:47.059717 kubelet[2741]: I0514 18:04:47.059706 2741 scope.go:117] "RemoveContainer" containerID="e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870" May 14 18:04:47.062087 containerd[1598]: time="2025-05-14T18:04:47.062043720Z" level=info msg="RemoveContainer for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\"" May 14 18:04:47.067038 containerd[1598]: time="2025-05-14T18:04:47.066988731Z" level=info msg="RemoveContainer for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" returns successfully" May 14 18:04:47.067250 kubelet[2741]: I0514 18:04:47.067198 2741 scope.go:117] "RemoveContainer" containerID="5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe" May 14 18:04:47.068563 containerd[1598]: time="2025-05-14T18:04:47.068511604Z" level=info msg="RemoveContainer for \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\"" May 14 18:04:47.073372 containerd[1598]: time="2025-05-14T18:04:47.073332047Z" level=info msg="RemoveContainer for \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" returns successfully" May 14 18:04:47.073557 kubelet[2741]: I0514 18:04:47.073509 2741 scope.go:117] "RemoveContainer" containerID="a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e" May 14 18:04:47.076468 containerd[1598]: time="2025-05-14T18:04:47.075678893Z" level=info msg="RemoveContainer for \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\"" May 14 18:04:47.079836 containerd[1598]: time="2025-05-14T18:04:47.079806654Z" level=info msg="RemoveContainer for \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" returns successfully" May 14 18:04:47.079976 kubelet[2741]: I0514 18:04:47.079944 2741 scope.go:117] "RemoveContainer" 
containerID="b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad" May 14 18:04:47.081126 containerd[1598]: time="2025-05-14T18:04:47.081087457Z" level=info msg="RemoveContainer for \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\"" May 14 18:04:47.084772 containerd[1598]: time="2025-05-14T18:04:47.084747357Z" level=info msg="RemoveContainer for \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" returns successfully" May 14 18:04:47.084886 kubelet[2741]: I0514 18:04:47.084871 2741 scope.go:117] "RemoveContainer" containerID="fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f" May 14 18:04:47.085908 containerd[1598]: time="2025-05-14T18:04:47.085878785Z" level=info msg="RemoveContainer for \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\"" May 14 18:04:47.093958 containerd[1598]: time="2025-05-14T18:04:47.093928926Z" level=info msg="RemoveContainer for \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" returns successfully" May 14 18:04:47.094076 kubelet[2741]: I0514 18:04:47.094058 2741 scope.go:117] "RemoveContainer" containerID="e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870" May 14 18:04:47.094232 containerd[1598]: time="2025-05-14T18:04:47.094203750Z" level=error msg="ContainerStatus for \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\": not found" May 14 18:04:47.094332 kubelet[2741]: E0514 18:04:47.094310 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\": not found" containerID="e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870" May 14 18:04:47.094373 kubelet[2741]: I0514 18:04:47.094339 
2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870"} err="failed to get container status \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8c52d5c5c241295fb986c20263679dc72474451949f80815bf2cee65154d870\": not found" May 14 18:04:47.094373 kubelet[2741]: I0514 18:04:47.094365 2741 scope.go:117] "RemoveContainer" containerID="5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe" May 14 18:04:47.094555 containerd[1598]: time="2025-05-14T18:04:47.094506207Z" level=error msg="ContainerStatus for \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\": not found" May 14 18:04:47.094655 kubelet[2741]: E0514 18:04:47.094628 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\": not found" containerID="5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe" May 14 18:04:47.094702 kubelet[2741]: I0514 18:04:47.094656 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe"} err="failed to get container status \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"5631a8e82086751043f56b026aae11c3151e324af5649f84c6374fa8adef16fe\": not found" May 14 18:04:47.094702 kubelet[2741]: I0514 18:04:47.094674 2741 scope.go:117] "RemoveContainer" 
containerID="a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e" May 14 18:04:47.094859 containerd[1598]: time="2025-05-14T18:04:47.094832549Z" level=error msg="ContainerStatus for \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\": not found" May 14 18:04:47.094943 kubelet[2741]: E0514 18:04:47.094925 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\": not found" containerID="a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e" May 14 18:04:47.094988 kubelet[2741]: I0514 18:04:47.094950 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e"} err="failed to get container status \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1e8c4cd3df9117b34ab806965736c53999651dd77313bd7dcd740176838db2e\": not found" May 14 18:04:47.094988 kubelet[2741]: I0514 18:04:47.094968 2741 scope.go:117] "RemoveContainer" containerID="b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad" May 14 18:04:47.095125 containerd[1598]: time="2025-05-14T18:04:47.095096623Z" level=error msg="ContainerStatus for \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\": not found" May 14 18:04:47.095236 kubelet[2741]: E0514 18:04:47.095217 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\": not found" containerID="b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad"
May 14 18:04:47.095284 kubelet[2741]: I0514 18:04:47.095235 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad"} err="failed to get container status \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9627c5eb1f3f8bbf5ce09e379d3134df1a37b66f3555bccd7f81a7ec2bdedad\": not found"
May 14 18:04:47.095284 kubelet[2741]: I0514 18:04:47.095246 2741 scope.go:117] "RemoveContainer" containerID="fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f"
May 14 18:04:47.095428 containerd[1598]: time="2025-05-14T18:04:47.095398148Z" level=error msg="ContainerStatus for \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\": not found"
May 14 18:04:47.095511 kubelet[2741]: E0514 18:04:47.095495 2741 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\": not found" containerID="fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f"
May 14 18:04:47.095572 kubelet[2741]: I0514 18:04:47.095515 2741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f"} err="failed to get container status \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb98b3a474cb74fd06916c7d5bedd587f0ca70ad2af0316daf32d2fdee9fbe1f\": not found"
May 14 18:04:47.466777 systemd[1]: var-lib-kubelet-pods-a13ab8f9\x2d5a6b\x2d455e\x2db246\x2d20d18ce2987a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn49d7.mount: Deactivated successfully.
May 14 18:04:47.466881 systemd[1]: var-lib-kubelet-pods-fb6ba410\x2d9588\x2d4f10\x2dacae\x2d5d598473137a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfx57d.mount: Deactivated successfully.
May 14 18:04:47.466950 systemd[1]: var-lib-kubelet-pods-fb6ba410\x2d9588\x2d4f10\x2dacae\x2d5d598473137a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 18:04:47.467020 systemd[1]: var-lib-kubelet-pods-fb6ba410\x2d9588\x2d4f10\x2dacae\x2d5d598473137a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 18:04:47.717283 kubelet[2741]: I0514 18:04:47.717143 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13ab8f9-5a6b-455e-b246-20d18ce2987a" path="/var/lib/kubelet/pods/a13ab8f9-5a6b-455e-b246-20d18ce2987a/volumes"
May 14 18:04:47.717811 kubelet[2741]: I0514 18:04:47.717734 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6ba410-9588-4f10-acae-5d598473137a" path="/var/lib/kubelet/pods/fb6ba410-9588-4f10-acae-5d598473137a/volumes"
May 14 18:04:48.380137 sshd[4346]: Connection closed by 10.0.0.1 port 45322
May 14 18:04:48.380613 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
May 14 18:04:48.394235 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:45322.service: Deactivated successfully.
May 14 18:04:48.396219 systemd[1]: session-24.scope: Deactivated successfully.
May 14 18:04:48.397085 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit.
May 14 18:04:48.400300 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:42900.service - OpenSSH per-connection server daemon (10.0.0.1:42900).
May 14 18:04:48.401022 systemd-logind[1577]: Removed session 24.
May 14 18:04:48.461835 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 42900 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:04:48.463492 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:48.468141 systemd-logind[1577]: New session 25 of user core.
May 14 18:04:48.482711 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 18:04:48.942351 sshd[4500]: Connection closed by 10.0.0.1 port 42900
May 14 18:04:48.942718 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
May 14 18:04:48.953273 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:42900.service: Deactivated successfully.
May 14 18:04:48.956466 systemd[1]: session-25.scope: Deactivated successfully.
May 14 18:04:48.958312 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit.
May 14 18:04:48.964897 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:42906.service - OpenSSH per-connection server daemon (10.0.0.1:42906).
May 14 18:04:48.968504 kubelet[2741]: I0514 18:04:48.965935 2741 memory_manager.go:355] "RemoveStaleState removing state" podUID="fb6ba410-9588-4f10-acae-5d598473137a" containerName="cilium-agent"
May 14 18:04:48.968504 kubelet[2741]: I0514 18:04:48.965964 2741 memory_manager.go:355] "RemoveStaleState removing state" podUID="a13ab8f9-5a6b-455e-b246-20d18ce2987a" containerName="cilium-operator"
May 14 18:04:48.970389 kubelet[2741]: I0514 18:04:48.970324 2741 status_manager.go:890] "Failed to get status for pod" podUID="e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6" pod="kube-system/cilium-78dkf" err="pods \"cilium-78dkf\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
May 14 18:04:48.973775 systemd-logind[1577]: Removed session 25.
May 14 18:04:48.985027 systemd[1]: Created slice kubepods-burstable-pode1d7d443_d8d5_45cd_aed5_8e8f8cf086e6.slice - libcontainer container kubepods-burstable-pode1d7d443_d8d5_45cd_aed5_8e8f8cf086e6.slice.
May 14 18:04:49.021388 kubelet[2741]: I0514 18:04:49.021338 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-cni-path\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021388 kubelet[2741]: I0514 18:04:49.021389 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-clustermesh-secrets\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021627 kubelet[2741]: I0514 18:04:49.021410 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-hostproc\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021627 kubelet[2741]: I0514 18:04:49.021426 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-hubble-tls\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021747 kubelet[2741]: I0514 18:04:49.021522 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-cilium-run\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021780 kubelet[2741]: I0514 18:04:49.021760 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-xtables-lock\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021810 kubelet[2741]: I0514 18:04:49.021782 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-cilium-ipsec-secrets\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021810 kubelet[2741]: I0514 18:04:49.021805 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-lib-modules\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021871 kubelet[2741]: I0514 18:04:49.021821 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-cilium-cgroup\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021871 kubelet[2741]: I0514 18:04:49.021837 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-cilium-config-path\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021871 kubelet[2741]: I0514 18:04:49.021851 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-bpf-maps\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021961 kubelet[2741]: I0514 18:04:49.021868 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rpq\" (UniqueName: \"kubernetes.io/projected/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-kube-api-access-l8rpq\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021961 kubelet[2741]: I0514 18:04:49.021932 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-etc-cni-netd\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.021961 kubelet[2741]: I0514 18:04:49.021947 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-host-proc-sys-net\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.022061 kubelet[2741]: I0514 18:04:49.021996 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6-host-proc-sys-kernel\") pod \"cilium-78dkf\" (UID: \"e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6\") " pod="kube-system/cilium-78dkf"
May 14 18:04:49.026234 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 42906 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:04:49.027939 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:49.032471 systemd-logind[1577]: New session 26 of user core.
May 14 18:04:49.042702 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 18:04:49.095654 sshd[4515]: Connection closed by 10.0.0.1 port 42906
May 14 18:04:49.096051 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
May 14 18:04:49.109922 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:42906.service: Deactivated successfully.
May 14 18:04:49.111915 systemd[1]: session-26.scope: Deactivated successfully.
May 14 18:04:49.112730 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit.
May 14 18:04:49.115653 systemd[1]: Started sshd@26-10.0.0.50:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910).
May 14 18:04:49.116218 systemd-logind[1577]: Removed session 26.
May 14 18:04:49.180751 sshd[4522]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:04:49.182237 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:04:49.187071 systemd-logind[1577]: New session 27 of user core.
May 14 18:04:49.200746 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 18:04:49.291242 kubelet[2741]: E0514 18:04:49.291181 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:49.292354 containerd[1598]: time="2025-05-14T18:04:49.292305316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-78dkf,Uid:e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6,Namespace:kube-system,Attempt:0,}"
May 14 18:04:49.311596 containerd[1598]: time="2025-05-14T18:04:49.311496916Z" level=info msg="connecting to shim 37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3" address="unix:///run/containerd/s/e7f7a315e1be326154f25381b7abe5259dfb26c6aebf247ae44a8ee9b92bd72a" namespace=k8s.io protocol=ttrpc version=3
May 14 18:04:49.351839 systemd[1]: Started cri-containerd-37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3.scope - libcontainer container 37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3.
May 14 18:04:49.383296 containerd[1598]: time="2025-05-14T18:04:49.383246164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-78dkf,Uid:e1d7d443-d8d5-45cd-aed5-8e8f8cf086e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\""
May 14 18:04:49.384243 kubelet[2741]: E0514 18:04:49.384194 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:49.389594 containerd[1598]: time="2025-05-14T18:04:49.389508507Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:04:49.400574 containerd[1598]: time="2025-05-14T18:04:49.400489331Z" level=info msg="Container 26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:49.419325 containerd[1598]: time="2025-05-14T18:04:49.419273264Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\""
May 14 18:04:49.419985 containerd[1598]: time="2025-05-14T18:04:49.419930947Z" level=info msg="StartContainer for \"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\""
May 14 18:04:49.420879 containerd[1598]: time="2025-05-14T18:04:49.420845299Z" level=info msg="connecting to shim 26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5" address="unix:///run/containerd/s/e7f7a315e1be326154f25381b7abe5259dfb26c6aebf247ae44a8ee9b92bd72a" protocol=ttrpc version=3
May 14 18:04:49.448839 systemd[1]: Started cri-containerd-26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5.scope - libcontainer container 26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5.
May 14 18:04:49.502768 containerd[1598]: time="2025-05-14T18:04:49.502654397Z" level=info msg="StartContainer for \"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\" returns successfully"
May 14 18:04:49.515101 systemd[1]: cri-containerd-26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5.scope: Deactivated successfully.
May 14 18:04:49.517318 containerd[1598]: time="2025-05-14T18:04:49.517281980Z" level=info msg="received exit event container_id:\"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\" id:\"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\" pid:4592 exited_at:{seconds:1747245889 nanos:517022404}"
May 14 18:04:49.517419 containerd[1598]: time="2025-05-14T18:04:49.517388963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\" id:\"26a57039ce8cee78054188ef276b7b8cf92650e4e044e31e6030e23a740f56d5\" pid:4592 exited_at:{seconds:1747245889 nanos:517022404}"
May 14 18:04:50.056503 kubelet[2741]: E0514 18:04:50.056460 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:50.058811 containerd[1598]: time="2025-05-14T18:04:50.058754394Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:04:50.073772 containerd[1598]: time="2025-05-14T18:04:50.073701425Z" level=info msg="Container 916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:50.082946 containerd[1598]: time="2025-05-14T18:04:50.082880238Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\""
May 14 18:04:50.083590 containerd[1598]: time="2025-05-14T18:04:50.083558751Z" level=info msg="StartContainer for \"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\""
May 14 18:04:50.087066 containerd[1598]: time="2025-05-14T18:04:50.085925600Z" level=info msg="connecting to shim 916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0" address="unix:///run/containerd/s/e7f7a315e1be326154f25381b7abe5259dfb26c6aebf247ae44a8ee9b92bd72a" protocol=ttrpc version=3
May 14 18:04:50.109765 systemd[1]: Started cri-containerd-916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0.scope - libcontainer container 916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0.
May 14 18:04:50.151637 containerd[1598]: time="2025-05-14T18:04:50.151566721Z" level=info msg="StartContainer for \"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\" returns successfully"
May 14 18:04:50.157247 systemd[1]: cri-containerd-916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0.scope: Deactivated successfully.
May 14 18:04:50.158400 containerd[1598]: time="2025-05-14T18:04:50.158290959Z" level=info msg="received exit event container_id:\"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\" id:\"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\" pid:4637 exited_at:{seconds:1747245890 nanos:157940802}"
May 14 18:04:50.158651 containerd[1598]: time="2025-05-14T18:04:50.158579248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\" id:\"916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0\" pid:4637 exited_at:{seconds:1747245890 nanos:157940802}"
May 14 18:04:50.182375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-916b751e1dea8fc8d2cc142c64a682d584ba397ba5b96a51e085e51c5258fce0-rootfs.mount: Deactivated successfully.
May 14 18:04:50.756291 kubelet[2741]: E0514 18:04:50.756223 2741 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:04:51.060234 kubelet[2741]: E0514 18:04:51.060035 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:51.062003 containerd[1598]: time="2025-05-14T18:04:51.061961415Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:04:51.081579 containerd[1598]: time="2025-05-14T18:04:51.081517822Z" level=info msg="Container c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:51.102560 containerd[1598]: time="2025-05-14T18:04:51.102493020Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\""
May 14 18:04:51.103119 containerd[1598]: time="2025-05-14T18:04:51.103093053Z" level=info msg="StartContainer for \"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\""
May 14 18:04:51.105079 containerd[1598]: time="2025-05-14T18:04:51.105001236Z" level=info msg="connecting to shim c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252" address="unix:///run/containerd/s/e7f7a315e1be326154f25381b7abe5259dfb26c6aebf247ae44a8ee9b92bd72a" protocol=ttrpc version=3
May 14 18:04:51.129721 systemd[1]: Started cri-containerd-c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252.scope - libcontainer container c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252.
May 14 18:04:51.174173 systemd[1]: cri-containerd-c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252.scope: Deactivated successfully.
May 14 18:04:51.175692 containerd[1598]: time="2025-05-14T18:04:51.175647872Z" level=info msg="StartContainer for \"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\" returns successfully"
May 14 18:04:51.176132 containerd[1598]: time="2025-05-14T18:04:51.176105294Z" level=info msg="received exit event container_id:\"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\" id:\"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\" pid:4682 exited_at:{seconds:1747245891 nanos:175888160}"
May 14 18:04:51.176351 containerd[1598]: time="2025-05-14T18:04:51.176174245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\" id:\"c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252\" pid:4682 exited_at:{seconds:1747245891 nanos:175888160}"
May 14 18:04:51.197274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c23632adb2e90fafdc871e719673c6aa61a917989e0ee57bce8b70bc0abfe252-rootfs.mount: Deactivated successfully.
May 14 18:04:52.064076 kubelet[2741]: E0514 18:04:52.064039 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:52.065515 containerd[1598]: time="2025-05-14T18:04:52.065463573Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:04:52.073352 containerd[1598]: time="2025-05-14T18:04:52.073195469Z" level=info msg="Container 2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:52.082650 containerd[1598]: time="2025-05-14T18:04:52.082593758Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\""
May 14 18:04:52.083041 containerd[1598]: time="2025-05-14T18:04:52.083017354Z" level=info msg="StartContainer for \"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\""
May 14 18:04:52.083986 containerd[1598]: time="2025-05-14T18:04:52.083947154Z" level=info msg="connecting to shim 2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195" address="unix:///run/containerd/s/e7f7a315e1be326154f25381b7abe5259dfb26c6aebf247ae44a8ee9b92bd72a" protocol=ttrpc version=3
May 14 18:04:52.102703 systemd[1]: Started cri-containerd-2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195.scope - libcontainer container 2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195.
May 14 18:04:52.132937 systemd[1]: cri-containerd-2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195.scope: Deactivated successfully.
May 14 18:04:52.134048 containerd[1598]: time="2025-05-14T18:04:52.134007508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\" id:\"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\" pid:4721 exited_at:{seconds:1747245892 nanos:133491385}" May 14 18:04:52.136251 containerd[1598]: time="2025-05-14T18:04:52.136212034Z" level=info msg="received exit event container_id:\"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\" id:\"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\" pid:4721 exited_at:{seconds:1747245892 nanos:133491385}" May 14 18:04:52.146444 containerd[1598]: time="2025-05-14T18:04:52.146388124Z" level=info msg="StartContainer for \"2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195\" returns successfully" May 14 18:04:52.159131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f9c5f2a00eca4d538d15542da9fcc36943f47d1cd37567e49a050fca0a6a195-rootfs.mount: Deactivated successfully. 
May 14 18:04:52.715077 kubelet[2741]: E0514 18:04:52.715025 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:53.068403 kubelet[2741]: E0514 18:04:53.068296 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:53.069793 containerd[1598]: time="2025-05-14T18:04:53.069755927Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:04:53.079387 containerd[1598]: time="2025-05-14T18:04:53.079338600Z" level=info msg="Container b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:53.088061 containerd[1598]: time="2025-05-14T18:04:53.088014448Z" level=info msg="CreateContainer within sandbox \"37b46788e503c58d8acd5d8fb840ea9fad1dfde4abb2908a9bee4ca799bee6e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\""
May 14 18:04:53.088508 containerd[1598]: time="2025-05-14T18:04:53.088460848Z" level=info msg="StartContainer for \"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\""
May 14 18:04:53.089278 containerd[1598]: time="2025-05-14T18:04:53.089234810Z" level=info msg="connecting to shim b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d" address="unix:///run/containerd/s/e7f7a315e1be326154f25381b7abe5259dfb26c6aebf247ae44a8ee9b92bd72a" protocol=ttrpc version=3
May 14 18:04:53.108674 systemd[1]: Started cri-containerd-b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d.scope - libcontainer container b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d.
May 14 18:04:53.142713 containerd[1598]: time="2025-05-14T18:04:53.142669582Z" level=info msg="StartContainer for \"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" returns successfully"
May 14 18:04:53.211964 containerd[1598]: time="2025-05-14T18:04:53.211915809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" id:\"99742d3c178ca729f62e67d27d87b7e9066a4e3657b1a0303144c987c7c990f4\" pid:4791 exited_at:{seconds:1747245893 nanos:211570853}"
May 14 18:04:53.595572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 14 18:04:54.074615 kubelet[2741]: E0514 18:04:54.074579 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:54.089878 kubelet[2741]: I0514 18:04:54.089799 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-78dkf" podStartSLOduration=6.089780573 podStartE2EDuration="6.089780573s" podCreationTimestamp="2025-05-14 18:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:54.089473148 +0000 UTC m=+88.491644297" watchObservedRunningTime="2025-05-14 18:04:54.089780573 +0000 UTC m=+88.491951712"
May 14 18:04:55.292344 kubelet[2741]: E0514 18:04:55.292298 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:55.556675 containerd[1598]: time="2025-05-14T18:04:55.556444216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" id:\"06cf6b87cd25cd0a4f9ac99049d204facaf79dc84fb1142eedf2ce59765cfa87\" pid:4952 exit_status:1 exited_at:{seconds:1747245895 nanos:556013277}"
May 14 18:04:56.742002 systemd-networkd[1504]: lxc_health: Link UP
May 14 18:04:56.742997 systemd-networkd[1504]: lxc_health: Gained carrier
May 14 18:04:57.294335 kubelet[2741]: E0514 18:04:57.294286 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:57.690915 containerd[1598]: time="2025-05-14T18:04:57.690837118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" id:\"289c5ef8eacda59d09876a645d8409801a150f65a31d94d300a44a6b82dfbaed\" pid:5323 exited_at:{seconds:1747245897 nanos:690397122}"
May 14 18:04:58.059813 systemd-networkd[1504]: lxc_health: Gained IPv6LL
May 14 18:04:58.083715 kubelet[2741]: E0514 18:04:58.083661 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:58.715080 kubelet[2741]: E0514 18:04:58.714972 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 18:04:59.790927 containerd[1598]: time="2025-05-14T18:04:59.790876393Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" id:\"1886acd3e81e7b9f0e680d9d8ddf80a34043ff78f267d183d221787e7d5e3d5d\" pid:5359 exited_at:{seconds:1747245899 nanos:789848580}"
May 14 18:05:01.910738 containerd[1598]: time="2025-05-14T18:05:01.910682155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" id:\"e84294567249585b242aedce156256dd8d4a1cbfe75053a929823387e95da1ee\" pid:5390 exited_at:{seconds:1747245901 nanos:910171315}"
May 14 18:05:04.015017 containerd[1598]: time="2025-05-14T18:05:04.014968884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70f1420e0662fa86812fe9f52f47b2cc8d8221d29c33c934783e2f212c3f11d\" id:\"b847fd420f390d84bcec6360638b23f963653c4c41de7af33595f062900d197c\" pid:5416 exited_at:{seconds:1747245904 nanos:14493722}"
May 14 18:05:04.035561 sshd[4529]: Connection closed by 10.0.0.1 port 42910
May 14 18:05:04.036044 sshd-session[4522]: pam_unix(sshd:session): session closed for user core
May 14 18:05:04.040230 systemd[1]: sshd@26-10.0.0.50:22-10.0.0.1:42910.service: Deactivated successfully.
May 14 18:05:04.042897 systemd[1]: session-27.scope: Deactivated successfully.
May 14 18:05:04.044841 systemd-logind[1577]: Session 27 logged out. Waiting for processes to exit.
May 14 18:05:04.046960 systemd-logind[1577]: Removed session 27.