Jul 7 00:13:07.842459 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025
Jul 7 00:13:07.842482 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:13:07.842494 kernel: BIOS-provided physical RAM map:
Jul 7 00:13:07.842500 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 00:13:07.842507 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 7 00:13:07.842513 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 7 00:13:07.842521 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 7 00:13:07.842527 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 7 00:13:07.842536 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 7 00:13:07.842547 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 7 00:13:07.842561 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 7 00:13:07.842575 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 7 00:13:07.842585 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 7 00:13:07.842599 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 7 00:13:07.842613 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 7 00:13:07.842620 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 7 00:13:07.842630 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 7 00:13:07.842637 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 7 00:13:07.842644 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 7 00:13:07.842651 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 7 00:13:07.842658 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 7 00:13:07.842665 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 7 00:13:07.842672 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 00:13:07.842678 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:13:07.842685 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 7 00:13:07.842694 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 00:13:07.842701 kernel: NX (Execute Disable) protection: active
Jul 7 00:13:07.842708 kernel: APIC: Static calls initialized
Jul 7 00:13:07.842715 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 7 00:13:07.842722 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 7 00:13:07.842729 kernel: extended physical RAM map:
Jul 7 00:13:07.842736 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 00:13:07.842743 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 7 00:13:07.842750 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 7 00:13:07.842757 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 7 00:13:07.842764 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 7 00:13:07.842773 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 7 00:13:07.842780 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 7 00:13:07.842862 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 7 00:13:07.842871 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 7 00:13:07.842891 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 7 00:13:07.842898 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 7 00:13:07.842908 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 7 00:13:07.842915 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 7 00:13:07.842923 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 7 00:13:07.842930 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 7 00:13:07.842937 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 7 00:13:07.842945 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 7 00:13:07.842952 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 7 00:13:07.842959 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 7 00:13:07.842967 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 7 00:13:07.842976 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 7 00:13:07.842983 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 7 00:13:07.842990 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 7 00:13:07.842997 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 00:13:07.843005 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:13:07.843012 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 7 00:13:07.843031 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 00:13:07.843039 kernel: efi: EFI v2.7 by EDK II
Jul 7 00:13:07.843046 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 7 00:13:07.843053 kernel: random: crng init done
Jul 7 00:13:07.843064 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 7 00:13:07.843072 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 7 00:13:07.843083 kernel: secureboot: Secure boot disabled
Jul 7 00:13:07.843091 kernel: SMBIOS 2.8 present.
Jul 7 00:13:07.843098 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 7 00:13:07.843105 kernel: DMI: Memory slots populated: 1/1
Jul 7 00:13:07.843113 kernel: Hypervisor detected: KVM
Jul 7 00:13:07.843120 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:13:07.843127 kernel: kvm-clock: using sched offset of 4909499285 cycles
Jul 7 00:13:07.843135 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:13:07.843143 kernel: tsc: Detected 2794.748 MHz processor
Jul 7 00:13:07.843151 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:13:07.843160 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:13:07.843167 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 7 00:13:07.843175 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 00:13:07.843182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:13:07.843190 kernel: Using GB pages for direct mapping
Jul 7 00:13:07.843197 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:13:07.843204 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 7 00:13:07.843212 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 00:13:07.843220 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:13:07.843229 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:13:07.843236 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 7 00:13:07.843244 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:13:07.843251 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:13:07.843259 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:13:07.843277 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:13:07.843285 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 7 00:13:07.843292 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 7 00:13:07.843300 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 7 00:13:07.843310 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 7 00:13:07.843327 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 7 00:13:07.843335 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 7 00:13:07.843342 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 7 00:13:07.843349 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 7 00:13:07.843357 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 7 00:13:07.843364 kernel: No NUMA configuration found
Jul 7 00:13:07.843372 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 7 00:13:07.843379 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 7 00:13:07.843389 kernel: Zone ranges:
Jul 7 00:13:07.843397 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:13:07.843405 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 7 00:13:07.843412 kernel: Normal empty
Jul 7 00:13:07.843419 kernel: Device empty
Jul 7 00:13:07.843427 kernel: Movable zone start for each node
Jul 7 00:13:07.843434 kernel: Early memory node ranges
Jul 7 00:13:07.843441 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 00:13:07.843449 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 7 00:13:07.843458 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 7 00:13:07.843468 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 7 00:13:07.843475 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 7 00:13:07.843483 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 7 00:13:07.843490 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 7 00:13:07.843497 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 7 00:13:07.843505 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 7 00:13:07.843512 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:13:07.843520 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 00:13:07.843535 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 7 00:13:07.843543 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:13:07.843551 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 7 00:13:07.843558 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 7 00:13:07.843568 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 7 00:13:07.843576 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 7 00:13:07.843583 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 7 00:13:07.843591 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:13:07.843599 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:13:07.843609 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:13:07.843617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:13:07.843625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:13:07.843632 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:13:07.843640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:13:07.843648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:13:07.843655 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:13:07.843663 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:13:07.843671 kernel: TSC deadline timer available
Jul 7 00:13:07.843680 kernel: CPU topo: Max. logical packages: 1
Jul 7 00:13:07.843688 kernel: CPU topo: Max. logical dies: 1
Jul 7 00:13:07.843696 kernel: CPU topo: Max. dies per package: 1
Jul 7 00:13:07.843703 kernel: CPU topo: Max. threads per core: 1
Jul 7 00:13:07.843711 kernel: CPU topo: Num. cores per package: 4
Jul 7 00:13:07.843719 kernel: CPU topo: Num. threads per package: 4
Jul 7 00:13:07.843726 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 00:13:07.843734 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:13:07.843742 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 00:13:07.843750 kernel: kvm-guest: setup PV sched yield
Jul 7 00:13:07.843759 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 7 00:13:07.843767 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:13:07.843775 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:13:07.843783 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 00:13:07.843865 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 00:13:07.843873 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 00:13:07.843888 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 00:13:07.843896 kernel: kvm-guest: PV spinlocks enabled
Jul 7 00:13:07.843907 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 00:13:07.843917 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:13:07.843928 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:13:07.843935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:13:07.843943 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:13:07.843951 kernel: Fallback order for Node 0: 0
Jul 7 00:13:07.843959 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 7 00:13:07.843967 kernel: Policy zone: DMA32
Jul 7 00:13:07.843974 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:13:07.843984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 00:13:07.843992 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 00:13:07.844000 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 00:13:07.844007 kernel: Dynamic Preempt: voluntary
Jul 7 00:13:07.844015 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:13:07.844023 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:13:07.844031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 00:13:07.844039 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:13:07.844047 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:13:07.844057 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:13:07.844064 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:13:07.844072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 00:13:07.844080 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:13:07.844088 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:13:07.844096 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:13:07.844104 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 00:13:07.844112 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:13:07.844119 kernel: Console: colour dummy device 80x25
Jul 7 00:13:07.844129 kernel: printk: legacy console [ttyS0] enabled
Jul 7 00:13:07.844137 kernel: ACPI: Core revision 20240827
Jul 7 00:13:07.844145 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 00:13:07.844153 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:13:07.844161 kernel: x2apic enabled
Jul 7 00:13:07.844168 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:13:07.844176 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 00:13:07.844184 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 00:13:07.844192 kernel: kvm-guest: setup PV IPIs
Jul 7 00:13:07.844201 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:13:07.844210 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 00:13:07.844218 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 7 00:13:07.844225 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 00:13:07.844233 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 00:13:07.844241 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 00:13:07.844249 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:13:07.844256 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:13:07.844264 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:13:07.844274 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 00:13:07.844282 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 00:13:07.844290 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:13:07.844300 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:13:07.844308 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 00:13:07.844316 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 00:13:07.844324 kernel: x86/bugs: return thunk changed
Jul 7 00:13:07.844332 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 00:13:07.844342 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:13:07.844350 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:13:07.844357 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:13:07.844365 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:13:07.844373 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 00:13:07.844381 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:13:07.844388 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:13:07.844396 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:13:07.844404 kernel: landlock: Up and running.
Jul 7 00:13:07.844413 kernel: SELinux: Initializing.
Jul 7 00:13:07.844421 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:13:07.844429 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:13:07.844437 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 00:13:07.844445 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 00:13:07.844452 kernel: ... version: 0
Jul 7 00:13:07.844470 kernel: ... bit width: 48
Jul 7 00:13:07.844480 kernel: ... generic registers: 6
Jul 7 00:13:07.844487 kernel: ... value mask: 0000ffffffffffff
Jul 7 00:13:07.844515 kernel: ... max period: 00007fffffffffff
Jul 7 00:13:07.844524 kernel: ... fixed-purpose events: 0
Jul 7 00:13:07.844531 kernel: ... event mask: 000000000000003f
Jul 7 00:13:07.844539 kernel: signal: max sigframe size: 1776
Jul 7 00:13:07.844547 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:13:07.844555 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:13:07.844563 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:13:07.844570 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:13:07.844578 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:13:07.844588 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 00:13:07.844596 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 00:13:07.844604 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 7 00:13:07.844612 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137196K reserved, 0K cma-reserved)
Jul 7 00:13:07.844620 kernel: devtmpfs: initialized
Jul 7 00:13:07.844628 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:13:07.844636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 7 00:13:07.844644 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 7 00:13:07.844652 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 7 00:13:07.844667 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 7 00:13:07.844675 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 7 00:13:07.844683 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 7 00:13:07.844691 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:13:07.844698 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 00:13:07.844706 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:13:07.844717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:13:07.844724 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:13:07.844732 kernel: audit: type=2000 audit(1751847184.261:1): state=initialized audit_enabled=0 res=1
Jul 7 00:13:07.844742 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:13:07.844750 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:13:07.844758 kernel: cpuidle: using governor menu
Jul 7 00:13:07.844765 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:13:07.844773 kernel: dca service started, version 1.12.1
Jul 7 00:13:07.844781 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 7 00:13:07.844803 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:13:07.844821 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:13:07.844829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:13:07.844840 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:13:07.844847 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:13:07.844855 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:13:07.844863 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:13:07.844871 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:13:07.844886 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:13:07.844894 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:13:07.844901 kernel: ACPI: Interpreter enabled
Jul 7 00:13:07.844909 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 00:13:07.844919 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:13:07.844928 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:13:07.844935 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:13:07.844943 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 00:13:07.844951 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:13:07.845134 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:13:07.845260 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 00:13:07.845384 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 00:13:07.845394 kernel: PCI host bridge to bus 0000:00
Jul 7 00:13:07.845560 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:13:07.845674 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:13:07.845824 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:13:07.845948 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 7 00:13:07.846058 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 7 00:13:07.846172 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 7 00:13:07.846280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:13:07.846416 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 00:13:07.846545 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 00:13:07.846665 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 7 00:13:07.846784 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 7 00:13:07.846981 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 7 00:13:07.847106 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:13:07.847241 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 00:13:07.847361 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 7 00:13:07.847482 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 7 00:13:07.847601 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 7 00:13:07.847744 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 00:13:07.847924 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 7 00:13:07.848048 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 7 00:13:07.848168 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 7 00:13:07.848358 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 00:13:07.848482 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 7 00:13:07.848634 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 7 00:13:07.848764 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 7 00:13:07.848930 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 7 00:13:07.849075 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 00:13:07.849196 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 00:13:07.849323 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 00:13:07.849443 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 7 00:13:07.849561 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 7 00:13:07.849687 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 00:13:07.849832 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 7 00:13:07.849844 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:13:07.849852 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:13:07.849860 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:13:07.849868 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:13:07.849883 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 00:13:07.849891 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 00:13:07.849899 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 00:13:07.849911 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 00:13:07.849919 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 00:13:07.849926 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 00:13:07.849934 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 00:13:07.849942 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 00:13:07.849950 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 00:13:07.849958 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 00:13:07.849966 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 00:13:07.849974 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 00:13:07.849984 kernel: iommu: Default domain type: Translated
Jul 7 00:13:07.849991 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:13:07.849999 kernel: efivars: Registered efivars operations
Jul 7 00:13:07.850007 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:13:07.850015 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:13:07.850023 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 7 00:13:07.850030 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 7 00:13:07.850038 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 7 00:13:07.850046 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 7 00:13:07.850055 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 7 00:13:07.850064 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 7 00:13:07.850072 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 7 00:13:07.850079 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 7 00:13:07.850201 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 00:13:07.850320 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 00:13:07.850439 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:13:07.850449 kernel: vgaarb: loaded
Jul 7 00:13:07.850461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 00:13:07.850469 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 00:13:07.850477 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:13:07.850485 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:13:07.850493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:13:07.850501 kernel: pnp: PnP ACPI init
Jul 7 00:13:07.850635 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 7 00:13:07.850662 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 00:13:07.850674 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:13:07.850682 kernel: NET: Registered PF_INET protocol family
Jul 7 00:13:07.850691 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:13:07.850699 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:13:07.850707 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:13:07.850715 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:13:07.850724 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:13:07.850732 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:13:07.850740 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:13:07.850750 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:13:07.850759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:13:07.850767 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:13:07.850912 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 7 00:13:07.851035 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 7 00:13:07.851146 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:13:07.851258 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:13:07.851368 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:13:07.851481 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 7 00:13:07.851590 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 7 00:13:07.851698 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 7 00:13:07.851708 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:13:07.851717 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 00:13:07.851726 kernel: Initialise system trusted keyrings
Jul 7 00:13:07.851734 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:13:07.851743 kernel: Key type asymmetric registered
Jul 7 00:13:07.851754 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:13:07.851762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 00:13:07.851771 kernel: io scheduler mq-deadline registered
Jul 7 00:13:07.851781 kernel: io scheduler kyber registered
Jul 7 00:13:07.851804 kernel: io scheduler bfq registered
Jul 7 00:13:07.851825 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:13:07.851836 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 00:13:07.851844 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 00:13:07.851852 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 00:13:07.851860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:13:07.851868 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:13:07.851886 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:13:07.851895 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:13:07.851904 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:13:07.852033 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 00:13:07.852048 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:13:07.852172 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 00:13:07.852328 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T00:13:07 UTC (1751847187)
Jul 7 00:13:07.852443 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 00:13:07.852454 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 00:13:07.852462 kernel: efifb: probing for efifb
Jul 7 00:13:07.852470 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 7 00:13:07.852478 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 7 00:13:07.852491 kernel: efifb: scrolling: redraw
Jul 7 00:13:07.852499 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 00:13:07.852507 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 00:13:07.852515 kernel: fb0: EFI VGA frame buffer device
Jul 7 00:13:07.852523 kernel: pstore: Using crash dump compression: deflate
Jul 7 00:13:07.852532 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 00:13:07.852540 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:13:07.852548 kernel: Segment Routing with IPv6
Jul 7 00:13:07.852556 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:13:07.852566 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:13:07.852575 kernel: Key type dns_resolver registered
Jul 7 00:13:07.852582 kernel: IPI shorthand broadcast: enabled
Jul 7 00:13:07.852591 kernel: sched_clock: Marking stable (3574002503, 156824630)->(3796552521, -65725388)
Jul 7 00:13:07.852599 kernel: registered taskstats version 1
Jul 7 00:13:07.852607 kernel: Loading compiled-in X.509 certificates
Jul 7 00:13:07.852615 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06'
Jul 7 00:13:07.852623 kernel: Demotion targets for Node 0: null
Jul 7 00:13:07.852631 kernel: Key type .fscrypt registered
Jul 7 00:13:07.852642 kernel: Key type fscrypt-provisioning registered
Jul 7 00:13:07.852650 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:13:07.852658 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:13:07.852666 kernel: ima: No architecture policies found
Jul 7 00:13:07.852674 kernel: clk: Disabling unused clocks
Jul 7 00:13:07.852682 kernel: Warning: unable to open an initial console.
Jul 7 00:13:07.852691 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 00:13:07.852699 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 00:13:07.852707 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 00:13:07.852717 kernel: Run /init as init process
Jul 7 00:13:07.852725 kernel: with arguments:
Jul 7 00:13:07.852733 kernel: /init
Jul 7 00:13:07.852741 kernel: with environment:
Jul 7 00:13:07.852749 kernel: HOME=/
Jul 7 00:13:07.852757 kernel: TERM=linux
Jul 7 00:13:07.852765 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:13:07.852779 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:13:07.852808 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:13:07.852833 systemd[1]: Detected virtualization kvm.
Jul 7 00:13:07.852842 systemd[1]: Detected architecture x86-64.
Jul 7 00:13:07.852850 systemd[1]: Running in initrd.
Jul 7 00:13:07.852859 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:13:07.852868 systemd[1]: Hostname set to <localhost>.
Jul 7 00:13:07.852883 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:13:07.852892 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:13:07.852904 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:13:07.852912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:13:07.852922 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:13:07.852930 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:13:07.852939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:13:07.852949 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:13:07.852959 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:13:07.852969 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:13:07.852980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:13:07.852989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:13:07.852997 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:13:07.853006 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:13:07.853015 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:13:07.853024 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:13:07.853032 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:13:07.853043 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:13:07.853052 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:13:07.853061 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:13:07.853069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:13:07.853078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:13:07.853087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:13:07.853095 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:13:07.853104 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:13:07.853113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:13:07.853124 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:13:07.853142 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:13:07.853161 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:13:07.853170 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:13:07.853179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:13:07.853188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:13:07.853196 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:13:07.853208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:13:07.853217 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:13:07.853255 systemd-journald[221]: Collecting audit messages is disabled.
Jul 7 00:13:07.853278 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:13:07.853287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:13:07.853296 systemd-journald[221]: Journal started
Jul 7 00:13:07.853317 systemd-journald[221]: Runtime Journal (/run/log/journal/38b1a032e0a045ec89fa47091d0f7526) is 6M, max 48.5M, 42.4M free.
Jul 7 00:13:07.843352 systemd-modules-load[222]: Inserted module 'overlay'
Jul 7 00:13:07.856809 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:13:07.857381 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:13:07.863262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:13:07.866945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:13:07.872821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:13:07.874821 kernel: Bridge firewalling registered
Jul 7 00:13:07.874861 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 7 00:13:07.880192 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:13:07.880530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:13:07.882274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:13:07.892735 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 00:13:07.893738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:13:07.895722 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:13:07.898800 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:13:07.901194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:13:07.912951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:13:07.915066 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:13:07.944107 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:13:07.951278 systemd-resolved[261]: Positive Trust Anchors:
Jul 7 00:13:07.951293 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:13:07.951325 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:13:07.954192 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 7 00:13:07.955604 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:13:07.960700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:13:08.066827 kernel: SCSI subsystem initialized
Jul 7 00:13:08.075821 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:13:08.086820 kernel: iscsi: registered transport (tcp)
Jul 7 00:13:08.108122 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:13:08.108168 kernel: QLogic iSCSI HBA Driver
Jul 7 00:13:08.281153 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:13:08.309950 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:13:08.313545 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:13:08.381952 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:13:08.384099 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:13:08.446824 kernel: raid6: avx2x4 gen() 28788 MB/s
Jul 7 00:13:08.463829 kernel: raid6: avx2x2 gen() 29125 MB/s
Jul 7 00:13:08.480872 kernel: raid6: avx2x1 gen() 25168 MB/s
Jul 7 00:13:08.480895 kernel: raid6: using algorithm avx2x2 gen() 29125 MB/s
Jul 7 00:13:08.498909 kernel: raid6: .... xor() 18697 MB/s, rmw enabled
Jul 7 00:13:08.498940 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 00:13:08.520836 kernel: xor: automatically using best checksumming function avx
Jul 7 00:13:08.687872 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:13:08.697216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:13:08.701470 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:13:08.747723 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jul 7 00:13:08.754321 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:13:08.755711 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:13:08.778435 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jul 7 00:13:08.809860 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:13:08.811255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:13:08.894620 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:13:08.897428 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:13:08.926818 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 00:13:08.932641 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 00:13:08.939242 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:13:08.939274 kernel: GPT:9289727 != 19775487
Jul 7 00:13:08.939285 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:13:08.939295 kernel: GPT:9289727 != 19775487
Jul 7 00:13:08.957882 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:13:08.957949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:13:08.983631 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:13:08.984031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:13:09.032384 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:13:09.032411 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 00:13:09.032576 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:13:09.038829 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:13:09.036012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:13:09.037463 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:13:09.045642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:13:09.046226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:13:09.055513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:13:09.078813 kernel: libata version 3.00 loaded.
Jul 7 00:13:09.092295 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 00:13:09.092548 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 00:13:09.097451 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 00:13:09.097639 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 00:13:09.097781 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 00:13:09.101384 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 00:13:09.106650 kernel: scsi host0: ahci
Jul 7 00:13:09.107063 kernel: scsi host1: ahci
Jul 7 00:13:09.107355 kernel: scsi host2: ahci
Jul 7 00:13:09.107656 kernel: scsi host3: ahci
Jul 7 00:13:09.107961 kernel: scsi host4: ahci
Jul 7 00:13:09.108243 kernel: scsi host5: ahci
Jul 7 00:13:09.108519 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 7 00:13:09.108544 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 7 00:13:09.108559 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 7 00:13:09.108569 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 7 00:13:09.108580 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 7 00:13:09.102027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:13:09.114251 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 7 00:13:09.126500 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 00:13:09.137423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:13:09.146336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 00:13:09.146481 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 00:13:09.148288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:13:09.183252 disk-uuid[636]: Primary Header is updated.
Jul 7 00:13:09.183252 disk-uuid[636]: Secondary Entries is updated.
Jul 7 00:13:09.183252 disk-uuid[636]: Secondary Header is updated.
Jul 7 00:13:09.187834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:13:09.193834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:13:09.418889 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 00:13:09.418976 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 00:13:09.418989 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 00:13:09.418999 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 00:13:09.419827 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 00:13:09.420819 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 00:13:09.421983 kernel: ata3.00: applying bridge limits
Jul 7 00:13:09.422000 kernel: ata3.00: configured for UDMA/100
Jul 7 00:13:09.422823 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 00:13:09.427819 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 00:13:09.467392 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 00:13:09.467605 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:13:09.487819 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 00:13:09.911586 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:13:09.913200 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:13:09.915040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:13:09.916282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:13:09.919285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:13:09.947604 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:13:10.193852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:13:10.194180 disk-uuid[638]: The operation has completed successfully.
Jul 7 00:13:10.224744 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:13:10.224888 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:13:10.257912 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:13:10.273232 sh[666]: Success
Jul 7 00:13:10.291830 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:13:10.291876 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:13:10.291888 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:13:10.301886 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 00:13:10.333780 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:13:10.337748 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:13:10.352602 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:13:10.360471 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:13:10.360496 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (678)
Jul 7 00:13:10.363241 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239
Jul 7 00:13:10.363266 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:13:10.363277 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:13:10.367727 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:13:10.369917 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:13:10.372115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:13:10.374683 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:13:10.377541 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:13:10.400819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711)
Jul 7 00:13:10.400864 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:13:10.401825 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:13:10.402806 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 00:13:10.409824 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:13:10.410098 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:13:10.413032 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:13:10.533089 ignition[752]: Ignition 2.21.0
Jul 7 00:13:10.533103 ignition[752]: Stage: fetch-offline
Jul 7 00:13:10.535275 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:13:10.533147 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:10.533157 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:13:10.538507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:13:10.533267 ignition[752]: parsed url from cmdline: "" Jul 7 00:13:10.533271 ignition[752]: no config URL provided Jul 7 00:13:10.533276 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:13:10.533293 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:13:10.533316 ignition[752]: op(1): [started] loading QEMU firmware config module Jul 7 00:13:10.533321 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 00:13:10.544919 ignition[752]: op(1): [finished] loading QEMU firmware config module Jul 7 00:13:10.588463 ignition[752]: parsing config with SHA512: cd6efd690edcb89f06df5723ea4458490d2a89c1c170c5fad2148611e64d4c3c9a85112dfb33fa276d9b34d481822c294e16dbc0bf26a437d271b5b9ae019c1f Jul 7 00:13:10.592541 unknown[752]: fetched base config from "system" Jul 7 00:13:10.592555 unknown[752]: fetched user config from "qemu" Jul 7 00:13:10.592971 ignition[752]: fetch-offline: fetch-offline passed Jul 7 00:13:10.593016 systemd-networkd[855]: lo: Link UP Jul 7 00:13:10.593024 ignition[752]: Ignition finished successfully Jul 7 00:13:10.593020 systemd-networkd[855]: lo: Gained carrier Jul 7 00:13:10.594731 systemd-networkd[855]: Enumeration completed Jul 7 00:13:10.594896 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:13:10.596311 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:13:10.596315 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:13:10.597033 systemd[1]: Reached target network.target - Network. Jul 7 00:13:10.597761 systemd-networkd[855]: eth0: Link UP Jul 7 00:13:10.597767 systemd-networkd[855]: eth0: Gained carrier Jul 7 00:13:10.597784 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:13:10.599213 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:13:10.601322 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 00:13:10.602321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:13:10.613881 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:13:10.643935 ignition[859]: Ignition 2.21.0 Jul 7 00:13:10.643952 ignition[859]: Stage: kargs Jul 7 00:13:10.644096 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:10.644108 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:13:10.646526 ignition[859]: kargs: kargs passed Jul 7 00:13:10.646588 ignition[859]: Ignition finished successfully Jul 7 00:13:10.650977 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:13:10.653425 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
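The "parsing config with SHA512" line prints a 128-hex-digit digest of the Ignition config obtained via the qemu_fw_cfg module, presumably computed over the raw bytes before parsing. Reproducing that kind of fingerprint is one line of Python:

```python
import hashlib

def config_fingerprint(raw: bytes) -> str:
    """Return the SHA-512 hex digest of a raw config blob."""
    return hashlib.sha512(raw).hexdigest()

# e.g. config_fingerprint(open("/usr/lib/ignition/user.ign", "rb").read())
# yields a 128-hex-digit string like the one logged above.
```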
Jul 7 00:13:10.701168 ignition[868]: Ignition 2.21.0 Jul 7 00:13:10.701184 ignition[868]: Stage: disks Jul 7 00:13:10.701358 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:10.701373 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:13:10.703182 ignition[868]: disks: disks passed Jul 7 00:13:10.703259 ignition[868]: Ignition finished successfully Jul 7 00:13:10.709055 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:13:10.711137 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:13:10.711226 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:13:10.715364 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:13:10.715433 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:13:10.717266 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:13:10.718595 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:13:10.760632 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 00:13:10.768024 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:13:10.770983 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:13:10.897844 kernel: EXT4-fs (vda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none. Jul 7 00:13:10.898741 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:13:10.899388 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:13:10.901692 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:13:10.904151 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:13:10.905368 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 00:13:10.905412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:13:10.905440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:13:10.920223 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:13:10.924892 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Jul 7 00:13:10.922816 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 00:13:10.928509 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:13:10.928533 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:13:10.928544 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:13:10.931859 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:13:10.970571 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:13:10.976196 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:13:10.981665 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:13:10.986566 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:13:11.082460 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:13:11.084618 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
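The systemd-fsck summary reads as used/total counts: 15 of 553,520 inodes and 52,789 of 553,472 blocks in use, i.e. a nearly empty ROOT filesystem. A quick check of the percentages:

```python
# Figures taken from the systemd-fsck line above.
files_used, files_total = 15, 553_520
blocks_used, blocks_total = 52_789, 553_472

print(f"inodes in use: {100 * files_used / files_total:.4f}%")   # ~0.0027%
print(f"blocks in use: {100 * blocks_used / blocks_total:.1f}%") # ~9.5%
```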
Jul 7 00:13:11.086215 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:13:11.105831 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:13:11.120564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:13:11.136566 ignition[1003]: INFO : Ignition 2.21.0 Jul 7 00:13:11.136566 ignition[1003]: INFO : Stage: mount Jul 7 00:13:11.138260 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:11.138260 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:13:11.140380 ignition[1003]: INFO : mount: mount passed Jul 7 00:13:11.140380 ignition[1003]: INFO : Ignition finished successfully Jul 7 00:13:11.141157 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:13:11.144012 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:13:11.361132 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:13:11.364324 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:13:11.397826 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Jul 7 00:13:11.400102 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:13:11.400126 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:13:11.400150 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:13:11.405562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:13:11.493346 ignition[1031]: INFO : Ignition 2.21.0 Jul 7 00:13:11.493346 ignition[1031]: INFO : Stage: files Jul 7 00:13:11.495205 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:11.495205 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:13:11.495205 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:13:11.495205 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:13:11.495205 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:13:11.501416 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:13:11.501416 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:13:11.501416 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:13:11.501416 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 00:13:11.501416 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 00:13:11.498613 unknown[1031]: wrote ssh authorized keys file for user: core Jul 7 00:13:11.548666 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:13:11.652731 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 00:13:11.652731 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:13:11.656348 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 00:13:12.029382 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 00:13:12.149305 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:13:12.149305 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:13:12.153779 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:13:12.166092 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:13:12.166092 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:13:12.166092 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:13:12.166092 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:13:12.166092 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:13:12.166092 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 00:13:12.267118 systemd-networkd[855]: eth0: Gained IPv6LL Jul 7 00:13:12.806831 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 00:13:13.600894 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:13:13.600894 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 00:13:13.604844 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:13:13.607468 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:13:13.607468 ignition[1031]: 
INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 00:13:13.607468 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 7 00:13:13.612186 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:13:13.612186 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:13:13.612186 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 7 00:13:13.612186 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 7 00:13:13.632904 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:13:13.637098 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:13:13.638658 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 7 00:13:13.638658 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:13:13.638658 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:13:13.638658 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:13:13.638658 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:13:13.638658 ignition[1031]: INFO : files: files passed Jul 7 00:13:13.638658 ignition[1031]: INFO : Ignition finished successfully Jul 7 00:13:13.642583 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:13:13.645279 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:13:13.646767 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:13:13.661847 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:13:13.661972 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:13:13.666291 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 00:13:13.669177 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:13:13.669177 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:13:13.672188 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:13:13.675477 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:13:13.675741 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:13:13.679972 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:13:13.761435 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:13:13.761568 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:13:13.762679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
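The whole files stage above (ops 3 through 13), including the unit writes and the preset changes, is driven by one declarative config. The sketch below is a hypothetical fragment shaped like an Ignition v3 config that would produce these operations; the actual user config is not shown in the log, so every value here is inferred from the logged ops.

```python
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [{
            # op(3): fetch and write the helm tarball
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {
                "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
            },
        }],
        "links": [{
            # op(a): the sysext symlink under /etc/extensions
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
        }],
    },
    "systemd": {
        "units": [
            # the 'enabled' flags become the preset lines in ops 10 and 12
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}
print(json.dumps(config, indent=2))
```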
Jul 7 00:13:13.764800 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:13:13.766705 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:13:13.767584 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:13:13.814061 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:13:13.815724 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:13:13.839821 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:13:13.840006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:13:13.842143 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:13:13.845100 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:13:13.845246 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:13:13.847867 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:13:13.849975 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:13:13.850987 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:13:13.851291 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:13:13.851610 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:13:13.852101 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:13:13.852414 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:13:13.852737 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:13:13.853235 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:13:13.853543 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:13:13.853905 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:13:13.854311 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:13:13.854455 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:13:13.870515 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:13:13.871628 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:13:13.872071 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:13:13.872202 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:13:13.875596 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:13:13.875726 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:13:13.879701 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:13:13.879886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:13:13.880860 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:13:13.881202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:13:13.882893 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:13:13.884613 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:13:13.887267 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:13:13.887582 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 7 00:13:13.887683 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:13:13.891134 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:13:13.891219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:13:13.892831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:13:13.892966 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:13:13.893268 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:13:13.893377 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:13:13.898897 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:13:13.901680 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:13:13.903592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:13:13.903771 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:13:13.905783 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:13:13.905949 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:13:13.913188 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:13:13.914941 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:13:13.938891 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:13:13.941973 ignition[1086]: INFO : Ignition 2.21.0 Jul 7 00:13:13.941973 ignition[1086]: INFO : Stage: umount Jul 7 00:13:13.943554 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:13:13.943554 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:13:13.943554 ignition[1086]: INFO : umount: umount passed Jul 7 00:13:13.943554 ignition[1086]: INFO : Ignition finished successfully Jul 7 00:13:13.948946 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:13:13.949682 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:13:13.952394 systemd[1]: Stopped target network.target - Network. Jul 7 00:13:13.952480 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:13:13.952542 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:13:13.954073 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:13:13.954122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:13:13.956878 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:13:13.956933 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:13:13.957853 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:13:13.957900 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:13:13.958275 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:13:13.961568 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:13:13.970352 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:13:13.970498 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:13:13.975675 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:13:13.975971 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:13:13.976110 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jul 7 00:13:13.980563 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:13:13.981363 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:13:13.983993 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:13:13.984053 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:13:13.987349 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:13:13.987421 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:13:13.987474 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:13:13.989197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:13:13.989244 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:13:13.993257 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:13:13.993306 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:13:13.994442 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:13:13.994489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:13:13.999075 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:13:14.000899 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:13:14.000977 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:13:14.028581 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:13:14.028780 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:13:14.060174 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:13:14.060299 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:13:14.061816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:13:14.061885 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:13:14.063198 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:13:14.063251 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:13:14.063479 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:13:14.063547 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:13:14.064267 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:13:14.064326 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:13:14.065121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:13:14.065185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:13:14.066668 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:13:14.075274 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:13:14.075332 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:13:14.081845 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:13:14.081925 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:13:14.085339 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jul 7 00:13:14.085395 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:13:14.088865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:13:14.088918 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:13:14.089900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:13:14.089948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:13:14.096417 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:13:14.096487 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 7 00:13:14.096536 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:13:14.096590 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:13:14.099694 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:13:14.099853 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:13:14.137695 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:13:14.137874 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:13:14.139064 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:13:14.140458 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:13:14.140520 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:13:14.141669 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:13:14.163642 systemd[1]: Switching root. Jul 7 00:13:14.202262 systemd-journald[221]: Journal stopped Jul 7 00:13:15.494740 systemd-journald[221]: Received SIGTERM from PID 1 (systemd). Jul 7 00:13:15.494825 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:13:15.494843 kernel: SELinux: policy capability open_perms=1 Jul 7 00:13:15.494854 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:13:15.494871 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:13:15.494883 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:13:15.494894 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:13:15.494906 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:13:15.494917 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:13:15.494928 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:13:15.494940 kernel: audit: type=1403 audit(1751847194.667:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:13:15.494961 systemd[1]: Successfully loaded SELinux policy in 54.268ms. Jul 7 00:13:15.494985 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.597ms. Jul 7 00:13:15.494998 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:13:15.495011 systemd[1]: Detected virtualization kvm. Jul 7 00:13:15.495024 systemd[1]: Detected architecture x86-64. Jul 7 00:13:15.495035 systemd[1]: Detected first boot. 
Jul 7 00:13:15.495047 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:13:15.495060 zram_generator::config[1131]: No configuration found. Jul 7 00:13:15.495075 kernel: Guest personality initialized and is inactive Jul 7 00:13:15.495086 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:13:15.495097 kernel: Initialized host personality Jul 7 00:13:15.495109 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:13:15.495120 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:13:15.495133 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:13:15.495145 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:13:15.495157 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:13:15.495169 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:13:15.495184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:13:15.495196 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:13:15.495209 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:13:15.495221 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:13:15.495233 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:13:15.495247 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:13:15.495259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:13:15.495272 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:13:15.495284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:13:15.495300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:13:15.495313 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:13:15.495325 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:13:15.495338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:13:15.495356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:13:15.495368 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:13:15.495381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:13:15.495395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:13:15.495407 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:13:15.495419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:13:15.495432 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:13:15.495444 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:13:15.495457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:13:15.495469 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:13:15.495482 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:13:15.495495 systemd[1]: Reached target swap.target - Swaps. 
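"Initializing machine ID from VM UUID" on first boot: under KVM, systemd derives the persistent machine ID from the hypervisor-provided DMI product UUID rather than generating a random one, so the same VM keeps the same ID across reinstalls of /etc. A hedged sketch of that derivation, assuming the usual sysfs path; the real implementation has more validation and falls back to a random ID when no usable UUID is exposed.

```python
def machine_id_from_dmi(path="/sys/class/dmi/id/product_uuid") -> str:
    """Approximate the VM-UUID-to-machine-id mapping (sketch only)."""
    with open(path) as f:
        vm_uuid = f.read().strip()
    # /etc/machine-id holds 32 lowercase hex digits, no dashes
    return vm_uuid.replace("-", "").lower()
```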
Jul 7 00:13:15.495507 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:13:15.495523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:13:15.495535 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:13:15.495547 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:13:15.495559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:13:15.495571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:13:15.495583 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:13:15.495595 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:13:15.495607 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:13:15.495619 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:13:15.495633 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:13:15.495645 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:13:15.495657 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:13:15.495669 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:13:15.495682 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:13:15.495703 systemd[1]: Reached target machines.target - Containers. Jul 7 00:13:15.495715 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:13:15.495727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:13:15.495742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:13:15.495754 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:13:15.495767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:13:15.495779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:13:15.495815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:13:15.495828 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:13:15.495846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:13:15.495861 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:13:15.495875 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:13:15.495888 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:13:15.495900 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:13:15.495912 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:13:15.495925 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:13:15.495937 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 7 00:13:15.495949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:13:15.495961 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:13:15.495973 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:13:15.495988 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:13:15.496003 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:13:15.496017 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:13:15.496030 systemd[1]: Stopped verity-setup.service. Jul 7 00:13:15.496042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:13:15.496054 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:13:15.496067 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:13:15.496078 kernel: ACPI: bus type drm_connector registered Jul 7 00:13:15.496090 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:13:15.496102 kernel: loop: module loaded Jul 7 00:13:15.496124 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:13:15.496138 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:13:15.496150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:13:15.496162 kernel: fuse: init (API version 7.41) Jul 7 00:13:15.496173 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:13:15.496185 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:13:15.496198 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:13:15.496210 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:13:15.496222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:13:15.496236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:13:15.496270 systemd-journald[1206]: Collecting audit messages is disabled. Jul 7 00:13:15.496298 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:13:15.496313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:13:15.496326 systemd-journald[1206]: Journal started Jul 7 00:13:15.496348 systemd-journald[1206]: Runtime Journal (/run/log/journal/38b1a032e0a045ec89fa47091d0f7526) is 6M, max 48.5M, 42.4M free. Jul 7 00:13:15.212090 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:13:15.239227 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 00:13:15.239772 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:13:15.498937 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:13:15.500117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:13:15.500360 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:13:15.501878 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:13:15.502099 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:13:15.503438 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:13:15.503673 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 7 00:13:15.505310 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:13:15.506780 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:13:15.508365 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:13:15.509948 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:13:15.526355 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:13:15.529255 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:13:15.531574 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:13:15.532834 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:13:15.532933 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:13:15.535281 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:13:15.540920 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:13:15.542114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:13:15.544995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:13:15.546129 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:13:15.548093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:13:15.549400 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:13:15.550956 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:13:15.553723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:13:15.563072 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:13:15.565914 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:13:15.570287 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:13:15.572062 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:13:15.585961 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:13:15.625443 systemd-journald[1206]: Time spent on flushing to /var/log/journal/38b1a032e0a045ec89fa47091d0f7526 is 15.477ms for 1075 entries. Jul 7 00:13:15.625443 systemd-journald[1206]: System Journal (/var/log/journal/38b1a032e0a045ec89fa47091d0f7526) is 8M, max 195.6M, 187.6M free. Jul 7 00:13:15.646890 systemd-journald[1206]: Received client request to flush runtime journal. Jul 7 00:13:15.651391 kernel: loop0: detected capacity change from 0 to 146240 Jul 7 00:13:15.649734 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:13:15.657428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:13:15.659027 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:13:15.664611 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
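The journald flush report above works out to roughly 14 µs per entry when moving the runtime journal to persistent storage:

```python
entries = 1075
flush_ms = 15.477
print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")  # ~14.4
```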
Jul 7 00:13:15.677597 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:13:15.681057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:13:15.687401 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 7 00:13:15.687422 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 7 00:13:15.697280 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:13:15.700668 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:13:15.711825 kernel: loop1: detected capacity change from 0 to 113872 Jul 7 00:13:15.718083 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:13:15.738882 kernel: loop2: detected capacity change from 0 to 221472 Jul 7 00:13:15.751012 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:13:15.755503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:13:15.767875 kernel: loop3: detected capacity change from 0 to 146240 Jul 7 00:13:15.798858 kernel: loop4: detected capacity change from 0 to 113872 Jul 7 00:13:15.802751 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jul 7 00:13:15.802781 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jul 7 00:13:15.811032 kernel: loop5: detected capacity change from 0 to 221472 Jul 7 00:13:15.809230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:13:15.820703 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 00:13:15.821465 (sd-merge)[1275]: Merged extensions into '/usr'. Jul 7 00:13:15.826437 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:13:15.826462 systemd[1]: Reloading... Jul 7 00:13:15.969827 zram_generator::config[1326]: No configuration found. Jul 7 00:13:16.029163 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:13:16.040929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:13:16.123204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:13:16.123664 systemd[1]: Reloading finished in 296 ms. Jul 7 00:13:16.153442 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:13:16.155119 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:13:16.171542 systemd[1]: Starting ensure-sysext.service... Jul 7 00:13:16.173628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:13:16.186118 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:13:16.186133 systemd[1]: Reloading... Jul 7 00:13:16.206081 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:13:16.206139 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:13:16.206435 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
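sd-merge combines the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' system extensions into /usr. In reality systemd-sysext sets up an overlayfs mount rather than copying files; the toy model below only illustrates per-path precedence, under the assumption that later (higher-precedence) layers win on conflicting paths.

```python
def merge_sysexts(base_usr: dict, extensions: list[dict]) -> dict:
    """Toy model of sysext layering: dict of path -> owning layer."""
    merged = dict(base_usr)
    for ext in extensions:   # e.g. containerd-flatcar, docker-flatcar, kubernetes
        merged.update(ext)   # last writer wins per path
    return merged

merged = merge_sysexts(
    {"/usr/bin/bash": "base"},
    [{"/usr/bin/ctr": "containerd-flatcar"}, {"/usr/bin/kubelet": "kubernetes"}],
)
```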
Jul 7 00:13:16.206694 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:13:16.207613 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:13:16.207917 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Jul 7 00:13:16.207991 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Jul 7 00:13:16.212255 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:13:16.212270 systemd-tmpfiles[1341]: Skipping /boot Jul 7 00:13:16.232189 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:13:16.232954 systemd-tmpfiles[1341]: Skipping /boot Jul 7 00:13:16.245831 zram_generator::config[1368]: No configuration found. Jul 7 00:13:16.422260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:13:16.504043 systemd[1]: Reloading finished in 317 ms. Jul 7 00:13:16.527547 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:13:16.547106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:13:16.556420 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:13:16.559098 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:13:16.561494 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:13:16.576824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:13:16.581093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:13:16.583744 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:13:16.590682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:13:16.591015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:13:16.597231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:13:16.601024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:13:16.604836 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:13:16.605982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:13:16.606087 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:13:16.613142 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:13:16.614651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:13:16.617078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:13:16.619410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 7 00:13:16.619634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:13:16.621359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:13:16.626140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:13:16.628335 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:13:16.628826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:13:16.635858 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Jul 7 00:13:16.637676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:13:16.637912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:13:16.639937 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:13:16.644941 augenrules[1441]: No rules Jul 7 00:13:16.642487 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:13:16.642761 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:13:16.653190 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:13:16.656043 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:13:16.663754 systemd[1]: Finished ensure-sysext.service. Jul 7 00:13:16.665297 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:13:16.667973 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:13:16.672010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:13:16.675998 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:13:16.678036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:13:16.679442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:13:16.682187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:13:16.686911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:13:16.691001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:13:16.692122 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:13:16.692166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:13:16.699000 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:13:16.708911 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 00:13:16.710924 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:13:16.710959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 7 00:13:16.711305 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:13:16.713582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:13:16.722336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:13:16.724244 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:13:16.725848 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:13:16.744510 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:13:16.745879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:13:16.747736 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:13:16.750644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:13:16.752596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:13:16.754264 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:13:16.755159 augenrules[1461]: /sbin/augenrules: No change Jul 7 00:13:16.764275 augenrules[1511]: No rules Jul 7 00:13:16.766547 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:13:16.767214 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:13:16.775678 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:13:16.893823 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:13:16.906839 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:13:16.909818 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:13:16.927362 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 7 00:13:16.927630 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 00:13:16.927874 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 00:13:16.932935 systemd-networkd[1477]: lo: Link UP Jul 7 00:13:16.932940 systemd-networkd[1477]: lo: Gained carrier Jul 7 00:13:16.934685 systemd-networkd[1477]: Enumeration completed Jul 7 00:13:16.935125 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:13:16.935138 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:13:16.935706 systemd-networkd[1477]: eth0: Link UP Jul 7 00:13:16.935944 systemd-networkd[1477]: eth0: Gained carrier Jul 7 00:13:16.935967 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:13:16.936872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 00:13:16.938403 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:13:16.941696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:13:16.948960 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:13:16.952178 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 7 00:13:16.952440 systemd-networkd[1477]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:13:16.953430 systemd-resolved[1410]: Positive Trust Anchors: Jul 7 00:13:16.953448 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:13:16.953481 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:13:16.953566 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:13:16.954782 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Jul 7 00:13:16.954920 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:13:17.367495 systemd-resolved[1410]: Defaulting to hostname 'linux'. Jul 7 00:13:17.367660 systemd-timesyncd[1481]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 00:13:17.367750 systemd-timesyncd[1481]: Initial clock synchronization to Mon 2025-07-07 00:13:17.367250 UTC. Jul 7 00:13:17.369601 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:13:17.371184 systemd[1]: Reached target network.target - Network. Jul 7 00:13:17.372214 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:13:17.373433 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:13:17.374625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:13:17.375961 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:13:17.377286 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:13:17.378944 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:13:17.380804 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:13:17.382076 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:13:17.383369 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:13:17.383399 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:13:17.384778 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:13:17.387108 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:13:17.389548 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:13:17.395176 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:13:17.396849 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:13:17.398675 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:13:17.404095 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
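The "Positive Trust Anchors" entry is systemd-resolved loading the root-zone DNSSEC trust anchor (the ". IN DS 20326 8 2 ..." record), while the negative anchors list the private-use zones it will never attempt to validate. systemd-timesyncd then steps the clock against the DHCP-provided server 10.0.0.1, which explains the jump in log timestamps from 00:13:16.95 to 00:13:17.36. Both can be inspected at runtime:

    resolvectl status             # per-link DNS servers and DNSSEC state
    timedatectl timesync-status   # contacted NTP server, offset, poll interval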
Jul 7 00:13:17.406802 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:13:17.409791 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:13:17.411454 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:13:17.413835 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:13:17.419523 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:13:17.422693 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:13:17.423729 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:13:17.423758 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:13:17.426762 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:13:17.430836 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:13:17.433859 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:13:17.437167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:13:17.442839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:13:17.444705 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:13:17.448821 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:13:17.451869 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:13:17.454405 jq[1554]: false Jul 7 00:13:17.455556 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:13:17.459775 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:13:17.463781 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:13:17.469741 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache Jul 7 00:13:17.469752 oslogin_cache_refresh[1556]: Refreshing passwd entry cache Jul 7 00:13:17.470836 extend-filesystems[1555]: Found /dev/vda6 Jul 7 00:13:17.475207 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:13:17.477203 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:13:17.477804 extend-filesystems[1555]: Found /dev/vda9 Jul 7 00:13:17.477750 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:13:17.478665 oslogin_cache_refresh[1556]: Failure getting users, quitting Jul 7 00:13:17.479317 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting Jul 7 00:13:17.479317 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:13:17.479317 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache Jul 7 00:13:17.478684 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
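sshd on this image is socket-activated: systemd holds the listeners (sshd.socket plus the AF_UNIX and AF_VSOCK variants created by systemd-ssh-generator) and spawns sshd per connection, so no long-running sshd daemon appears until the logins later in this log. The active listeners can be listed with:

    systemctl list-sockets --all | grep -i ssh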
Jul 7 00:13:17.478736 oslogin_cache_refresh[1556]: Refreshing group entry cache Jul 7 00:13:17.480781 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:13:17.484093 extend-filesystems[1555]: Checking size of /dev/vda9 Jul 7 00:13:17.484016 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:13:17.487053 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:13:17.487191 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting Jul 7 00:13:17.487183 oslogin_cache_refresh[1556]: Failure getting groups, quitting Jul 7 00:13:17.487256 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:13:17.487200 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:13:17.487423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:13:17.487770 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:13:17.492087 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:13:17.496029 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:13:17.499504 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:13:17.499892 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:13:17.502178 jq[1568]: true Jul 7 00:13:17.510474 extend-filesystems[1555]: Resized partition /dev/vda9 Jul 7 00:13:17.513249 extend-filesystems[1586]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:13:17.521599 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 00:13:17.538761 update_engine[1567]: I20250707 00:13:17.538686 1567 main.cc:92] Flatcar Update Engine starting Jul 7 00:13:17.542865 jq[1580]: true Jul 7 00:13:17.543144 tar[1571]: linux-amd64/helm Jul 7 00:13:17.548386 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:13:17.549658 dbus-daemon[1552]: [system] SELinux support is enabled Jul 7 00:13:17.550667 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:13:17.560904 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:13:17.563706 update_engine[1567]: I20250707 00:13:17.563652 1567 update_check_scheduler.cc:74] Next update check in 5m23s Jul 7 00:13:17.567709 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:13:17.568600 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 00:13:17.573002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:13:17.573035 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:13:17.589597 extend-filesystems[1586]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 00:13:17.589597 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:13:17.589597 extend-filesystems[1586]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
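The oslogin_cache_refresh failures above ("Failure getting users, quitting") most likely mean there is no Google OS Login backend to query on this QEMU-backed VM; that is an inference from the QEMU metadata agent seen later in this log, not something these lines state. The refresher then writes empty caches, which is harmless, and the cache files it maintains can be checked directly:

    ls -l /etc/oslogin_passwd.cache /etc/oslogin_group.cache   # empty without a backend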
Jul 7 00:13:17.614451 extend-filesystems[1555]: Resized filesystem in /dev/vda9 Jul 7 00:13:17.610825 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:13:17.610854 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:13:17.613666 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:13:17.613988 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:13:17.622170 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:13:17.623274 bash[1614]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:13:17.626893 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:13:17.628636 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:13:17.631449 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:13:17.767123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:13:17.781494 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:13:17.781864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:13:17.784561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:13:17.786991 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:13:17.787047 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:13:17.789936 systemd-logind[1564]: New seat seat0. Jul 7 00:13:17.791757 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:13:17.817657 kernel: kvm_amd: TSC scaling supported Jul 7 00:13:17.817697 kernel: kvm_amd: Nested Virtualization enabled Jul 7 00:13:17.817711 kernel: kvm_amd: Nested Paging enabled Jul 7 00:13:17.817739 kernel: kvm_amd: LBR virtualization supported Jul 7 00:13:17.971628 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 7 00:13:17.971738 kernel: kvm_amd: Virtual GIF supported Jul 7 00:13:18.022672 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:13:18.106671 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:13:18.111022 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:13:18.131605 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:13:18.136315 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:13:18.141053 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:13:18.141530 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:13:18.147207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:13:18.161023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:13:18.173749 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:13:18.177951 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:13:18.180821 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:13:18.183357 systemd[1]: Reached target getty.target - Login Prompts. 
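extend-filesystems grew the root filesystem online from 553472 to 1864699 4 KiB blocks, i.e. from about 2.1 GiB to about 7.1 GiB (1864699 * 4096 = 7637807104 bytes). The same grow can be done by hand on a mounted ext4 filesystem once the partition itself has been enlarged; a sketch:

    lsblk /dev/vda9        # confirm the enlarged partition
    resize2fs /dev/vda9    # online-resize ext4 to fill it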
Jul 7 00:13:18.301410 containerd[1593]: time="2025-07-07T00:13:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:13:18.305637 containerd[1593]: time="2025-07-07T00:13:18.305512644Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:13:18.318552 containerd[1593]: time="2025-07-07T00:13:18.318348787Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.081µs" Jul 7 00:13:18.318552 containerd[1593]: time="2025-07-07T00:13:18.318539564Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:13:18.318640 containerd[1593]: time="2025-07-07T00:13:18.318614325Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:13:18.318884 containerd[1593]: time="2025-07-07T00:13:18.318849526Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:13:18.318884 containerd[1593]: time="2025-07-07T00:13:18.318872519Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:13:18.318953 containerd[1593]: time="2025-07-07T00:13:18.318937972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319072 containerd[1593]: time="2025-07-07T00:13:18.319024644Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319072 containerd[1593]: time="2025-07-07T00:13:18.319042738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319393 containerd[1593]: time="2025-07-07T00:13:18.319355545Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319393 containerd[1593]: time="2025-07-07T00:13:18.319377005Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319393 containerd[1593]: time="2025-07-07T00:13:18.319388787Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319480 containerd[1593]: time="2025-07-07T00:13:18.319397493Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319543 containerd[1593]: time="2025-07-07T00:13:18.319517859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319857 containerd[1593]: time="2025-07-07T00:13:18.319820457Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319884 containerd[1593]: time="2025-07-07T00:13:18.319864189Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 7 00:13:18.319884 containerd[1593]: time="2025-07-07T00:13:18.319874748Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:13:18.319973 containerd[1593]: time="2025-07-07T00:13:18.319941624Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:13:18.320272 containerd[1593]: time="2025-07-07T00:13:18.320245854Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:13:18.320354 containerd[1593]: time="2025-07-07T00:13:18.320329090Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:13:18.326531 containerd[1593]: time="2025-07-07T00:13:18.326479725Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:13:18.326595 containerd[1593]: time="2025-07-07T00:13:18.326541601Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:13:18.326595 containerd[1593]: time="2025-07-07T00:13:18.326558352Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:13:18.326595 containerd[1593]: time="2025-07-07T00:13:18.326572689Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:13:18.326650 containerd[1593]: time="2025-07-07T00:13:18.326601073Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:13:18.326650 containerd[1593]: time="2025-07-07T00:13:18.326612825Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:13:18.326650 containerd[1593]: time="2025-07-07T00:13:18.326630287Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:13:18.326650 containerd[1593]: time="2025-07-07T00:13:18.326646448Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:13:18.326740 containerd[1593]: time="2025-07-07T00:13:18.326673939Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:13:18.326740 containerd[1593]: time="2025-07-07T00:13:18.326687435Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:13:18.326740 containerd[1593]: time="2025-07-07T00:13:18.326712552Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:13:18.326740 containerd[1593]: time="2025-07-07T00:13:18.326730926Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:13:18.326927 containerd[1593]: time="2025-07-07T00:13:18.326893822Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:13:18.326927 containerd[1593]: time="2025-07-07T00:13:18.326924209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:13:18.326972 containerd[1593]: time="2025-07-07T00:13:18.326951370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:13:18.326972 containerd[1593]: time="2025-07-07T00:13:18.326966378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jul 7 00:13:18.327008 containerd[1593]: time="2025-07-07T00:13:18.326976406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:13:18.327008 containerd[1593]: time="2025-07-07T00:13:18.326989902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:13:18.327008 containerd[1593]: time="2025-07-07T00:13:18.327001033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:13:18.327081 containerd[1593]: time="2025-07-07T00:13:18.327013807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:13:18.327081 containerd[1593]: time="2025-07-07T00:13:18.327027162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:13:18.327081 containerd[1593]: time="2025-07-07T00:13:18.327037010Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:13:18.327081 containerd[1593]: time="2025-07-07T00:13:18.327048772Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:13:18.327190 containerd[1593]: time="2025-07-07T00:13:18.327141757Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:13:18.327190 containerd[1593]: time="2025-07-07T00:13:18.327173716Z" level=info msg="Start snapshots syncer" Jul 7 00:13:18.327229 containerd[1593]: time="2025-07-07T00:13:18.327209093Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:13:18.327589 containerd[1593]: time="2025-07-07T00:13:18.327515928Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 
00:13:18.327742 containerd[1593]: time="2025-07-07T00:13:18.327617148Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:13:18.328963 containerd[1593]: time="2025-07-07T00:13:18.328927345Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:13:18.329089 containerd[1593]: time="2025-07-07T00:13:18.329054714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:13:18.329122 containerd[1593]: time="2025-07-07T00:13:18.329095360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:13:18.329122 containerd[1593]: time="2025-07-07T00:13:18.329118293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:13:18.329160 containerd[1593]: time="2025-07-07T00:13:18.329131388Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:13:18.329160 containerd[1593]: time="2025-07-07T00:13:18.329143871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:13:18.329160 containerd[1593]: time="2025-07-07T00:13:18.329156104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:13:18.329228 containerd[1593]: time="2025-07-07T00:13:18.329177234Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:13:18.329228 containerd[1593]: time="2025-07-07T00:13:18.329206599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:13:18.329228 containerd[1593]: time="2025-07-07T00:13:18.329218241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:13:18.329228 containerd[1593]: time="2025-07-07T00:13:18.329228009Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:13:18.329299 containerd[1593]: time="2025-07-07T00:13:18.329262143Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:13:18.329299 containerd[1593]: time="2025-07-07T00:13:18.329277832Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:13:18.329299 containerd[1593]: time="2025-07-07T00:13:18.329286158Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:13:18.329299 containerd[1593]: time="2025-07-07T00:13:18.329294474Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:13:18.329381 containerd[1593]: time="2025-07-07T00:13:18.329302128Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:13:18.329381 containerd[1593]: time="2025-07-07T00:13:18.329311195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:13:18.329381 containerd[1593]: time="2025-07-07T00:13:18.329323729Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:13:18.329381 
containerd[1593]: time="2025-07-07T00:13:18.329341332Z" level=info msg="runtime interface created" Jul 7 00:13:18.329381 containerd[1593]: time="2025-07-07T00:13:18.329346802Z" level=info msg="created NRI interface" Jul 7 00:13:18.329381 containerd[1593]: time="2025-07-07T00:13:18.329354576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:13:18.329381 containerd[1593]: time="2025-07-07T00:13:18.329366098Z" level=info msg="Connect containerd service" Jul 7 00:13:18.329520 containerd[1593]: time="2025-07-07T00:13:18.329389973Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:13:18.330443 containerd[1593]: time="2025-07-07T00:13:18.330400909Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:13:18.452368 tar[1571]: linux-amd64/LICENSE Jul 7 00:13:18.452518 tar[1571]: linux-amd64/README.md Jul 7 00:13:18.475113 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:13:18.539556 containerd[1593]: time="2025-07-07T00:13:18.539443668Z" level=info msg="Start subscribing containerd event" Jul 7 00:13:18.539743 containerd[1593]: time="2025-07-07T00:13:18.539567030Z" level=info msg="Start recovering state" Jul 7 00:13:18.539768 containerd[1593]: time="2025-07-07T00:13:18.539761925Z" level=info msg="Start event monitor" Jul 7 00:13:18.539811 containerd[1593]: time="2025-07-07T00:13:18.539793414Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:13:18.539811 containerd[1593]: time="2025-07-07T00:13:18.539807270Z" level=info msg="Start streaming server" Jul 7 00:13:18.539850 containerd[1593]: time="2025-07-07T00:13:18.539805577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:13:18.539877 containerd[1593]: time="2025-07-07T00:13:18.539830594Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:13:18.539877 containerd[1593]: time="2025-07-07T00:13:18.539864628Z" level=info msg="runtime interface starting up..." Jul 7 00:13:18.539918 containerd[1593]: time="2025-07-07T00:13:18.539881019Z" level=info msg="starting plugins..." Jul 7 00:13:18.539918 containerd[1593]: time="2025-07-07T00:13:18.539907168Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:13:18.540211 containerd[1593]: time="2025-07-07T00:13:18.539908180Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:13:18.540211 containerd[1593]: time="2025-07-07T00:13:18.540127080Z" level=info msg="containerd successfully booted in 0.239467s" Jul 7 00:13:18.540252 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:13:19.330882 systemd-networkd[1477]: eth0: Gained IPv6LL Jul 7 00:13:19.334939 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:13:19.337094 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:13:19.340286 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 00:13:19.343176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:19.355239 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:13:19.381817 systemd[1]: coreos-metadata.service: Deactivated successfully. 
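The one containerd error above is expected on first boot: the CRI plugin finds nothing in /etc/cni/net.d, and pod networking stays unconfigured until a CNI config is installed (typically by a network add-on after the node joins a cluster). Purely as an illustrative sketch, with a hypothetical file name and subnet that this node will not necessarily use, a minimal bridge conflist looks like:

    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]]
          }
        }
      ]
    }
    EOF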
Jul 7 00:13:19.382147 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 00:13:19.384076 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:13:19.386512 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:13:20.070809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:20.072653 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:13:20.074596 systemd[1]: Startup finished in 3.639s (kernel) + 7.030s (initrd) + 5.051s (userspace) = 15.720s. Jul 7 00:13:20.076962 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:20.487018 kubelet[1696]: E0707 00:13:20.486865 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:20.490952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:20.491173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:20.491570 systemd[1]: kubelet.service: Consumed 982ms CPU time, 265.6M memory peak. Jul 7 00:13:21.486079 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:13:21.487471 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:32864.service - OpenSSH per-connection server daemon (10.0.0.1:32864). Jul 7 00:13:21.558367 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 32864 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:21.560553 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:21.567937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:13:21.569143 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:13:21.575629 systemd-logind[1564]: New session 1 of user core. Jul 7 00:13:21.598554 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:13:21.601183 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:13:21.628711 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:13:21.631157 systemd-logind[1564]: New session c1 of user core. Jul 7 00:13:21.785048 systemd[1713]: Queued start job for default target default.target. Jul 7 00:13:21.802154 systemd[1713]: Created slice app.slice - User Application Slice. Jul 7 00:13:21.802183 systemd[1713]: Reached target paths.target - Paths. Jul 7 00:13:21.802231 systemd[1713]: Reached target timers.target - Timers. Jul 7 00:13:21.803954 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:13:21.816113 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:13:21.816269 systemd[1713]: Reached target sockets.target - Sockets. Jul 7 00:13:21.816341 systemd[1713]: Reached target basic.target - Basic System. Jul 7 00:13:21.816386 systemd[1713]: Reached target default.target - Main User Target. Jul 7 00:13:21.816425 systemd[1713]: Startup finished in 177ms. Jul 7 00:13:21.816694 systemd[1]: Started user@500.service - User Manager for UID 500. 
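The kubelet exit above is likewise a normal pre-provisioning state rather than a crash: kubelet.service is started at boot here, but /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so the unit fails and is restarted until that happens (the retry shows up below at 00:13:30). A sketch of confirming the node is simply waiting to be joined, using the standard kubeadm paths:

    test -f /var/lib/kubelet/config.yaml || echo "not yet initialized by kubeadm"
    journalctl -u kubelet --no-pager | tail -n 3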
Jul 7 00:13:21.818426 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:13:21.882729 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:32868.service - OpenSSH per-connection server daemon (10.0.0.1:32868). Jul 7 00:13:21.939432 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 32868 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:21.940957 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:21.946130 systemd-logind[1564]: New session 2 of user core. Jul 7 00:13:21.959781 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:13:22.015081 sshd[1726]: Connection closed by 10.0.0.1 port 32868 Jul 7 00:13:22.015481 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:22.028011 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:32868.service: Deactivated successfully. Jul 7 00:13:22.029944 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:13:22.030754 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:13:22.033821 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:32870.service - OpenSSH per-connection server daemon (10.0.0.1:32870). Jul 7 00:13:22.034396 systemd-logind[1564]: Removed session 2. Jul 7 00:13:22.074019 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 32870 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:22.075556 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:22.080141 systemd-logind[1564]: New session 3 of user core. Jul 7 00:13:22.096753 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:13:22.147025 sshd[1734]: Connection closed by 10.0.0.1 port 32870 Jul 7 00:13:22.147360 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:22.164236 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:32870.service: Deactivated successfully. Jul 7 00:13:22.166028 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:13:22.166857 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:13:22.172804 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:32874.service - OpenSSH per-connection server daemon (10.0.0.1:32874). Jul 7 00:13:22.173899 systemd-logind[1564]: Removed session 3. Jul 7 00:13:22.220022 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 32874 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:22.221662 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:22.226465 systemd-logind[1564]: New session 4 of user core. Jul 7 00:13:22.239727 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:13:22.294714 sshd[1742]: Connection closed by 10.0.0.1 port 32874 Jul 7 00:13:22.295164 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:22.303155 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:32874.service: Deactivated successfully. Jul 7 00:13:22.305007 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:13:22.305740 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:13:22.308786 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:32890.service - OpenSSH per-connection server daemon (10.0.0.1:32890). Jul 7 00:13:22.309353 systemd-logind[1564]: Removed session 4. 
Jul 7 00:13:22.361396 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 32890 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:22.362813 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:22.367590 systemd-logind[1564]: New session 5 of user core. Jul 7 00:13:22.383874 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:13:22.443349 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:13:22.443692 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:22.464743 sudo[1751]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:22.466443 sshd[1750]: Connection closed by 10.0.0.1 port 32890 Jul 7 00:13:22.466847 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:22.477406 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:32890.service: Deactivated successfully. Jul 7 00:13:22.479690 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:13:22.480495 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:13:22.484233 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:32904.service - OpenSSH per-connection server daemon (10.0.0.1:32904). Jul 7 00:13:22.484926 systemd-logind[1564]: Removed session 5. Jul 7 00:13:22.535190 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 32904 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:22.536934 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:22.542117 systemd-logind[1564]: New session 6 of user core. Jul 7 00:13:22.552765 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:13:22.611150 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:13:22.611505 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:22.619630 sudo[1762]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:22.626730 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:13:22.627083 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:22.637769 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:13:22.687761 augenrules[1784]: No rules Jul 7 00:13:22.689817 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:13:22.690151 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:13:22.691453 sudo[1761]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:22.693302 sshd[1760]: Connection closed by 10.0.0.1 port 32904 Jul 7 00:13:22.693692 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:22.702470 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:32904.service: Deactivated successfully. Jul 7 00:13:22.704296 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:13:22.705170 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:13:22.708142 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:32918.service - OpenSSH per-connection server daemon (10.0.0.1:32918). Jul 7 00:13:22.708952 systemd-logind[1564]: Removed session 6. 
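The sudo sequence above deletes the two shipped audit rule files and restarts audit-rules.service; augenrules assembles /etc/audit/audit.rules by concatenating /etc/audit/rules.d/*.rules, so with those files gone it correctly reports "No rules". Verifying that state:

    ls /etc/audit/rules.d/    # no *.rules files remain
    augenrules --check        # reports whether audit.rules needs a rebuild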
Jul 7 00:13:22.766653 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 32918 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:13:22.767989 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:13:22.772520 systemd-logind[1564]: New session 7 of user core. Jul 7 00:13:22.789707 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:13:22.844096 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:13:22.844436 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:13:23.158834 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:13:23.180020 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:13:24.055650 dockerd[1818]: time="2025-07-07T00:13:24.055535940Z" level=info msg="Starting up" Jul 7 00:13:24.056804 dockerd[1818]: time="2025-07-07T00:13:24.056757040Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:13:24.593444 dockerd[1818]: time="2025-07-07T00:13:24.593367066Z" level=info msg="Loading containers: start." Jul 7 00:13:24.604604 kernel: Initializing XFRM netlink socket Jul 7 00:13:24.871384 systemd-networkd[1477]: docker0: Link UP Jul 7 00:13:24.877713 dockerd[1818]: time="2025-07-07T00:13:24.877660477Z" level=info msg="Loading containers: done." Jul 7 00:13:24.894556 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck260035973-merged.mount: Deactivated successfully. Jul 7 00:13:24.895339 dockerd[1818]: time="2025-07-07T00:13:24.895296451Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:13:24.895433 dockerd[1818]: time="2025-07-07T00:13:24.895381561Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:13:24.895594 dockerd[1818]: time="2025-07-07T00:13:24.895523276Z" level=info msg="Initializing buildkit" Jul 7 00:13:24.930126 dockerd[1818]: time="2025-07-07T00:13:24.930074673Z" level=info msg="Completed buildkit initialization" Jul 7 00:13:24.938207 dockerd[1818]: time="2025-07-07T00:13:24.938181476Z" level=info msg="Daemon has completed initialization" Jul 7 00:13:24.938309 dockerd[1818]: time="2025-07-07T00:13:24.938235998Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:13:24.938617 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:13:25.890115 containerd[1593]: time="2025-07-07T00:13:25.890028088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 00:13:26.546966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165381309.mount: Deactivated successfully. 
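dockerd's overlay2 warning is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, the daemon avoids native overlay diffs, which slows image builds but leaves running containers unaffected. The active storage driver can be confirmed with:

    docker info --format '{{.Driver}}'    # expected: overlay2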
Jul 7 00:13:27.381043 containerd[1593]: time="2025-07-07T00:13:27.380964646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:27.381540 containerd[1593]: time="2025-07-07T00:13:27.381486565Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 7 00:13:27.382628 containerd[1593]: time="2025-07-07T00:13:27.382600704Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:27.384999 containerd[1593]: time="2025-07-07T00:13:27.384972713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:27.386019 containerd[1593]: time="2025-07-07T00:13:27.385978679Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.495873487s" Jul 7 00:13:27.386062 containerd[1593]: time="2025-07-07T00:13:27.386030677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 7 00:13:27.386794 containerd[1593]: time="2025-07-07T00:13:27.386745437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 00:13:28.507876 containerd[1593]: time="2025-07-07T00:13:28.507805685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:28.508597 containerd[1593]: time="2025-07-07T00:13:28.508549180Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 7 00:13:28.509737 containerd[1593]: time="2025-07-07T00:13:28.509702833Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:28.512158 containerd[1593]: time="2025-07-07T00:13:28.512113574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:28.513325 containerd[1593]: time="2025-07-07T00:13:28.513261737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.126461317s" Jul 7 00:13:28.513325 containerd[1593]: time="2025-07-07T00:13:28.513315588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 7 00:13:28.513894 
containerd[1593]: time="2025-07-07T00:13:28.513862905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 00:13:29.992365 containerd[1593]: time="2025-07-07T00:13:29.992283110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:29.993185 containerd[1593]: time="2025-07-07T00:13:29.993110913Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 7 00:13:29.994272 containerd[1593]: time="2025-07-07T00:13:29.994235702Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:29.996513 containerd[1593]: time="2025-07-07T00:13:29.996475192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:29.997403 containerd[1593]: time="2025-07-07T00:13:29.997343430Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.483451791s" Jul 7 00:13:29.997403 containerd[1593]: time="2025-07-07T00:13:29.997379097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 7 00:13:29.997906 containerd[1593]: time="2025-07-07T00:13:29.997878544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 00:13:30.742989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:13:30.745681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:31.217059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:31.222238 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:13:31.268904 kubelet[2098]: E0707 00:13:31.268784 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:13:31.275447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:13:31.275685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:13:31.276136 systemd[1]: kubelet.service: Consumed 244ms CPU time, 116.8M memory peak. Jul 7 00:13:31.614734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425619438.mount: Deactivated successfully. 
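The "Scheduled restart job, restart counter is at 1" entry is systemd's Restart= policy re-running kubelet after the earlier failure; it exits again for the same missing config.yaml. The effective policy and counter can be read from the unit (a sketch):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts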
Jul 7 00:13:32.401171 containerd[1593]: time="2025-07-07T00:13:32.401095894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:32.401853 containerd[1593]: time="2025-07-07T00:13:32.401813640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 7 00:13:32.402897 containerd[1593]: time="2025-07-07T00:13:32.402862988Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:32.404838 containerd[1593]: time="2025-07-07T00:13:32.404785043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:32.405559 containerd[1593]: time="2025-07-07T00:13:32.405519901Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.407607724s" Jul 7 00:13:32.405612 containerd[1593]: time="2025-07-07T00:13:32.405558433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 00:13:32.406269 containerd[1593]: time="2025-07-07T00:13:32.406184457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:13:33.179214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561422809.mount: Deactivated successfully. 
Jul 7 00:13:33.842091 containerd[1593]: time="2025-07-07T00:13:33.842026473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:33.842830 containerd[1593]: time="2025-07-07T00:13:33.842788092Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:13:33.844038 containerd[1593]: time="2025-07-07T00:13:33.844000475Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:33.846698 containerd[1593]: time="2025-07-07T00:13:33.846658490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:33.847597 containerd[1593]: time="2025-07-07T00:13:33.847549641Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.441330239s" Jul 7 00:13:33.847656 containerd[1593]: time="2025-07-07T00:13:33.847597110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:13:33.848160 containerd[1593]: time="2025-07-07T00:13:33.848117997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:13:34.362597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978214305.mount: Deactivated successfully. 
Jul 7 00:13:34.369466 containerd[1593]: time="2025-07-07T00:13:34.369417069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:34.370181 containerd[1593]: time="2025-07-07T00:13:34.370145064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:13:34.371412 containerd[1593]: time="2025-07-07T00:13:34.371365233Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:34.373439 containerd[1593]: time="2025-07-07T00:13:34.373397263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:13:34.374002 containerd[1593]: time="2025-07-07T00:13:34.373963916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 525.820752ms" Jul 7 00:13:34.374002 containerd[1593]: time="2025-07-07T00:13:34.373993091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:13:34.374600 containerd[1593]: time="2025-07-07T00:13:34.374468743Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 00:13:34.896336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953837040.mount: Deactivated successfully. 
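The pull timings logged here give a rough feel for registry throughput: the pause image moved 321138 bytes in about 526 ms, while the etcd pull just below moves 56780013 bytes in about 3.03 s. A quick back-of-the-envelope check, with the numbers taken straight from the log:

    awk 'BEGIN {
      printf "pause: %.2f MiB/s\n", 321138   / 0.525820752 / 1048576
      printf "etcd:  %.1f MiB/s\n", 56780013 / 3.025864777 / 1048576
    }'
    # pause: 0.58 MiB/s   (tiny image; time dominated by round-trips)
    # etcd:  17.9 MiB/s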
Jul 7 00:13:37.393922 containerd[1593]: time="2025-07-07T00:13:37.393837307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:37.394872 containerd[1593]: time="2025-07-07T00:13:37.394803629Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 7 00:13:37.396741 containerd[1593]: time="2025-07-07T00:13:37.396687872Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:37.399258 containerd[1593]: time="2025-07-07T00:13:37.399218138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:13:37.400417 containerd[1593]: time="2025-07-07T00:13:37.400364006Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.025864777s" Jul 7 00:13:37.400417 containerd[1593]: time="2025-07-07T00:13:37.400393933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 00:13:40.186617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:40.186834 systemd[1]: kubelet.service: Consumed 244ms CPU time, 116.8M memory peak. Jul 7 00:13:40.189407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:40.216311 systemd[1]: Reload requested from client PID 2255 ('systemctl') (unit session-7.scope)... Jul 7 00:13:40.216340 systemd[1]: Reloading... Jul 7 00:13:40.308619 zram_generator::config[2297]: No configuration found. Jul 7 00:13:40.510635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:13:40.631688 systemd[1]: Reloading finished in 414 ms. Jul 7 00:13:40.706378 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:13:40.706490 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:13:40.706847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:40.706909 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.2M memory peak. Jul 7 00:13:40.709208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:40.916601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:40.932912 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:13:40.968168 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:13:40.968168 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 7 00:13:40.968168 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:13:40.968472 kubelet[2345]: I0707 00:13:40.968248 2345 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:13:41.359255 kubelet[2345]: I0707 00:13:41.359186 2345 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:13:41.359255 kubelet[2345]: I0707 00:13:41.359230 2345 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:13:41.360597 kubelet[2345]: I0707 00:13:41.359924 2345 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:13:41.382768 kubelet[2345]: E0707 00:13:41.382700 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:41.385571 kubelet[2345]: I0707 00:13:41.385514 2345 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:13:41.393049 kubelet[2345]: I0707 00:13:41.393011 2345 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:13:41.399208 kubelet[2345]: I0707 00:13:41.399161 2345 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:13:41.399795 kubelet[2345]: I0707 00:13:41.399763 2345 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:13:41.399951 kubelet[2345]: I0707 00:13:41.399912 2345 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:13:41.400112 kubelet[2345]: I0707 00:13:41.399942 2345 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:13:41.400275 kubelet[2345]: I0707 00:13:41.400116 2345 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:13:41.400275 kubelet[2345]: I0707 00:13:41.400125 2345 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:13:41.400275 kubelet[2345]: I0707 00:13:41.400244 2345 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:41.402822 kubelet[2345]: I0707 00:13:41.402788 2345 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:13:41.402822 kubelet[2345]: I0707 00:13:41.402811 2345 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:13:41.402896 kubelet[2345]: I0707 00:13:41.402844 2345 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:13:41.402896 kubelet[2345]: I0707 00:13:41.402877 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:13:41.407495 kubelet[2345]: W0707 00:13:41.406840 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:41.407495 kubelet[2345]: E0707 00:13:41.406906 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:41.407495 kubelet[2345]: I0707 00:13:41.406979 2345 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:13:41.407495 kubelet[2345]: W0707 00:13:41.407214 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:41.407495 kubelet[2345]: E0707 00:13:41.407269 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:41.407495 kubelet[2345]: I0707 00:13:41.407441 2345 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:13:41.407692 kubelet[2345]: W0707 00:13:41.407511 2345 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:13:41.409782 kubelet[2345]: I0707 00:13:41.409738 2345 server.go:1274] "Started kubelet" Jul 7 00:13:41.410167 kubelet[2345]: I0707 00:13:41.410133 2345 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:13:41.411370 kubelet[2345]: I0707 00:13:41.411280 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:13:41.411660 kubelet[2345]: I0707 00:13:41.411636 2345 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:13:41.412055 kubelet[2345]: I0707 00:13:41.412041 2345 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:13:41.413372 kubelet[2345]: E0707 00:13:41.413347 2345 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:13:41.413757 kubelet[2345]: E0707 00:13:41.412593 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcfc424e1900d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:13:41.409714189 +0000 UTC m=+0.473042023,LastTimestamp:2025-07-07 00:13:41.409714189 +0000 UTC m=+0.473042023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:13:41.417481 kubelet[2345]: I0707 00:13:41.416769 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:13:41.417481 kubelet[2345]: I0707 00:13:41.416803 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:13:41.417481 kubelet[2345]: I0707 00:13:41.416859 2345 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:13:41.417481 kubelet[2345]: E0707 00:13:41.416880 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:41.417481 kubelet[2345]: E0707 00:13:41.417290 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Jul 7 00:13:41.417481 kubelet[2345]: W0707 00:13:41.417333 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:41.417481 kubelet[2345]: I0707 00:13:41.417348 2345 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:13:41.417481 kubelet[2345]: E0707 00:13:41.417379 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:41.417481 kubelet[2345]: I0707 00:13:41.417447 2345 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:13:41.418153 kubelet[2345]: I0707 00:13:41.418124 2345 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:13:41.418558 kubelet[2345]: I0707 00:13:41.418526 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:13:41.419822 kubelet[2345]: I0707 00:13:41.419800 2345 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:13:41.434981 kubelet[2345]: I0707 00:13:41.434926 2345 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 7 00:13:41.436221 kubelet[2345]: I0707 00:13:41.436194 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:13:41.436221 kubelet[2345]: I0707 00:13:41.436214 2345 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:13:41.436300 kubelet[2345]: I0707 00:13:41.436234 2345 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:13:41.436300 kubelet[2345]: E0707 00:13:41.436269 2345 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:13:41.436771 kubelet[2345]: I0707 00:13:41.436746 2345 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:13:41.436771 kubelet[2345]: I0707 00:13:41.436763 2345 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:13:41.436841 kubelet[2345]: I0707 00:13:41.436779 2345 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:41.437940 kubelet[2345]: W0707 00:13:41.437435 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:41.438360 kubelet[2345]: E0707 00:13:41.437998 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:41.517957 kubelet[2345]: E0707 00:13:41.517914 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:41.537326 kubelet[2345]: E0707 00:13:41.537277 2345 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:13:41.618144 kubelet[2345]: E0707 00:13:41.618014 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:41.618144 kubelet[2345]: E0707 00:13:41.618020 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" Jul 7 00:13:41.718544 kubelet[2345]: E0707 00:13:41.718230 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:41.737445 kubelet[2345]: E0707 00:13:41.737413 2345 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:13:41.819194 kubelet[2345]: E0707 00:13:41.819136 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:41.857560 kubelet[2345]: I0707 00:13:41.857502 2345 policy_none.go:49] "None policy: Start" Jul 7 00:13:41.858686 kubelet[2345]: I0707 00:13:41.858659 2345 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:13:41.858686 kubelet[2345]: I0707 00:13:41.858685 2345 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:13:41.867097 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 7 00:13:41.883716 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:13:41.888724 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:13:41.903793 kubelet[2345]: I0707 00:13:41.903729 2345 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:13:41.904429 kubelet[2345]: I0707 00:13:41.904045 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:13:41.904429 kubelet[2345]: I0707 00:13:41.904067 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:13:41.904429 kubelet[2345]: I0707 00:13:41.904357 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:13:41.906910 kubelet[2345]: E0707 00:13:41.906841 2345 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 00:13:42.006253 kubelet[2345]: I0707 00:13:42.006181 2345 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 00:13:42.006815 kubelet[2345]: E0707 00:13:42.006717 2345 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 7 00:13:42.019724 kubelet[2345]: E0707 00:13:42.019620 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Jul 7 00:13:42.152182 systemd[1]: Created slice kubepods-burstable-pode0deade1aeb424915f015b3219e78757.slice - libcontainer container kubepods-burstable-pode0deade1aeb424915f015b3219e78757.slice. Jul 7 00:13:42.197642 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 7 00:13:42.209014 kubelet[2345]: I0707 00:13:42.208954 2345 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 00:13:42.209639 kubelet[2345]: E0707 00:13:42.209549 2345 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 7 00:13:42.219393 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 7 00:13:42.222075 kubelet[2345]: I0707 00:13:42.221935 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0deade1aeb424915f015b3219e78757-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0deade1aeb424915f015b3219e78757\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:13:42.222075 kubelet[2345]: I0707 00:13:42.221994 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0deade1aeb424915f015b3219e78757-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e0deade1aeb424915f015b3219e78757\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:13:42.222075 kubelet[2345]: I0707 00:13:42.222042 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:42.222075 kubelet[2345]: I0707 00:13:42.222078 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:42.222425 kubelet[2345]: I0707 00:13:42.222107 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:13:42.222425 kubelet[2345]: I0707 00:13:42.222144 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0deade1aeb424915f015b3219e78757-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0deade1aeb424915f015b3219e78757\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:13:42.222425 kubelet[2345]: I0707 00:13:42.222173 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:42.222425 kubelet[2345]: I0707 00:13:42.222189 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:42.222425 kubelet[2345]: I0707 00:13:42.222206 2345 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:42.489969 kubelet[2345]: E0707 00:13:42.488235 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:42.490983 containerd[1593]: time="2025-07-07T00:13:42.490864682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e0deade1aeb424915f015b3219e78757,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:42.515957 kubelet[2345]: E0707 00:13:42.515883 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:42.516631 containerd[1593]: time="2025-07-07T00:13:42.516549470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:42.524076 kubelet[2345]: E0707 00:13:42.524028 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:42.524481 containerd[1593]: time="2025-07-07T00:13:42.524446650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:42.611055 kubelet[2345]: I0707 00:13:42.610967 2345 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 00:13:42.611600 kubelet[2345]: E0707 00:13:42.611543 2345 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jul 7 00:13:42.630794 containerd[1593]: time="2025-07-07T00:13:42.630698140Z" level=info msg="connecting to shim 432413c0aaebea6e5b290a261a968bf9b98e609ae8e802ab4a0e3573e3b130c7" address="unix:///run/containerd/s/5589a0bedf01b452fd51052d314951f8702785d3afcc5958476aacc89851a7b7" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:13:42.632788 containerd[1593]: time="2025-07-07T00:13:42.632726404Z" level=info msg="connecting to shim 6a011eccfd520d9bbb6184b57e80d841cef9f69b6b1bf2286c40127f9d5affb0" address="unix:///run/containerd/s/4e9bff1d6d7b6a9466766a26297719454a194b6aceeb24ddb36a9e6f0a50d644" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:13:42.635262 kubelet[2345]: W0707 00:13:42.635191 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:42.635262 kubelet[2345]: E0707 00:13:42.635272 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:42.635734 kubelet[2345]: W0707 00:13:42.635191 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:42.635734 kubelet[2345]: E0707 
00:13:42.635484 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:42.646885 containerd[1593]: time="2025-07-07T00:13:42.646807361Z" level=info msg="connecting to shim bb50a3433d860ac1f4a9f66c0214860d0bcef2c3b7aabd07b7dd42de48d3b5e5" address="unix:///run/containerd/s/1f204eaa24fdfaf901d353958798f89b7b71e57fadb940cd05c9564d9e3582f2" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:13:42.648424 kubelet[2345]: W0707 00:13:42.648238 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:42.648655 kubelet[2345]: E0707 00:13:42.648631 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:13:42.675854 systemd[1]: Started cri-containerd-6a011eccfd520d9bbb6184b57e80d841cef9f69b6b1bf2286c40127f9d5affb0.scope - libcontainer container 6a011eccfd520d9bbb6184b57e80d841cef9f69b6b1bf2286c40127f9d5affb0. Jul 7 00:13:42.681983 systemd[1]: Started cri-containerd-432413c0aaebea6e5b290a261a968bf9b98e609ae8e802ab4a0e3573e3b130c7.scope - libcontainer container 432413c0aaebea6e5b290a261a968bf9b98e609ae8e802ab4a0e3573e3b130c7. Jul 7 00:13:42.683948 systemd[1]: Started cri-containerd-bb50a3433d860ac1f4a9f66c0214860d0bcef2c3b7aabd07b7dd42de48d3b5e5.scope - libcontainer container bb50a3433d860ac1f4a9f66c0214860d0bcef2c3b7aabd07b7dd42de48d3b5e5. 
Jul 7 00:13:42.738971 containerd[1593]: time="2025-07-07T00:13:42.738916899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e0deade1aeb424915f015b3219e78757,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a011eccfd520d9bbb6184b57e80d841cef9f69b6b1bf2286c40127f9d5affb0\"" Jul 7 00:13:42.742080 kubelet[2345]: E0707 00:13:42.741968 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:42.743905 containerd[1593]: time="2025-07-07T00:13:42.743857174Z" level=info msg="CreateContainer within sandbox \"6a011eccfd520d9bbb6184b57e80d841cef9f69b6b1bf2286c40127f9d5affb0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:13:42.749218 containerd[1593]: time="2025-07-07T00:13:42.749159117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb50a3433d860ac1f4a9f66c0214860d0bcef2c3b7aabd07b7dd42de48d3b5e5\"" Jul 7 00:13:42.749887 kubelet[2345]: E0707 00:13:42.749861 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:42.751296 containerd[1593]: time="2025-07-07T00:13:42.751251160Z" level=info msg="CreateContainer within sandbox \"bb50a3433d860ac1f4a9f66c0214860d0bcef2c3b7aabd07b7dd42de48d3b5e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:13:42.752741 containerd[1593]: time="2025-07-07T00:13:42.752708112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"432413c0aaebea6e5b290a261a968bf9b98e609ae8e802ab4a0e3573e3b130c7\"" Jul 7 00:13:42.753149 kubelet[2345]: E0707 00:13:42.753121 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:42.754193 containerd[1593]: time="2025-07-07T00:13:42.754164403Z" level=info msg="CreateContainer within sandbox \"432413c0aaebea6e5b290a261a968bf9b98e609ae8e802ab4a0e3573e3b130c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:13:42.758681 containerd[1593]: time="2025-07-07T00:13:42.758650887Z" level=info msg="Container 4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:13:42.761687 containerd[1593]: time="2025-07-07T00:13:42.761645212Z" level=info msg="Container 174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:13:42.765011 kubelet[2345]: W0707 00:13:42.764938 2345 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jul 7 00:13:42.765098 kubelet[2345]: E0707 00:13:42.765012 2345 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:13:42.767711 containerd[1593]: time="2025-07-07T00:13:42.767661034Z" level=info msg="CreateContainer within sandbox \"6a011eccfd520d9bbb6184b57e80d841cef9f69b6b1bf2286c40127f9d5affb0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0\"" Jul 7 00:13:42.768304 containerd[1593]: time="2025-07-07T00:13:42.768277741Z" level=info msg="StartContainer for \"4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0\"" Jul 7 00:13:42.769462 containerd[1593]: time="2025-07-07T00:13:42.769439910Z" level=info msg="connecting to shim 4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0" address="unix:///run/containerd/s/4e9bff1d6d7b6a9466766a26297719454a194b6aceeb24ddb36a9e6f0a50d644" protocol=ttrpc version=3 Jul 7 00:13:42.773324 containerd[1593]: time="2025-07-07T00:13:42.773247521Z" level=info msg="CreateContainer within sandbox \"bb50a3433d860ac1f4a9f66c0214860d0bcef2c3b7aabd07b7dd42de48d3b5e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8\"" Jul 7 00:13:42.774359 containerd[1593]: time="2025-07-07T00:13:42.774316906Z" level=info msg="StartContainer for \"174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8\"" Jul 7 00:13:42.775711 containerd[1593]: time="2025-07-07T00:13:42.775639346Z" level=info msg="connecting to shim 174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8" address="unix:///run/containerd/s/1f204eaa24fdfaf901d353958798f89b7b71e57fadb940cd05c9564d9e3582f2" protocol=ttrpc version=3 Jul 7 00:13:42.776525 containerd[1593]: time="2025-07-07T00:13:42.776481866Z" level=info msg="Container 7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:13:42.784353 containerd[1593]: time="2025-07-07T00:13:42.784306841Z" level=info msg="CreateContainer within sandbox \"432413c0aaebea6e5b290a261a968bf9b98e609ae8e802ab4a0e3573e3b130c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260\"" Jul 7 00:13:42.785208 containerd[1593]: time="2025-07-07T00:13:42.785174428Z" level=info msg="StartContainer for \"7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260\"" Jul 7 00:13:42.787469 containerd[1593]: time="2025-07-07T00:13:42.787426882Z" level=info msg="connecting to shim 7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260" address="unix:///run/containerd/s/5589a0bedf01b452fd51052d314951f8702785d3afcc5958476aacc89851a7b7" protocol=ttrpc version=3 Jul 7 00:13:42.792798 systemd[1]: Started cri-containerd-4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0.scope - libcontainer container 4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0. Jul 7 00:13:42.797671 systemd[1]: Started cri-containerd-174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8.scope - libcontainer container 174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8. Jul 7 00:13:42.808872 systemd[1]: Started cri-containerd-7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260.scope - libcontainer container 7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260.
Jul 7 00:13:42.821057 kubelet[2345]: E0707 00:13:42.821008 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="1.6s" Jul 7 00:13:42.865614 containerd[1593]: time="2025-07-07T00:13:42.864954234Z" level=info msg="StartContainer for \"174a36e9d483aca7f3d2a9968d5084666b1725a6fc8c41d98ee4d16d4a5f50d8\" returns successfully" Jul 7 00:13:42.868165 containerd[1593]: time="2025-07-07T00:13:42.868117536Z" level=info msg="StartContainer for \"4eb80912d2d95220ee6197a73979071ed1441826143c3dbdaf05cc9bc262e8c0\" returns successfully" Jul 7 00:13:42.873608 containerd[1593]: time="2025-07-07T00:13:42.873266833Z" level=info msg="StartContainer for \"7cd5f8ebc05d9a575ffba3c33b247830d6cbc4e410945143f9b79ab7d22b9260\" returns successfully" Jul 7 00:13:43.413649 kubelet[2345]: I0707 00:13:43.413622 2345 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 00:13:43.463995 kubelet[2345]: E0707 00:13:43.463924 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:43.530956 kubelet[2345]: E0707 00:13:43.530923 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:43.540253 kubelet[2345]: E0707 00:13:43.540167 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:44.421955 kubelet[2345]: I0707 00:13:44.421828 2345 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 00:13:44.421955 kubelet[2345]: E0707 00:13:44.421887 2345 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 00:13:44.444267 kubelet[2345]: E0707 00:13:44.444086 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:44.536983 kubelet[2345]: E0707 00:13:44.536937 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:44.545090 kubelet[2345]: E0707 00:13:44.545044 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:44.645740 kubelet[2345]: E0707 00:13:44.645680 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:44.746480 kubelet[2345]: E0707 00:13:44.746339 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:44.847146 kubelet[2345]: E0707 00:13:44.847072 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:44.948201 kubelet[2345]: E0707 00:13:44.948153 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:45.049358 kubelet[2345]: E0707 00:13:45.049242 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 00:13:45.149868 kubelet[2345]: E0707 00:13:45.149814 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:45.250446 kubelet[2345]: E0707 00:13:45.250395 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:45.303600 kubelet[2345]: E0707 00:13:45.303458 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:45.351101 kubelet[2345]: E0707 00:13:45.351045 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:45.451418 kubelet[2345]: E0707 00:13:45.451360 2345 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:45.858604 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... Jul 7 00:13:45.858633 systemd[1]: Reloading... Jul 7 00:13:45.955638 zram_generator::config[2670]: No configuration found. Jul 7 00:13:46.044451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:13:46.178818 systemd[1]: Reloading finished in 319 ms. Jul 7 00:13:46.210773 kubelet[2345]: I0707 00:13:46.210708 2345 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:13:46.210800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:46.236072 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:13:46.236386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:46.236446 systemd[1]: kubelet.service: Consumed 1.000s CPU time, 131.3M memory peak. Jul 7 00:13:46.239547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:13:46.463115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:13:46.475046 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:13:46.526991 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:13:46.526991 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:13:46.526991 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:13:46.527430 kubelet[2709]: I0707 00:13:46.527067 2709 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:13:46.534161 kubelet[2709]: I0707 00:13:46.534134 2709 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:13:46.534161 kubelet[2709]: I0707 00:13:46.534155 2709 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:13:46.534451 kubelet[2709]: I0707 00:13:46.534422 2709 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:13:46.535971 kubelet[2709]: I0707 00:13:46.535942 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:13:46.537731 kubelet[2709]: I0707 00:13:46.537663 2709 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:13:46.541170 kubelet[2709]: I0707 00:13:46.541141 2709 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:13:46.548416 kubelet[2709]: I0707 00:13:46.548376 2709 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:13:46.548518 kubelet[2709]: I0707 00:13:46.548501 2709 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:13:46.548701 kubelet[2709]: I0707 00:13:46.548646 2709 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:13:46.548918 kubelet[2709]: I0707 00:13:46.548689 2709 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:13:46.549045 kubelet[2709]: I0707 00:13:46.548925 2709 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:13:46.549045 kubelet[2709]: I0707 00:13:46.548939 2709 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 
00:13:46.549045 kubelet[2709]: I0707 00:13:46.548969 2709 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:46.549133 kubelet[2709]: I0707 00:13:46.549076 2709 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:13:46.549133 kubelet[2709]: I0707 00:13:46.549088 2709 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:13:46.549133 kubelet[2709]: I0707 00:13:46.549118 2709 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:13:46.549133 kubelet[2709]: I0707 00:13:46.549128 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:13:46.549850 kubelet[2709]: I0707 00:13:46.549818 2709 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:13:46.550224 kubelet[2709]: I0707 00:13:46.550185 2709 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:13:46.550677 kubelet[2709]: I0707 00:13:46.550636 2709 server.go:1274] "Started kubelet" Jul 7 00:13:46.552146 kubelet[2709]: I0707 00:13:46.552111 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:13:46.552406 kubelet[2709]: I0707 00:13:46.552383 2709 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:13:46.552501 kubelet[2709]: I0707 00:13:46.552465 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:13:46.553946 kubelet[2709]: I0707 00:13:46.553906 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:13:46.554650 kubelet[2709]: I0707 00:13:46.554234 2709 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:13:46.555673 kubelet[2709]: I0707 00:13:46.554978 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:13:46.555673 kubelet[2709]: I0707 00:13:46.555459 2709 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:13:46.555673 kubelet[2709]: I0707 00:13:46.555549 2709 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:13:46.557961 kubelet[2709]: I0707 00:13:46.557939 2709 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:13:46.559312 kubelet[2709]: E0707 00:13:46.559258 2709 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:13:46.559674 kubelet[2709]: I0707 00:13:46.559638 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:13:46.560888 kubelet[2709]: E0707 00:13:46.560849 2709 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:13:46.564660 kubelet[2709]: I0707 00:13:46.564632 2709 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:13:46.564660 kubelet[2709]: I0707 00:13:46.564654 2709 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:13:46.580789 kubelet[2709]: I0707 00:13:46.580704 2709 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 7 00:13:46.583278 kubelet[2709]: I0707 00:13:46.583252 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:13:46.583278 kubelet[2709]: I0707 00:13:46.583282 2709 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:13:46.583344 kubelet[2709]: I0707 00:13:46.583302 2709 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:13:46.583393 kubelet[2709]: E0707 00:13:46.583345 2709 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:13:46.602816 kubelet[2709]: I0707 00:13:46.602792 2709 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:13:46.602949 kubelet[2709]: I0707 00:13:46.602909 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:13:46.602949 kubelet[2709]: I0707 00:13:46.602931 2709 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:13:46.603143 kubelet[2709]: I0707 00:13:46.603059 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:13:46.603143 kubelet[2709]: I0707 00:13:46.603070 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:13:46.603143 kubelet[2709]: I0707 00:13:46.603088 2709 policy_none.go:49] "None policy: Start" Jul 7 00:13:46.603595 kubelet[2709]: I0707 00:13:46.603555 2709 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:13:46.603595 kubelet[2709]: I0707 00:13:46.603596 2709 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:13:46.603764 kubelet[2709]: I0707 00:13:46.603747 2709 state_mem.go:75] "Updated machine memory state" Jul 7 00:13:46.608223 kubelet[2709]: I0707 00:13:46.608192 2709 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:13:46.608395 kubelet[2709]: I0707 00:13:46.608371 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:13:46.608435 kubelet[2709]: I0707 00:13:46.608391 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:13:46.608620 kubelet[2709]: I0707 00:13:46.608544 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:13:46.714416 kubelet[2709]: I0707 00:13:46.714168 2709 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 00:13:46.720933 kubelet[2709]: I0707 00:13:46.720884 2709 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 7 00:13:46.721115 kubelet[2709]: I0707 00:13:46.720964 2709 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 00:13:46.759388 kubelet[2709]: I0707 00:13:46.759338 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:46.759388 kubelet[2709]: I0707 00:13:46.759381 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0deade1aeb424915f015b3219e78757-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0deade1aeb424915f015b3219e78757\") " pod="kube-system/kube-apiserver-localhost" Jul 7 
00:13:46.759602 kubelet[2709]: I0707 00:13:46.759401 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0deade1aeb424915f015b3219e78757-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e0deade1aeb424915f015b3219e78757\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:13:46.759602 kubelet[2709]: I0707 00:13:46.759419 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:46.759602 kubelet[2709]: I0707 00:13:46.759435 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:46.759602 kubelet[2709]: I0707 00:13:46.759449 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:13:46.759602 kubelet[2709]: I0707 00:13:46.759463 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0deade1aeb424915f015b3219e78757-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0deade1aeb424915f015b3219e78757\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:13:46.759738 kubelet[2709]: I0707 00:13:46.759478 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:46.759738 kubelet[2709]: I0707 00:13:46.759492 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:13:46.861023 sudo[2747]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:13:46.861375 sudo[2747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:13:46.992223 kubelet[2709]: E0707 00:13:46.992098 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:46.992379 kubelet[2709]: E0707 00:13:46.992234 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:46.992379 kubelet[2709]: E0707 00:13:46.992324 2709 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:47.445281 sudo[2747]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:47.549676 kubelet[2709]: I0707 00:13:47.549633 2709 apiserver.go:52] "Watching apiserver" Jul 7 00:13:47.555898 kubelet[2709]: I0707 00:13:47.555827 2709 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:13:47.593744 kubelet[2709]: E0707 00:13:47.593616 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:47.594658 kubelet[2709]: E0707 00:13:47.593826 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:47.600877 kubelet[2709]: E0707 00:13:47.600192 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:13:47.600877 kubelet[2709]: E0707 00:13:47.600318 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:47.624535 kubelet[2709]: I0707 00:13:47.623958 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.623943357 podStartE2EDuration="1.623943357s" podCreationTimestamp="2025-07-07 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:47.615301591 +0000 UTC m=+1.134637682" watchObservedRunningTime="2025-07-07 00:13:47.623943357 +0000 UTC m=+1.143279448" Jul 7 00:13:47.633384 kubelet[2709]: I0707 00:13:47.633324 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.633301737 podStartE2EDuration="1.633301737s" podCreationTimestamp="2025-07-07 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:47.624372462 +0000 UTC m=+1.143708553" watchObservedRunningTime="2025-07-07 00:13:47.633301737 +0000 UTC m=+1.152637828" Jul 7 00:13:47.633969 kubelet[2709]: I0707 00:13:47.633515 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.63350591 podStartE2EDuration="1.63350591s" podCreationTimestamp="2025-07-07 00:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:47.63306825 +0000 UTC m=+1.152404341" watchObservedRunningTime="2025-07-07 00:13:47.63350591 +0000 UTC m=+1.152842011" Jul 7 00:13:48.594774 kubelet[2709]: E0707 00:13:48.594711 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:48.965924 sudo[1796]: pam_unix(sudo:session): session closed for user root Jul 7 00:13:48.967467 sshd[1795]: Connection closed by 10.0.0.1 port 32918 Jul 7 00:13:48.968058 
sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jul 7 00:13:48.973692 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:32918.service: Deactivated successfully. Jul 7 00:13:48.976143 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:13:48.976364 systemd[1]: session-7.scope: Consumed 5.074s CPU time, 261.2M memory peak. Jul 7 00:13:48.977706 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:13:48.979125 systemd-logind[1564]: Removed session 7. Jul 7 00:13:49.595908 kubelet[2709]: E0707 00:13:49.595859 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:52.613712 kubelet[2709]: E0707 00:13:52.613641 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:52.766095 kubelet[2709]: I0707 00:13:52.766057 2709 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:13:52.766395 containerd[1593]: time="2025-07-07T00:13:52.766355201Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:13:52.766795 kubelet[2709]: I0707 00:13:52.766515 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:13:53.600537 kubelet[2709]: E0707 00:13:53.600491 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:53.729668 systemd[1]: Created slice kubepods-besteffort-pod73508913_568d_46c1_967a_af6d170334ad.slice - libcontainer container kubepods-besteffort-pod73508913_568d_46c1_967a_af6d170334ad.slice. Jul 7 00:13:53.747399 systemd[1]: Created slice kubepods-burstable-pod247daba8_969a_4f97_b0ed_5fc6839399b8.slice - libcontainer container kubepods-burstable-pod247daba8_969a_4f97_b0ed_5fc6839399b8.slice. 
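The "Nameserver limits exceeded" errors from kubelet's dns.go recur throughout this log. They fire because the host's /etc/resolv.conf lists more nameservers than the resolver's historical three-server limit, which kubelet enforces: the applied line keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8, and at least one further server is dropped. A minimal sketch of that cap, assuming a standard resolv.conf format (the file name and helper here are illustrative, not kubelet's actual code):

    // resolvcheck.go: a minimal sketch of the 3-nameserver cap behind
    // kubelet's "Nameserver limits exceeded" warning. Illustrative only;
    // the real check lives in kubelet's DNS configurer.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // classic glibc resolver limit

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // kubelet applies the first three and reports the rest as omitted
            fmt.Printf("Nameserver limits exceeded: applied %v, omitted %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }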
Jul 7 00:13:53.852713 kubelet[2709]: I0707 00:13:53.852549 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/247daba8-969a-4f97-b0ed-5fc6839399b8-clustermesh-secrets\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.852713 kubelet[2709]: I0707 00:13:53.852616 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-net\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.852713 kubelet[2709]: I0707 00:13:53.852666 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-config-path\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853212 kubelet[2709]: I0707 00:13:53.852766 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flps6\" (UniqueName: \"kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-kube-api-access-flps6\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853212 kubelet[2709]: I0707 00:13:53.852831 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfcc\" (UniqueName: \"kubernetes.io/projected/73508913-568d-46c1-967a-af6d170334ad-kube-api-access-mdfcc\") pod \"kube-proxy-pvfcw\" (UID: \"73508913-568d-46c1-967a-af6d170334ad\") " pod="kube-system/kube-proxy-pvfcw" Jul 7 00:13:53.853212 kubelet[2709]: I0707 00:13:53.852863 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-bpf-maps\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853212 kubelet[2709]: I0707 00:13:53.852890 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-cgroup\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853212 kubelet[2709]: I0707 00:13:53.852905 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-lib-modules\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853212 kubelet[2709]: I0707 00:13:53.852923 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-run\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853342 kubelet[2709]: I0707 00:13:53.852948 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-xtables-lock\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853342 kubelet[2709]: I0707 00:13:53.852999 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73508913-568d-46c1-967a-af6d170334ad-kube-proxy\") pod \"kube-proxy-pvfcw\" (UID: \"73508913-568d-46c1-967a-af6d170334ad\") " pod="kube-system/kube-proxy-pvfcw" Jul 7 00:13:53.853342 kubelet[2709]: I0707 00:13:53.853035 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cni-path\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853342 kubelet[2709]: I0707 00:13:53.853060 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-hubble-tls\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853342 kubelet[2709]: I0707 00:13:53.853082 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73508913-568d-46c1-967a-af6d170334ad-xtables-lock\") pod \"kube-proxy-pvfcw\" (UID: \"73508913-568d-46c1-967a-af6d170334ad\") " pod="kube-system/kube-proxy-pvfcw" Jul 7 00:13:53.853342 kubelet[2709]: I0707 00:13:53.853097 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73508913-568d-46c1-967a-af6d170334ad-lib-modules\") pod \"kube-proxy-pvfcw\" (UID: \"73508913-568d-46c1-967a-af6d170334ad\") " pod="kube-system/kube-proxy-pvfcw" Jul 7 00:13:53.853469 kubelet[2709]: I0707 00:13:53.853111 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-hostproc\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853469 kubelet[2709]: I0707 00:13:53.853126 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-etc-cni-netd\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:53.853469 kubelet[2709]: I0707 00:13:53.853142 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-kernel\") pod \"cilium-nfnwt\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " pod="kube-system/cilium-nfnwt" Jul 7 00:13:54.507495 systemd[1]: Created slice kubepods-besteffort-pod58a66540_bae9_49c6_b6da_205ee80eb0ec.slice - libcontainer container kubepods-besteffort-pod58a66540_bae9_49c6_b6da_205ee80eb0ec.slice. 
Jul 7 00:13:54.558266 kubelet[2709]: I0707 00:13:54.558199 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzs7v\" (UniqueName: \"kubernetes.io/projected/58a66540-bae9-49c6-b6da-205ee80eb0ec-kube-api-access-nzs7v\") pod \"cilium-operator-5d85765b45-ksp95\" (UID: \"58a66540-bae9-49c6-b6da-205ee80eb0ec\") " pod="kube-system/cilium-operator-5d85765b45-ksp95" Jul 7 00:13:54.558266 kubelet[2709]: I0707 00:13:54.558245 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a66540-bae9-49c6-b6da-205ee80eb0ec-cilium-config-path\") pod \"cilium-operator-5d85765b45-ksp95\" (UID: \"58a66540-bae9-49c6-b6da-205ee80eb0ec\") " pod="kube-system/cilium-operator-5d85765b45-ksp95" Jul 7 00:13:54.645100 kubelet[2709]: E0707 00:13:54.645049 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:54.645699 containerd[1593]: time="2025-07-07T00:13:54.645629309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvfcw,Uid:73508913-568d-46c1-967a-af6d170334ad,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:54.650399 kubelet[2709]: E0707 00:13:54.650360 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:54.650843 containerd[1593]: time="2025-07-07T00:13:54.650808901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nfnwt,Uid:247daba8-969a-4f97-b0ed-5fc6839399b8,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:54.810701 kubelet[2709]: E0707 00:13:54.810642 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:54.811413 containerd[1593]: time="2025-07-07T00:13:54.811376334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ksp95,Uid:58a66540-bae9-49c6-b6da-205ee80eb0ec,Namespace:kube-system,Attempt:0,}" Jul 7 00:13:54.889541 containerd[1593]: time="2025-07-07T00:13:54.889463691Z" level=info msg="connecting to shim e663aafc74d634c48ba961de8fde993b5b2037d6a928db8b73e577c9456b2313" address="unix:///run/containerd/s/2811d1c0f37d8476753e3ed2e8b854fb6fe98793dd88474457c763800b52edd1" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:13:54.896756 containerd[1593]: time="2025-07-07T00:13:54.896686235Z" level=info msg="connecting to shim d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db" address="unix:///run/containerd/s/4f2dc71d08952ea513db76cc28d4c71dafd48bdd72954164d4f1341c7f666a78" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:13:54.921841 systemd[1]: Started cri-containerd-e663aafc74d634c48ba961de8fde993b5b2037d6a928db8b73e577c9456b2313.scope - libcontainer container e663aafc74d634c48ba961de8fde993b5b2037d6a928db8b73e577c9456b2313. Jul 7 00:13:54.925695 systemd[1]: Started cri-containerd-d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db.scope - libcontainer container d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db. 
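The "connecting to shim … protocol=ttrpc version=3" entries above show containerd attaching to each sandbox's shim over a per-shim unix socket under /run/containerd/s/, speaking ttrpc (a lightweight gRPC variant). A minimal sketch of the transport layer only, assuming a reachable socket path (this one is copied from the log and exists only while that shim runs; real clients wrap the connection with the github.com/containerd/ttrpc package, omitted here):

    // shimdial.go: dial a containerd shim socket the way the
    // "connecting to shim" entries describe. Transport only.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/run/containerd/s/2811d1c0f37d8476753e3ed2e8b854fb6fe98793dd88474457c763800b52edd1"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("connected to shim at", sock)
    }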
Jul 7 00:13:55.019893 containerd[1593]: time="2025-07-07T00:13:55.019837686Z" level=info msg="connecting to shim b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152" address="unix:///run/containerd/s/b106cbcce38d45357fe233403dafdd7af99df6717e344be4be32c81c8f465a4f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:13:55.026573 containerd[1593]: time="2025-07-07T00:13:55.026519684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvfcw,Uid:73508913-568d-46c1-967a-af6d170334ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"e663aafc74d634c48ba961de8fde993b5b2037d6a928db8b73e577c9456b2313\"" Jul 7 00:13:55.027291 kubelet[2709]: E0707 00:13:55.027259 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:55.032389 containerd[1593]: time="2025-07-07T00:13:55.031457584Z" level=info msg="CreateContainer within sandbox \"e663aafc74d634c48ba961de8fde993b5b2037d6a928db8b73e577c9456b2313\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:13:55.034247 containerd[1593]: time="2025-07-07T00:13:55.034206658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nfnwt,Uid:247daba8-969a-4f97-b0ed-5fc6839399b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\"" Jul 7 00:13:55.037204 kubelet[2709]: E0707 00:13:55.037166 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:55.040651 containerd[1593]: time="2025-07-07T00:13:55.040595627Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:13:55.047181 containerd[1593]: time="2025-07-07T00:13:55.047117538Z" level=info msg="Container 78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:13:55.060625 containerd[1593]: time="2025-07-07T00:13:55.060560023Z" level=info msg="CreateContainer within sandbox \"e663aafc74d634c48ba961de8fde993b5b2037d6a928db8b73e577c9456b2313\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab\"" Jul 7 00:13:55.061653 containerd[1593]: time="2025-07-07T00:13:55.061546895Z" level=info msg="StartContainer for \"78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab\"" Jul 7 00:13:55.063230 containerd[1593]: time="2025-07-07T00:13:55.063203456Z" level=info msg="connecting to shim 78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab" address="unix:///run/containerd/s/2811d1c0f37d8476753e3ed2e8b854fb6fe98793dd88474457c763800b52edd1" protocol=ttrpc version=3 Jul 7 00:13:55.065828 systemd[1]: Started cri-containerd-b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152.scope - libcontainer container b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152. 
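Each pod above walks the same CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer places a container inside that sandbox, StartContainer runs it. A tiny stub expressing just that call order (the interface and fakeRuntime type are stand-ins of mine, not containerd's real client):

    // criflow.go: the CRI call order visible in this log, as a stub.
    package main

    import "fmt"

    type runtimeService interface {
        RunPodSandbox(config string) (sandboxID string)
        CreateContainer(sandboxID, config string) (containerID string)
        StartContainer(containerID string)
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(config string) string {
        f.n++
        return fmt.Sprintf("sandbox-%d (%s)", f.n, config)
    }
    func (f *fakeRuntime) CreateContainer(sandboxID, config string) string {
        return "container-in-" + sandboxID
    }
    func (f *fakeRuntime) StartContainer(id string) { fmt.Println("started", id) }

    func main() {
        var rt runtimeService = &fakeRuntime{}
        sb := rt.RunPodSandbox("kube-proxy-pvfcw")
        c := rt.CreateContainer(sb, "kube-proxy")
        rt.StartContainer(c)
    }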
Jul 7 00:13:55.083808 kubelet[2709]: E0707 00:13:55.082734 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:55.093775 systemd[1]: Started cri-containerd-78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab.scope - libcontainer container 78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab. Jul 7 00:13:55.138834 containerd[1593]: time="2025-07-07T00:13:55.138726422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ksp95,Uid:58a66540-bae9-49c6-b6da-205ee80eb0ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\"" Jul 7 00:13:55.140530 kubelet[2709]: E0707 00:13:55.140508 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:55.155427 containerd[1593]: time="2025-07-07T00:13:55.155386555Z" level=info msg="StartContainer for \"78346e4ed5124e9cafc83ee695237158d39822b019a2376c5ab53930962650ab\" returns successfully" Jul 7 00:13:55.607612 kubelet[2709]: E0707 00:13:55.607552 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:55.607612 kubelet[2709]: E0707 00:13:55.607608 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:13:55.907024 kubelet[2709]: I0707 00:13:55.906853 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvfcw" podStartSLOduration=2.906830392 podStartE2EDuration="2.906830392s" podCreationTimestamp="2025-07-07 00:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:13:55.906795897 +0000 UTC m=+9.426131998" watchObservedRunningTime="2025-07-07 00:13:55.906830392 +0000 UTC m=+9.426166483" Jul 7 00:13:59.147832 kubelet[2709]: E0707 00:13:59.147617 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:00.373935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3688066891.mount: Deactivated successfully. Jul 7 00:14:02.360780 update_engine[1567]: I20250707 00:14:02.360656 1567 update_attempter.cc:509] Updating boot flags... 
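The PullImage request a few entries back pins the Cilium image by tag and digest at once ("v1.12.5@sha256:…"). When both are present the digest is authoritative, which is why the "Pulled image" entry further below reports an empty repo tag. A sketch splitting such a reference (the parser is deliberately simplified; production code should use a real reference parser such as github.com/distribution/reference):

    // refsplit.go: split a "repo:tag@sha256:..." image reference.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

        repoTag, digest, _ := strings.Cut(ref, "@")
        // The tag is what follows the last colon after the final slash,
        // so a registry port like host:5000/img is not mistaken for a tag.
        repo, tag := repoTag, ""
        if i := strings.LastIndex(repoTag, ":"); i > strings.LastIndex(repoTag, "/") {
            repo, tag = repoTag[:i], repoTag[i+1:]
        }
        fmt.Printf("repo=%s tag=%s digest=%s\n", repo, tag, digest)
    }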
Jul 7 00:14:05.606733 containerd[1593]: time="2025-07-07T00:14:05.606653746Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:05.607370 containerd[1593]: time="2025-07-07T00:14:05.607302884Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:14:05.608482 containerd[1593]: time="2025-07-07T00:14:05.608440436Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:05.610009 containerd[1593]: time="2025-07-07T00:14:05.609975120Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.569326534s" Jul 7 00:14:05.610058 containerd[1593]: time="2025-07-07T00:14:05.610009416Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:14:05.613597 containerd[1593]: time="2025-07-07T00:14:05.613547911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:14:05.623540 containerd[1593]: time="2025-07-07T00:14:05.623499643Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:14:05.632598 containerd[1593]: time="2025-07-07T00:14:05.632544680Z" level=info msg="Container fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:05.641608 containerd[1593]: time="2025-07-07T00:14:05.641562093Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\"" Jul 7 00:14:05.642048 containerd[1593]: time="2025-07-07T00:14:05.642020892Z" level=info msg="StartContainer for \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\"" Jul 7 00:14:05.643025 containerd[1593]: time="2025-07-07T00:14:05.642981519Z" level=info msg="connecting to shim fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0" address="unix:///run/containerd/s/4f2dc71d08952ea513db76cc28d4c71dafd48bdd72954164d4f1341c7f666a78" protocol=ttrpc version=3 Jul 7 00:14:05.695848 systemd[1]: Started cri-containerd-fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0.scope - libcontainer container fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0. Jul 7 00:14:05.744567 systemd[1]: cri-containerd-fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0.scope: Deactivated successfully. 
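The pull that just completed moved 166,730,503 bytes in 10.569326534s, roughly 15 MiB/s; that transfer accounts for nearly all of the ten-second gap since the PullImage request at 00:13:55. A quick back-of-envelope check with the numbers copied from the log:

    // pullrate.go: throughput of the cilium image pull recorded above.
    package main

    import "fmt"

    func main() {
        const bytesRead = 166730503   // "bytes read" from the log
        const seconds = 10.569326534  // pull duration from the log
        mib := float64(bytesRead) / (1 << 20)
        fmt.Printf("%.1f MiB in %.1f s = %.1f MiB/s\n", mib, seconds, mib/seconds) // ~15.0 MiB/s
    }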
Jul 7 00:14:05.746296 containerd[1593]: time="2025-07-07T00:14:05.746257130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" id:\"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" pid:3148 exited_at:{seconds:1751847245 nanos:745441477}" Jul 7 00:14:05.840004 containerd[1593]: time="2025-07-07T00:14:05.839931153Z" level=info msg="received exit event container_id:\"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" id:\"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" pid:3148 exited_at:{seconds:1751847245 nanos:745441477}" Jul 7 00:14:05.841457 containerd[1593]: time="2025-07-07T00:14:05.841426582Z" level=info msg="StartContainer for \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" returns successfully" Jul 7 00:14:05.863136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0-rootfs.mount: Deactivated successfully. Jul 7 00:14:06.627882 kubelet[2709]: E0707 00:14:06.627839 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:06.630207 containerd[1593]: time="2025-07-07T00:14:06.630116705Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:14:06.644466 containerd[1593]: time="2025-07-07T00:14:06.644283809Z" level=info msg="Container 386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:06.654141 containerd[1593]: time="2025-07-07T00:14:06.654083673Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\"" Jul 7 00:14:06.656602 containerd[1593]: time="2025-07-07T00:14:06.655924384Z" level=info msg="StartContainer for \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\"" Jul 7 00:14:06.656884 containerd[1593]: time="2025-07-07T00:14:06.656830498Z" level=info msg="connecting to shim 386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a" address="unix:///run/containerd/s/4f2dc71d08952ea513db76cc28d4c71dafd48bdd72954164d4f1341c7f666a78" protocol=ttrpc version=3 Jul 7 00:14:06.676723 systemd[1]: Started cri-containerd-386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a.scope - libcontainer container 386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a. Jul 7 00:14:06.708551 containerd[1593]: time="2025-07-07T00:14:06.708502753Z" level=info msg="StartContainer for \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" returns successfully" Jul 7 00:14:06.724415 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:14:06.724882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:14:06.725767 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:14:06.728132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
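The TaskExit events above carry the container's exit time as a raw epoch pair. Converting seconds:1751847245 nanos:745441477 back to wall-clock time gives 2025-07-07T00:14:05.745Z, which lines up with the journal's own timestamp on the entry. A one-liner to verify:

    // exitat.go: convert the exited_at {seconds, nanos} pair from the
    // TaskExit event above into a wall-clock time.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Unix(1751847245, 745441477).UTC()
        fmt.Println(t.Format(time.RFC3339Nano)) // 2025-07-07T00:14:05.745441477Z
    }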
Jul 7 00:14:06.728898 containerd[1593]: time="2025-07-07T00:14:06.728806674Z" level=info msg="received exit event container_id:\"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" id:\"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" pid:3195 exited_at:{seconds:1751847246 nanos:728480537}" Jul 7 00:14:06.729123 containerd[1593]: time="2025-07-07T00:14:06.728833203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" id:\"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" pid:3195 exited_at:{seconds:1751847246 nanos:728480537}" Jul 7 00:14:06.731235 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:14:06.731851 systemd[1]: cri-containerd-386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a.scope: Deactivated successfully. Jul 7 00:14:06.758999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:14:07.631128 kubelet[2709]: E0707 00:14:07.631085 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:07.632759 containerd[1593]: time="2025-07-07T00:14:07.632723097Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:14:07.642698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a-rootfs.mount: Deactivated successfully. Jul 7 00:14:07.698414 containerd[1593]: time="2025-07-07T00:14:07.698366530Z" level=info msg="Container da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:07.704705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075263128.mount: Deactivated successfully. Jul 7 00:14:07.708798 containerd[1593]: time="2025-07-07T00:14:07.708760668Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\"" Jul 7 00:14:07.709525 containerd[1593]: time="2025-07-07T00:14:07.709498423Z" level=info msg="StartContainer for \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\"" Jul 7 00:14:07.711959 containerd[1593]: time="2025-07-07T00:14:07.711900193Z" level=info msg="connecting to shim da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1" address="unix:///run/containerd/s/4f2dc71d08952ea513db76cc28d4c71dafd48bdd72954164d4f1341c7f666a78" protocol=ttrpc version=3 Jul 7 00:14:07.756804 systemd[1]: Started cri-containerd-da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1.scope - libcontainer container da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1. Jul 7 00:14:07.806151 systemd[1]: cri-containerd-da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1.scope: Deactivated successfully. 
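The container names trace Cilium's init chain in order: mount-cgroup, then apply-sysctl-overwrites (which is why systemd-sysctl restarts above), then mount-bpf-fs, followed later by clean-cilium-state, before the long-lived cilium-agent starts. The mount-bpf-fs step amounts to mounting the BPF filesystem at /sys/fs/bpf. A hedged sketch of that single operation (requires root; the actual init container is a shell script, not this Go program):

    // bpffsmount.go: roughly what Cilium's mount-bpf-fs init step does.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            // EBUSY typically just means bpffs is already mounted.
            fmt.Fprintln(os.Stderr, "mount:", err)
            os.Exit(1)
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf")
    }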
Jul 7 00:14:07.807885 containerd[1593]: time="2025-07-07T00:14:07.807827621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" id:\"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" pid:3257 exited_at:{seconds:1751847247 nanos:807469755}" Jul 7 00:14:07.830569 containerd[1593]: time="2025-07-07T00:14:07.830540474Z" level=info msg="received exit event container_id:\"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" id:\"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" pid:3257 exited_at:{seconds:1751847247 nanos:807469755}" Jul 7 00:14:07.833166 containerd[1593]: time="2025-07-07T00:14:07.833133456Z" level=info msg="StartContainer for \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" returns successfully" Jul 7 00:14:07.854860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1-rootfs.mount: Deactivated successfully. Jul 7 00:14:08.022894 containerd[1593]: time="2025-07-07T00:14:08.022762417Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:08.023747 containerd[1593]: time="2025-07-07T00:14:08.023700319Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:14:08.024831 containerd[1593]: time="2025-07-07T00:14:08.024794405Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:14:08.025854 containerd[1593]: time="2025-07-07T00:14:08.025783204Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.412184355s" Jul 7 00:14:08.025854 containerd[1593]: time="2025-07-07T00:14:08.025853557Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:14:08.028007 containerd[1593]: time="2025-07-07T00:14:08.027973863Z" level=info msg="CreateContainer within sandbox \"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:14:08.035626 containerd[1593]: time="2025-07-07T00:14:08.035572509Z" level=info msg="Container a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:08.042050 containerd[1593]: time="2025-07-07T00:14:08.042018568Z" level=info msg="CreateContainer within sandbox \"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\"" Jul 7 00:14:08.042492 containerd[1593]: 
time="2025-07-07T00:14:08.042450034Z" level=info msg="StartContainer for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\"" Jul 7 00:14:08.043164 containerd[1593]: time="2025-07-07T00:14:08.043133474Z" level=info msg="connecting to shim a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8" address="unix:///run/containerd/s/b106cbcce38d45357fe233403dafdd7af99df6717e344be4be32c81c8f465a4f" protocol=ttrpc version=3 Jul 7 00:14:08.071759 systemd[1]: Started cri-containerd-a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8.scope - libcontainer container a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8. Jul 7 00:14:08.103120 containerd[1593]: time="2025-07-07T00:14:08.103071648Z" level=info msg="StartContainer for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" returns successfully" Jul 7 00:14:08.639096 kubelet[2709]: E0707 00:14:08.639015 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:08.646925 containerd[1593]: time="2025-07-07T00:14:08.644881741Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:14:08.654030 kubelet[2709]: E0707 00:14:08.653912 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:08.667015 containerd[1593]: time="2025-07-07T00:14:08.666954043Z" level=info msg="Container 037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:08.678494 containerd[1593]: time="2025-07-07T00:14:08.678445504Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\"" Jul 7 00:14:08.679005 containerd[1593]: time="2025-07-07T00:14:08.678975807Z" level=info msg="StartContainer for \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\"" Jul 7 00:14:08.680104 containerd[1593]: time="2025-07-07T00:14:08.680060626Z" level=info msg="connecting to shim 037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb" address="unix:///run/containerd/s/4f2dc71d08952ea513db76cc28d4c71dafd48bdd72954164d4f1341c7f666a78" protocol=ttrpc version=3 Jul 7 00:14:08.711843 systemd[1]: Started cri-containerd-037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb.scope - libcontainer container 037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb. Jul 7 00:14:08.750604 systemd[1]: cri-containerd-037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb.scope: Deactivated successfully. 
Jul 7 00:14:08.751383 containerd[1593]: time="2025-07-07T00:14:08.751338845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" id:\"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" pid:3336 exited_at:{seconds:1751847248 nanos:751032857}" Jul 7 00:14:08.754597 containerd[1593]: time="2025-07-07T00:14:08.753795206Z" level=info msg="received exit event container_id:\"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" id:\"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" pid:3336 exited_at:{seconds:1751847248 nanos:751032857}" Jul 7 00:14:08.764743 containerd[1593]: time="2025-07-07T00:14:08.764696632Z" level=info msg="StartContainer for \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" returns successfully" Jul 7 00:14:08.780504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb-rootfs.mount: Deactivated successfully. Jul 7 00:14:09.658766 kubelet[2709]: E0707 00:14:09.658710 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:09.659534 kubelet[2709]: E0707 00:14:09.659513 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:09.662923 containerd[1593]: time="2025-07-07T00:14:09.662882173Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:14:09.675407 kubelet[2709]: I0707 00:14:09.675341 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ksp95" podStartSLOduration=3.7899409459999998 podStartE2EDuration="16.675325722s" podCreationTimestamp="2025-07-07 00:13:53 +0000 UTC" firstStartedPulling="2025-07-07 00:13:55.141319208 +0000 UTC m=+8.660655299" lastFinishedPulling="2025-07-07 00:14:08.026703984 +0000 UTC m=+21.546040075" observedRunningTime="2025-07-07 00:14:08.682845428 +0000 UTC m=+22.202181529" watchObservedRunningTime="2025-07-07 00:14:09.675325722 +0000 UTC m=+23.194661813" Jul 7 00:14:09.677219 containerd[1593]: time="2025-07-07T00:14:09.677158402Z" level=info msg="Container 2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:09.685118 containerd[1593]: time="2025-07-07T00:14:09.685075514Z" level=info msg="CreateContainer within sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\"" Jul 7 00:14:09.685607 containerd[1593]: time="2025-07-07T00:14:09.685547466Z" level=info msg="StartContainer for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\"" Jul 7 00:14:09.686562 containerd[1593]: time="2025-07-07T00:14:09.686538798Z" level=info msg="connecting to shim 2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c" address="unix:///run/containerd/s/4f2dc71d08952ea513db76cc28d4c71dafd48bdd72954164d4f1341c7f666a78" protocol=ttrpc version=3 Jul 7 00:14:09.710771 systemd[1]: Started 
cri-containerd-2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c.scope - libcontainer container 2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c. Jul 7 00:14:09.752505 containerd[1593]: time="2025-07-07T00:14:09.752449788Z" level=info msg="StartContainer for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" returns successfully" Jul 7 00:14:09.829910 containerd[1593]: time="2025-07-07T00:14:09.829860355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" id:\"04e9d7c36fd4387e3dac826661c45e5bd4e8548dcba03233d91584a03dbca8fb\" pid:3406 exited_at:{seconds:1751847249 nanos:829434070}" Jul 7 00:14:09.904806 kubelet[2709]: I0707 00:14:09.904769 2709 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:14:09.948622 systemd[1]: Created slice kubepods-burstable-pod1c553ca2_c5b7_4b11_ada3_ceae5cc90a7b.slice - libcontainer container kubepods-burstable-pod1c553ca2_c5b7_4b11_ada3_ceae5cc90a7b.slice. Jul 7 00:14:09.964356 systemd[1]: Created slice kubepods-burstable-pod3f19e193_b82a_44ff_9f4e_3c4b0ef482fe.slice - libcontainer container kubepods-burstable-pod3f19e193_b82a_44ff_9f4e_3c4b0ef482fe.slice. Jul 7 00:14:10.059905 kubelet[2709]: I0707 00:14:10.059846 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78rhq\" (UniqueName: \"kubernetes.io/projected/1c553ca2-c5b7-4b11-ada3-ceae5cc90a7b-kube-api-access-78rhq\") pod \"coredns-7c65d6cfc9-zgz92\" (UID: \"1c553ca2-c5b7-4b11-ada3-ceae5cc90a7b\") " pod="kube-system/coredns-7c65d6cfc9-zgz92" Jul 7 00:14:10.059905 kubelet[2709]: I0707 00:14:10.059900 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nm67\" (UniqueName: \"kubernetes.io/projected/3f19e193-b82a-44ff-9f4e-3c4b0ef482fe-kube-api-access-8nm67\") pod \"coredns-7c65d6cfc9-jkhvf\" (UID: \"3f19e193-b82a-44ff-9f4e-3c4b0ef482fe\") " pod="kube-system/coredns-7c65d6cfc9-jkhvf" Jul 7 00:14:10.059905 kubelet[2709]: I0707 00:14:10.059923 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c553ca2-c5b7-4b11-ada3-ceae5cc90a7b-config-volume\") pod \"coredns-7c65d6cfc9-zgz92\" (UID: \"1c553ca2-c5b7-4b11-ada3-ceae5cc90a7b\") " pod="kube-system/coredns-7c65d6cfc9-zgz92" Jul 7 00:14:10.060144 kubelet[2709]: I0707 00:14:10.059943 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f19e193-b82a-44ff-9f4e-3c4b0ef482fe-config-volume\") pod \"coredns-7c65d6cfc9-jkhvf\" (UID: \"3f19e193-b82a-44ff-9f4e-3c4b0ef482fe\") " pod="kube-system/coredns-7c65d6cfc9-jkhvf" Jul 7 00:14:10.257431 kubelet[2709]: E0707 00:14:10.256997 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:10.266142 containerd[1593]: time="2025-07-07T00:14:10.266079243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zgz92,Uid:1c553ca2-c5b7-4b11-ada3-ceae5cc90a7b,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:10.267966 kubelet[2709]: E0707 00:14:10.267915 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:10.268903 containerd[1593]: time="2025-07-07T00:14:10.268618717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jkhvf,Uid:3f19e193-b82a-44ff-9f4e-3c4b0ef482fe,Namespace:kube-system,Attempt:0,}" Jul 7 00:14:10.668949 kubelet[2709]: E0707 00:14:10.668894 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:11.670445 kubelet[2709]: E0707 00:14:11.670395 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:11.982707 systemd-networkd[1477]: cilium_host: Link UP Jul 7 00:14:11.982885 systemd-networkd[1477]: cilium_net: Link UP Jul 7 00:14:11.983370 systemd-networkd[1477]: cilium_net: Gained carrier Jul 7 00:14:11.983683 systemd-networkd[1477]: cilium_host: Gained carrier Jul 7 00:14:12.083851 systemd-networkd[1477]: cilium_vxlan: Link UP Jul 7 00:14:12.083863 systemd-networkd[1477]: cilium_vxlan: Gained carrier Jul 7 00:14:12.293620 kernel: NET: Registered PF_ALG protocol family Jul 7 00:14:12.354818 systemd-networkd[1477]: cilium_net: Gained IPv6LL Jul 7 00:14:12.491845 systemd-networkd[1477]: cilium_host: Gained IPv6LL Jul 7 00:14:12.673252 kubelet[2709]: E0707 00:14:12.672949 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:12.947410 systemd-networkd[1477]: lxc_health: Link UP Jul 7 00:14:12.949031 systemd-networkd[1477]: lxc_health: Gained carrier Jul 7 00:14:13.154771 systemd-networkd[1477]: cilium_vxlan: Gained IPv6LL Jul 7 00:14:13.342319 systemd-networkd[1477]: lxc57c59c23cbc4: Link UP Jul 7 00:14:13.342668 kernel: eth0: renamed from tmpd030d Jul 7 00:14:13.343687 systemd-networkd[1477]: lxc57c59c23cbc4: Gained carrier Jul 7 00:14:13.368384 systemd-networkd[1477]: lxc5b91d00c363a: Link UP Jul 7 00:14:13.369525 systemd-networkd[1477]: lxc5b91d00c363a: Gained carrier Jul 7 00:14:13.369839 kernel: eth0: renamed from tmp866af Jul 7 00:14:14.050828 systemd-networkd[1477]: lxc_health: Gained IPv6LL Jul 7 00:14:14.652929 kubelet[2709]: E0707 00:14:14.652877 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:14.676482 kubelet[2709]: E0707 00:14:14.676435 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:14.771064 kubelet[2709]: I0707 00:14:14.770977 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nfnwt" podStartSLOduration=11.195555656 podStartE2EDuration="21.770960394s" podCreationTimestamp="2025-07-07 00:13:53 +0000 UTC" firstStartedPulling="2025-07-07 00:13:55.037893742 +0000 UTC m=+8.557229833" lastFinishedPulling="2025-07-07 00:14:05.61329848 +0000 UTC m=+19.132634571" observedRunningTime="2025-07-07 00:14:10.684880742 +0000 UTC m=+24.204216843" watchObservedRunningTime="2025-07-07 00:14:14.770960394 +0000 UTC m=+28.290296485" Jul 7 00:14:14.818822 systemd-networkd[1477]: lxc57c59c23cbc4: Gained IPv6LL Jul 7 00:14:15.038378 systemd[1]: Started 
sshd@7-10.0.0.122:22-10.0.0.1:50634.service - OpenSSH per-connection server daemon (10.0.0.1:50634). Jul 7 00:14:15.097873 sshd[3878]: Accepted publickey for core from 10.0.0.1 port 50634 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:15.099867 sshd-session[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:15.105028 systemd-logind[1564]: New session 8 of user core. Jul 7 00:14:15.115773 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:14:15.202760 systemd-networkd[1477]: lxc5b91d00c363a: Gained IPv6LL Jul 7 00:14:15.246439 sshd[3880]: Connection closed by 10.0.0.1 port 50634 Jul 7 00:14:15.246802 sshd-session[3878]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:15.251401 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:50634.service: Deactivated successfully. Jul 7 00:14:15.253608 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:14:15.254485 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:14:15.255877 systemd-logind[1564]: Removed session 8. Jul 7 00:14:16.656745 containerd[1593]: time="2025-07-07T00:14:16.656695095Z" level=info msg="connecting to shim 866af95713afd9bda089b60c7460b8ed0ec75f7176957454b89388720df9fa8a" address="unix:///run/containerd/s/67d6b41ecd9f58ad0899a0ede69c28d074dc0724762dbbf02ce71f28879b6c63" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:16.669962 containerd[1593]: time="2025-07-07T00:14:16.669373678Z" level=info msg="connecting to shim d030d3e5fbfb72d6c621d2021b52d630b6647f26906f6ea256f8f6ebef770720" address="unix:///run/containerd/s/b8cfa1fde95fed429e93bf566964c94ff200cd4ad15e4d3e62d77b41c96b9010" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:14:16.692730 systemd[1]: Started cri-containerd-866af95713afd9bda089b60c7460b8ed0ec75f7176957454b89388720df9fa8a.scope - libcontainer container 866af95713afd9bda089b60c7460b8ed0ec75f7176957454b89388720df9fa8a. Jul 7 00:14:16.697446 systemd[1]: Started cri-containerd-d030d3e5fbfb72d6c621d2021b52d630b6647f26906f6ea256f8f6ebef770720.scope - libcontainer container d030d3e5fbfb72d6c621d2021b52d630b6647f26906f6ea256f8f6ebef770720. 
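The systemd-networkd link events a few lines up show Cilium's datapath coming together: a cilium_host/cilium_net veth pair, a cilium_vxlan overlay device, lxc_health, and one lxc* veth per endpoint, with the kernel's "eth0: renamed from tmpd030d" lines marking the container-side peer being renamed inside each pod's namespace. A minimal sketch of the kind of veth pair behind those "Gained carrier" events, using the third-party github.com/vishvananda/netlink package (requires root; names taken from the log, not Cilium's actual code):

    // vethpair.go: create and bring up a named veth pair.
    package main

    import (
        "fmt"
        "os"

        "github.com/vishvananda/netlink"
    )

    func main() {
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
            PeerName:  "cilium_net",
        }
        if err := netlink.LinkAdd(veth); err != nil {
            fmt.Fprintln(os.Stderr, "link add:", err)
            os.Exit(1)
        }
        // Bring both ends up, producing "Gained carrier" in networkd.
        for _, name := range []string{"cilium_host", "cilium_net"} {
            link, err := netlink.LinkByName(name)
            if err == nil {
                err = netlink.LinkSetUp(link)
            }
            if err != nil {
                fmt.Fprintln(os.Stderr, name, err)
                os.Exit(1)
            }
        }
        fmt.Println("veth pair up")
    }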
Jul 7 00:14:16.709528 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:14:16.713958 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:14:16.743445 containerd[1593]: time="2025-07-07T00:14:16.743393518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zgz92,Uid:1c553ca2-c5b7-4b11-ada3-ceae5cc90a7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"866af95713afd9bda089b60c7460b8ed0ec75f7176957454b89388720df9fa8a\"" Jul 7 00:14:16.744110 kubelet[2709]: E0707 00:14:16.744088 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:16.749199 containerd[1593]: time="2025-07-07T00:14:16.749110578Z" level=info msg="CreateContainer within sandbox \"866af95713afd9bda089b60c7460b8ed0ec75f7176957454b89388720df9fa8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:14:16.752210 containerd[1593]: time="2025-07-07T00:14:16.752174379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jkhvf,Uid:3f19e193-b82a-44ff-9f4e-3c4b0ef482fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"d030d3e5fbfb72d6c621d2021b52d630b6647f26906f6ea256f8f6ebef770720\"" Jul 7 00:14:16.752919 kubelet[2709]: E0707 00:14:16.752891 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:16.754670 containerd[1593]: time="2025-07-07T00:14:16.754642368Z" level=info msg="CreateContainer within sandbox \"d030d3e5fbfb72d6c621d2021b52d630b6647f26906f6ea256f8f6ebef770720\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:14:16.796934 containerd[1593]: time="2025-07-07T00:14:16.796880903Z" level=info msg="Container 40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:16.804955 containerd[1593]: time="2025-07-07T00:14:16.804895110Z" level=info msg="Container f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:14:16.813931 containerd[1593]: time="2025-07-07T00:14:16.813876999Z" level=info msg="CreateContainer within sandbox \"866af95713afd9bda089b60c7460b8ed0ec75f7176957454b89388720df9fa8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315\"" Jul 7 00:14:16.814598 containerd[1593]: time="2025-07-07T00:14:16.814513828Z" level=info msg="StartContainer for \"f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315\"" Jul 7 00:14:16.815296 containerd[1593]: time="2025-07-07T00:14:16.814969537Z" level=info msg="CreateContainer within sandbox \"d030d3e5fbfb72d6c621d2021b52d630b6647f26906f6ea256f8f6ebef770720\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42\"" Jul 7 00:14:16.815463 containerd[1593]: time="2025-07-07T00:14:16.815423853Z" level=info msg="StartContainer for \"40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42\"" Jul 7 00:14:16.815881 containerd[1593]: time="2025-07-07T00:14:16.815853251Z" level=info msg="connecting to shim f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315" 
address="unix:///run/containerd/s/67d6b41ecd9f58ad0899a0ede69c28d074dc0724762dbbf02ce71f28879b6c63" protocol=ttrpc version=3 Jul 7 00:14:16.816201 containerd[1593]: time="2025-07-07T00:14:16.816174356Z" level=info msg="connecting to shim 40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42" address="unix:///run/containerd/s/b8cfa1fde95fed429e93bf566964c94ff200cd4ad15e4d3e62d77b41c96b9010" protocol=ttrpc version=3 Jul 7 00:14:16.847770 systemd[1]: Started cri-containerd-40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42.scope - libcontainer container 40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42. Jul 7 00:14:16.849598 systemd[1]: Started cri-containerd-f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315.scope - libcontainer container f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315. Jul 7 00:14:16.889799 containerd[1593]: time="2025-07-07T00:14:16.889622561Z" level=info msg="StartContainer for \"40e1dd4d5506198b7e6a139f63f4714df618e771bcb36c81b408d2c2640c0e42\" returns successfully" Jul 7 00:14:16.889951 containerd[1593]: time="2025-07-07T00:14:16.889898600Z" level=info msg="StartContainer for \"f29ea8639b961b5e6fdc39a80f91ba0500f1078040e7c6b38ea922a46b179315\" returns successfully" Jul 7 00:14:17.687278 kubelet[2709]: E0707 00:14:17.687234 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:17.689964 kubelet[2709]: E0707 00:14:17.689873 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:17.698472 kubelet[2709]: I0707 00:14:17.698409 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zgz92" podStartSLOduration=24.698392707 podStartE2EDuration="24.698392707s" podCreationTimestamp="2025-07-07 00:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:17.697871215 +0000 UTC m=+31.217207347" watchObservedRunningTime="2025-07-07 00:14:17.698392707 +0000 UTC m=+31.217728799" Jul 7 00:14:17.722562 kubelet[2709]: I0707 00:14:17.722245 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jkhvf" podStartSLOduration=24.72209833 podStartE2EDuration="24.72209833s" podCreationTimestamp="2025-07-07 00:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:14:17.721062659 +0000 UTC m=+31.240398760" watchObservedRunningTime="2025-07-07 00:14:17.72209833 +0000 UTC m=+31.241434431" Jul 7 00:14:18.691803 kubelet[2709]: E0707 00:14:18.691756 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:18.691803 kubelet[2709]: E0707 00:14:18.691784 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:19.693457 kubelet[2709]: E0707 00:14:19.693408 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:19.693457 kubelet[2709]: E0707 00:14:19.693408 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:14:20.264101 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:48602.service - OpenSSH per-connection server daemon (10.0.0.1:48602). Jul 7 00:14:20.323892 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 48602 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:20.325668 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:20.330950 systemd-logind[1564]: New session 9 of user core. Jul 7 00:14:20.338733 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:14:20.477884 sshd[4077]: Connection closed by 10.0.0.1 port 48602 Jul 7 00:14:20.478235 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:20.483145 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:48602.service: Deactivated successfully. Jul 7 00:14:20.485243 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:14:20.486241 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:14:20.487479 systemd-logind[1564]: Removed session 9. Jul 7 00:14:25.494692 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:48608.service - OpenSSH per-connection server daemon (10.0.0.1:48608). Jul 7 00:14:25.535153 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 48608 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:25.536571 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:25.540858 systemd-logind[1564]: New session 10 of user core. Jul 7 00:14:25.549718 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:14:25.734106 sshd[4095]: Connection closed by 10.0.0.1 port 48608 Jul 7 00:14:25.734525 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:25.738736 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:48608.service: Deactivated successfully. Jul 7 00:14:25.741526 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:14:25.742752 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:14:25.744252 systemd-logind[1564]: Removed session 10. Jul 7 00:14:30.751094 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:37068.service - OpenSSH per-connection server daemon (10.0.0.1:37068). Jul 7 00:14:30.804081 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 37068 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:30.805469 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:30.810832 systemd-logind[1564]: New session 11 of user core. Jul 7 00:14:30.821715 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:14:30.934727 sshd[4112]: Connection closed by 10.0.0.1 port 37068 Jul 7 00:14:30.935225 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:30.944224 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:37068.service: Deactivated successfully. Jul 7 00:14:30.946075 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:14:30.947064 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 00:14:30.950834 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074). Jul 7 00:14:30.951815 systemd-logind[1564]: Removed session 11. Jul 7 00:14:30.999412 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:31.001184 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:31.006458 systemd-logind[1564]: New session 12 of user core. Jul 7 00:14:31.015731 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:14:31.262326 sshd[4130]: Connection closed by 10.0.0.1 port 37074 Jul 7 00:14:31.262627 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:31.274772 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:37074.service: Deactivated successfully. Jul 7 00:14:31.277399 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:14:31.278231 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:14:31.283839 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:37088.service - OpenSSH per-connection server daemon (10.0.0.1:37088). Jul 7 00:14:31.285224 systemd-logind[1564]: Removed session 12. Jul 7 00:14:31.348375 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 37088 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:31.350121 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:31.355024 systemd-logind[1564]: New session 13 of user core. Jul 7 00:14:31.364732 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:14:31.508047 sshd[4143]: Connection closed by 10.0.0.1 port 37088 Jul 7 00:14:31.508377 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:31.512421 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:37088.service: Deactivated successfully. Jul 7 00:14:31.514353 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:14:31.515180 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:14:31.516321 systemd-logind[1564]: Removed session 13. Jul 7 00:14:36.522708 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:37100.service - OpenSSH per-connection server daemon (10.0.0.1:37100). Jul 7 00:14:36.562107 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 37100 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:36.563417 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:36.567716 systemd-logind[1564]: New session 14 of user core. Jul 7 00:14:36.577733 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:14:36.684337 sshd[4158]: Connection closed by 10.0.0.1 port 37100 Jul 7 00:14:36.684703 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:36.688685 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:37100.service: Deactivated successfully. Jul 7 00:14:36.690665 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:14:36.691538 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:14:36.692849 systemd-logind[1564]: Removed session 14. Jul 7 00:14:41.702333 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:43412.service - OpenSSH per-connection server daemon (10.0.0.1:43412). 
Jul 7 00:14:41.757604 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 43412 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:41.759413 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:41.764220 systemd-logind[1564]: New session 15 of user core. Jul 7 00:14:41.773800 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:14:41.885939 sshd[4173]: Connection closed by 10.0.0.1 port 43412 Jul 7 00:14:41.886282 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:41.890841 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:43412.service: Deactivated successfully. Jul 7 00:14:41.893004 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:14:41.893947 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:14:41.895400 systemd-logind[1564]: Removed session 15. Jul 7 00:14:46.903152 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:43422.service - OpenSSH per-connection server daemon (10.0.0.1:43422). Jul 7 00:14:46.959994 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 43422 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:46.961916 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:46.967222 systemd-logind[1564]: New session 16 of user core. Jul 7 00:14:46.976945 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:14:47.095956 sshd[4190]: Connection closed by 10.0.0.1 port 43422 Jul 7 00:14:47.096340 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:47.107861 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:43422.service: Deactivated successfully. Jul 7 00:14:47.110232 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:14:47.111591 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:14:47.114034 systemd-logind[1564]: Removed session 16. Jul 7 00:14:47.115521 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:43434.service - OpenSSH per-connection server daemon (10.0.0.1:43434). Jul 7 00:14:47.169716 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 43434 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:47.172117 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:47.177756 systemd-logind[1564]: New session 17 of user core. Jul 7 00:14:47.188903 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:14:47.442498 sshd[4206]: Connection closed by 10.0.0.1 port 43434 Jul 7 00:14:47.443267 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:47.453113 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:43434.service: Deactivated successfully. Jul 7 00:14:47.455442 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:14:47.456522 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:14:47.459800 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:43440.service - OpenSSH per-connection server daemon (10.0.0.1:43440). Jul 7 00:14:47.460509 systemd-logind[1564]: Removed session 17. 
Jul 7 00:14:47.525801 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 43440 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:47.527812 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:47.533602 systemd-logind[1564]: New session 18 of user core. Jul 7 00:14:47.541773 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:14:48.970427 sshd[4219]: Connection closed by 10.0.0.1 port 43440 Jul 7 00:14:48.970793 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:48.985514 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:43440.service: Deactivated successfully. Jul 7 00:14:48.987649 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:14:48.989565 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:14:48.993779 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:43444.service - OpenSSH per-connection server daemon (10.0.0.1:43444). Jul 7 00:14:48.995387 systemd-logind[1564]: Removed session 18. Jul 7 00:14:49.047014 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 43444 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:49.048746 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:49.053370 systemd-logind[1564]: New session 19 of user core. Jul 7 00:14:49.061722 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:14:49.282914 sshd[4243]: Connection closed by 10.0.0.1 port 43444 Jul 7 00:14:49.283784 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:49.293037 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:43444.service: Deactivated successfully. Jul 7 00:14:49.295232 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:14:49.296062 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:14:49.299433 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:43454.service - OpenSSH per-connection server daemon (10.0.0.1:43454). Jul 7 00:14:49.300140 systemd-logind[1564]: Removed session 19. Jul 7 00:14:49.351993 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:49.353817 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:49.358847 systemd-logind[1564]: New session 20 of user core. Jul 7 00:14:49.372733 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:14:49.486634 sshd[4256]: Connection closed by 10.0.0.1 port 43454 Jul 7 00:14:49.487004 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:49.491224 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:43454.service: Deactivated successfully. Jul 7 00:14:49.493282 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:14:49.495213 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:14:49.496647 systemd-logind[1564]: Removed session 20. Jul 7 00:14:54.512499 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:47206.service - OpenSSH per-connection server daemon (10.0.0.1:47206). 
Jul 7 00:14:54.576098 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 47206 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:54.577826 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:54.582535 systemd-logind[1564]: New session 21 of user core. Jul 7 00:14:54.592843 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:14:54.705179 sshd[4276]: Connection closed by 10.0.0.1 port 47206 Jul 7 00:14:54.705550 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:54.710395 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:47206.service: Deactivated successfully. Jul 7 00:14:54.712984 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:14:54.714044 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:14:54.715392 systemd-logind[1564]: Removed session 21. Jul 7 00:14:59.722333 systemd[1]: Started sshd@21-10.0.0.122:22-10.0.0.1:55374.service - OpenSSH per-connection server daemon (10.0.0.1:55374). Jul 7 00:14:59.779036 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 55374 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:14:59.780424 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:14:59.784853 systemd-logind[1564]: New session 22 of user core. Jul 7 00:14:59.794695 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:14:59.898974 sshd[4293]: Connection closed by 10.0.0.1 port 55374 Jul 7 00:14:59.899266 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Jul 7 00:14:59.903332 systemd[1]: sshd@21-10.0.0.122:22-10.0.0.1:55374.service: Deactivated successfully. Jul 7 00:14:59.905375 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:14:59.906323 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:14:59.907714 systemd-logind[1564]: Removed session 22. Jul 7 00:15:04.914818 systemd[1]: Started sshd@22-10.0.0.122:22-10.0.0.1:55384.service - OpenSSH per-connection server daemon (10.0.0.1:55384). Jul 7 00:15:04.978524 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 55384 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:15:04.979948 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:04.984276 systemd-logind[1564]: New session 23 of user core. Jul 7 00:15:04.994734 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:15:05.102440 sshd[4309]: Connection closed by 10.0.0.1 port 55384 Jul 7 00:15:05.102765 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:05.107110 systemd[1]: sshd@22-10.0.0.122:22-10.0.0.1:55384.service: Deactivated successfully. Jul 7 00:15:05.109000 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:15:05.109811 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:15:05.111660 systemd-logind[1564]: Removed session 23. 
Jul 7 00:15:06.585021 kubelet[2709]: E0707 00:15:06.584951 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:08.584213 kubelet[2709]: E0707 00:15:08.584164 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:10.120069 systemd[1]: Started sshd@23-10.0.0.122:22-10.0.0.1:46318.service - OpenSSH per-connection server daemon (10.0.0.1:46318). Jul 7 00:15:10.179293 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 46318 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:15:10.181056 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:10.185738 systemd-logind[1564]: New session 24 of user core. Jul 7 00:15:10.196785 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:15:10.308382 sshd[4325]: Connection closed by 10.0.0.1 port 46318 Jul 7 00:15:10.308773 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:10.320444 systemd[1]: sshd@23-10.0.0.122:22-10.0.0.1:46318.service: Deactivated successfully. Jul 7 00:15:10.322382 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:15:10.323388 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:15:10.327204 systemd[1]: Started sshd@24-10.0.0.122:22-10.0.0.1:46322.service - OpenSSH per-connection server daemon (10.0.0.1:46322). Jul 7 00:15:10.328201 systemd-logind[1564]: Removed session 24. Jul 7 00:15:10.392219 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 46322 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:15:10.393745 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:10.398738 systemd-logind[1564]: New session 25 of user core. Jul 7 00:15:10.410838 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:15:10.584201 kubelet[2709]: E0707 00:15:10.584142 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:11.758788 containerd[1593]: time="2025-07-07T00:15:11.758721503Z" level=info msg="StopContainer for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" with timeout 30 (s)" Jul 7 00:15:11.767141 containerd[1593]: time="2025-07-07T00:15:11.767070969Z" level=info msg="Stop container \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" with signal terminated" Jul 7 00:15:11.782107 systemd[1]: cri-containerd-a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8.scope: Deactivated successfully. 
Jul 7 00:15:11.787261 containerd[1593]: time="2025-07-07T00:15:11.787119724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" id:\"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" pid:3302 exited_at:{seconds:1751847311 nanos:785415186}" Jul 7 00:15:11.793851 containerd[1593]: time="2025-07-07T00:15:11.793767347Z" level=info msg="received exit event container_id:\"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" id:\"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" pid:3302 exited_at:{seconds:1751847311 nanos:785415186}" Jul 7 00:15:11.809283 containerd[1593]: time="2025-07-07T00:15:11.809177456Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:15:11.810959 containerd[1593]: time="2025-07-07T00:15:11.810727359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" id:\"b0c0027b79637b439d3232f70a4250dc4b8b2b80a7519c44d7c32df60b19211d\" pid:4367 exited_at:{seconds:1751847311 nanos:810259204}" Jul 7 00:15:11.812788 containerd[1593]: time="2025-07-07T00:15:11.812747659Z" level=info msg="StopContainer for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" with timeout 2 (s)" Jul 7 00:15:11.813209 containerd[1593]: time="2025-07-07T00:15:11.813184685Z" level=info msg="Stop container \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" with signal terminated" Jul 7 00:15:11.823361 systemd-networkd[1477]: lxc_health: Link DOWN Jul 7 00:15:11.825692 systemd-networkd[1477]: lxc_health: Lost carrier Jul 7 00:15:11.827031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8-rootfs.mount: Deactivated successfully. Jul 7 00:15:11.847484 containerd[1593]: time="2025-07-07T00:15:11.847416282Z" level=info msg="StopContainer for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" returns successfully" Jul 7 00:15:11.848219 containerd[1593]: time="2025-07-07T00:15:11.848178890Z" level=info msg="StopPodSandbox for \"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\"" Jul 7 00:15:11.848420 containerd[1593]: time="2025-07-07T00:15:11.848250346Z" level=info msg="Container to stop \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:15:11.848778 systemd[1]: cri-containerd-2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c.scope: Deactivated successfully. Jul 7 00:15:11.849218 systemd[1]: cri-containerd-2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c.scope: Consumed 6.436s CPU time, 124.4M memory peak, 304K read from disk, 13.3M written to disk. 
Jul 7 00:15:11.852002 containerd[1593]: time="2025-07-07T00:15:11.851950568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" id:\"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" pid:3374 exited_at:{seconds:1751847311 nanos:851441475}" Jul 7 00:15:11.852109 containerd[1593]: time="2025-07-07T00:15:11.852053885Z" level=info msg="received exit event container_id:\"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" id:\"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" pid:3374 exited_at:{seconds:1751847311 nanos:851441475}" Jul 7 00:15:11.856948 systemd[1]: cri-containerd-b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152.scope: Deactivated successfully. Jul 7 00:15:11.860162 containerd[1593]: time="2025-07-07T00:15:11.860109209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" id:\"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" pid:2918 exit_status:137 exited_at:{seconds:1751847311 nanos:859243444}" Jul 7 00:15:11.876962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c-rootfs.mount: Deactivated successfully. Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.901410408Z" level=info msg="StopContainer for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" returns successfully" Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.902353460Z" level=info msg="StopPodSandbox for \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\"" Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.902430226Z" level=info msg="Container to stop \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.902441048Z" level=info msg="Container to stop \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.902457288Z" level=info msg="Container to stop \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.902465925Z" level=info msg="Container to stop \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:15:11.902785 containerd[1593]: time="2025-07-07T00:15:11.902477006Z" level=info msg="Container to stop \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:15:11.912254 systemd[1]: cri-containerd-d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db.scope: Deactivated successfully. Jul 7 00:15:11.922835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152-rootfs.mount: Deactivated successfully. Jul 7 00:15:11.934509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db-rootfs.mount: Deactivated successfully. 
Jul 7 00:15:11.954279 containerd[1593]: time="2025-07-07T00:15:11.954012673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" id:\"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" pid:2861 exit_status:137 exited_at:{seconds:1751847311 nanos:912346888}" Jul 7 00:15:11.954673 containerd[1593]: time="2025-07-07T00:15:11.954652425Z" level=info msg="shim disconnected" id=b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152 namespace=k8s.io Jul 7 00:15:11.954752 containerd[1593]: time="2025-07-07T00:15:11.954736065Z" level=warning msg="cleaning up after shim disconnected" id=b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152 namespace=k8s.io Jul 7 00:15:11.958115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152-shm.mount: Deactivated successfully. Jul 7 00:15:11.958407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db-shm.mount: Deactivated successfully. Jul 7 00:15:11.987880 containerd[1593]: time="2025-07-07T00:15:11.954798435Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:15:11.988050 containerd[1593]: time="2025-07-07T00:15:11.955890271Z" level=info msg="shim disconnected" id=d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db namespace=k8s.io Jul 7 00:15:11.988050 containerd[1593]: time="2025-07-07T00:15:11.987962502Z" level=warning msg="cleaning up after shim disconnected" id=d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db namespace=k8s.io Jul 7 00:15:11.988050 containerd[1593]: time="2025-07-07T00:15:11.987972662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:15:11.989926 containerd[1593]: time="2025-07-07T00:15:11.989852444Z" level=info msg="TearDown network for sandbox \"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" successfully" Jul 7 00:15:11.990507 containerd[1593]: time="2025-07-07T00:15:11.990378450Z" level=info msg="StopPodSandbox for \"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" returns successfully" Jul 7 00:15:11.990507 containerd[1593]: time="2025-07-07T00:15:11.990269491Z" level=info msg="received exit event sandbox_id:\"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" exit_status:137 exited_at:{seconds:1751847311 nanos:912346888}" Jul 7 00:15:11.991569 containerd[1593]: time="2025-07-07T00:15:11.990308956Z" level=info msg="received exit event sandbox_id:\"b8884368470fc3168b78ecd9ccbdb2cef22a75a0c883e134d09cd997fbc6e152\" exit_status:137 exited_at:{seconds:1751847311 nanos:859243444}" Jul 7 00:15:11.992481 containerd[1593]: time="2025-07-07T00:15:11.986636979Z" level=info msg="TearDown network for sandbox \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" successfully" Jul 7 00:15:11.992481 containerd[1593]: time="2025-07-07T00:15:11.992366038Z" level=info msg="StopPodSandbox for \"d6a6a37ff8b1a0a95aac8a7efb6f8e9d02d2c2ed64edf7368bdff9e177fa85db\" returns successfully" Jul 7 00:15:12.111826 kubelet[2709]: I0707 00:15:12.111766 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-config-path\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.111826 
kubelet[2709]: I0707 00:15:12.111815 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-bpf-maps\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.111826 kubelet[2709]: I0707 00:15:12.111839 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flps6\" (UniqueName: \"kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-kube-api-access-flps6\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.111826 kubelet[2709]: I0707 00:15:12.111857 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-run\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112546 kubelet[2709]: I0707 00:15:12.111873 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a66540-bae9-49c6-b6da-205ee80eb0ec-cilium-config-path\") pod \"58a66540-bae9-49c6-b6da-205ee80eb0ec\" (UID: \"58a66540-bae9-49c6-b6da-205ee80eb0ec\") " Jul 7 00:15:12.112546 kubelet[2709]: I0707 00:15:12.111892 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-hostproc\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112546 kubelet[2709]: I0707 00:15:12.111907 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-kernel\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112546 kubelet[2709]: I0707 00:15:12.111924 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-cgroup\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112546 kubelet[2709]: I0707 00:15:12.111938 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-hubble-tls\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112546 kubelet[2709]: I0707 00:15:12.111954 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzs7v\" (UniqueName: \"kubernetes.io/projected/58a66540-bae9-49c6-b6da-205ee80eb0ec-kube-api-access-nzs7v\") pod \"58a66540-bae9-49c6-b6da-205ee80eb0ec\" (UID: \"58a66540-bae9-49c6-b6da-205ee80eb0ec\") " Jul 7 00:15:12.112789 kubelet[2709]: I0707 00:15:12.111971 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/247daba8-969a-4f97-b0ed-5fc6839399b8-clustermesh-secrets\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112789 kubelet[2709]: I0707 00:15:12.111984 2709 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-net\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112789 kubelet[2709]: I0707 00:15:12.111998 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-lib-modules\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112789 kubelet[2709]: I0707 00:15:12.112017 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-xtables-lock\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112789 kubelet[2709]: I0707 00:15:12.112031 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cni-path\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112789 kubelet[2709]: I0707 00:15:12.112044 2709 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-etc-cni-netd\") pod \"247daba8-969a-4f97-b0ed-5fc6839399b8\" (UID: \"247daba8-969a-4f97-b0ed-5fc6839399b8\") " Jul 7 00:15:12.112994 kubelet[2709]: I0707 00:15:12.112121 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.112994 kubelet[2709]: I0707 00:15:12.112172 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.112994 kubelet[2709]: I0707 00:15:12.112428 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.113375 kubelet[2709]: I0707 00:15:12.113346 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-hostproc" (OuterVolumeSpecName: "hostproc") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.113375 kubelet[2709]: I0707 00:15:12.113382 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.113475 kubelet[2709]: I0707 00:15:12.113406 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.115357 kubelet[2709]: I0707 00:15:12.115319 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:15:12.115781 kubelet[2709]: I0707 00:15:12.115532 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.115781 kubelet[2709]: I0707 00:15:12.115708 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a66540-bae9-49c6-b6da-205ee80eb0ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58a66540-bae9-49c6-b6da-205ee80eb0ec" (UID: "58a66540-bae9-49c6-b6da-205ee80eb0ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:15:12.115781 kubelet[2709]: I0707 00:15:12.115742 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.115781 kubelet[2709]: I0707 00:15:12.115760 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cni-path" (OuterVolumeSpecName: "cni-path") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.115781 kubelet[2709]: I0707 00:15:12.115784 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:15:12.117699 kubelet[2709]: I0707 00:15:12.117666 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-kube-api-access-flps6" (OuterVolumeSpecName: "kube-api-access-flps6") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "kube-api-access-flps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:15:12.118020 kubelet[2709]: I0707 00:15:12.117976 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a66540-bae9-49c6-b6da-205ee80eb0ec-kube-api-access-nzs7v" (OuterVolumeSpecName: "kube-api-access-nzs7v") pod "58a66540-bae9-49c6-b6da-205ee80eb0ec" (UID: "58a66540-bae9-49c6-b6da-205ee80eb0ec"). InnerVolumeSpecName "kube-api-access-nzs7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:15:12.118343 kubelet[2709]: I0707 00:15:12.118304 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:15:12.119320 kubelet[2709]: I0707 00:15:12.119300 2709 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247daba8-969a-4f97-b0ed-5fc6839399b8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "247daba8-969a-4f97-b0ed-5fc6839399b8" (UID: "247daba8-969a-4f97-b0ed-5fc6839399b8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:15:12.213196 kubelet[2709]: I0707 00:15:12.213125 2709 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213196 kubelet[2709]: I0707 00:15:12.213176 2709 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213196 kubelet[2709]: I0707 00:15:12.213190 2709 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213196 kubelet[2709]: I0707 00:15:12.213199 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213196 kubelet[2709]: I0707 00:15:12.213212 2709 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213222 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flps6\" (UniqueName: \"kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-kube-api-access-flps6\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213230 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213239 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a66540-bae9-49c6-b6da-205ee80eb0ec-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213246 2709 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213254 2709 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213267 2709 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/247daba8-969a-4f97-b0ed-5fc6839399b8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213277 2709 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213464 kubelet[2709]: I0707 00:15:12.213285 2709 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzs7v\" (UniqueName: \"kubernetes.io/projected/58a66540-bae9-49c6-b6da-205ee80eb0ec-kube-api-access-nzs7v\") on node \"localhost\" DevicePath 
\"\"" Jul 7 00:15:12.213682 kubelet[2709]: I0707 00:15:12.213293 2709 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/247daba8-969a-4f97-b0ed-5fc6839399b8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213682 kubelet[2709]: I0707 00:15:12.213301 2709 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.213682 kubelet[2709]: I0707 00:15:12.213308 2709 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/247daba8-969a-4f97-b0ed-5fc6839399b8-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 7 00:15:12.592866 systemd[1]: Removed slice kubepods-burstable-pod247daba8_969a_4f97_b0ed_5fc6839399b8.slice - libcontainer container kubepods-burstable-pod247daba8_969a_4f97_b0ed_5fc6839399b8.slice. Jul 7 00:15:12.592969 systemd[1]: kubepods-burstable-pod247daba8_969a_4f97_b0ed_5fc6839399b8.slice: Consumed 6.553s CPU time, 124.7M memory peak, 308K read from disk, 13.3M written to disk. Jul 7 00:15:12.594241 systemd[1]: Removed slice kubepods-besteffort-pod58a66540_bae9_49c6_b6da_205ee80eb0ec.slice - libcontainer container kubepods-besteffort-pod58a66540_bae9_49c6_b6da_205ee80eb0ec.slice. Jul 7 00:15:12.801903 kubelet[2709]: I0707 00:15:12.801814 2709 scope.go:117] "RemoveContainer" containerID="2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c" Jul 7 00:15:12.806295 containerd[1593]: time="2025-07-07T00:15:12.806246029Z" level=info msg="RemoveContainer for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\"" Jul 7 00:15:12.812025 containerd[1593]: time="2025-07-07T00:15:12.811956176Z" level=info msg="RemoveContainer for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" returns successfully" Jul 7 00:15:12.812269 kubelet[2709]: I0707 00:15:12.812243 2709 scope.go:117] "RemoveContainer" containerID="037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb" Jul 7 00:15:12.814434 containerd[1593]: time="2025-07-07T00:15:12.814410394Z" level=info msg="RemoveContainer for \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\"" Jul 7 00:15:12.823691 systemd[1]: var-lib-kubelet-pods-58a66540\x2dbae9\x2d49c6\x2db6da\x2d205ee80eb0ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnzs7v.mount: Deactivated successfully. Jul 7 00:15:12.824265 systemd[1]: var-lib-kubelet-pods-247daba8\x2d969a\x2d4f97\x2db0ed\x2d5fc6839399b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dflps6.mount: Deactivated successfully. Jul 7 00:15:12.824384 systemd[1]: var-lib-kubelet-pods-247daba8\x2d969a\x2d4f97\x2db0ed\x2d5fc6839399b8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:15:12.824481 systemd[1]: var-lib-kubelet-pods-247daba8\x2d969a\x2d4f97\x2db0ed\x2d5fc6839399b8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 7 00:15:12.839369 containerd[1593]: time="2025-07-07T00:15:12.839313221Z" level=info msg="RemoveContainer for \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" returns successfully" Jul 7 00:15:12.839646 kubelet[2709]: I0707 00:15:12.839594 2709 scope.go:117] "RemoveContainer" containerID="da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1" Jul 7 00:15:12.842217 containerd[1593]: time="2025-07-07T00:15:12.842180157Z" level=info msg="RemoveContainer for \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\"" Jul 7 00:15:12.847060 containerd[1593]: time="2025-07-07T00:15:12.846957764Z" level=info msg="RemoveContainer for \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" returns successfully" Jul 7 00:15:12.847275 kubelet[2709]: I0707 00:15:12.847236 2709 scope.go:117] "RemoveContainer" containerID="386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a" Jul 7 00:15:12.848874 containerd[1593]: time="2025-07-07T00:15:12.848837794Z" level=info msg="RemoveContainer for \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\"" Jul 7 00:15:12.852626 containerd[1593]: time="2025-07-07T00:15:12.852602926Z" level=info msg="RemoveContainer for \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" returns successfully" Jul 7 00:15:12.852844 kubelet[2709]: I0707 00:15:12.852775 2709 scope.go:117] "RemoveContainer" containerID="fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0" Jul 7 00:15:12.854183 containerd[1593]: time="2025-07-07T00:15:12.854153888Z" level=info msg="RemoveContainer for \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\"" Jul 7 00:15:12.857513 containerd[1593]: time="2025-07-07T00:15:12.857480473Z" level=info msg="RemoveContainer for \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" returns successfully" Jul 7 00:15:12.857638 kubelet[2709]: I0707 00:15:12.857619 2709 scope.go:117] "RemoveContainer" containerID="2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c" Jul 7 00:15:12.857845 containerd[1593]: time="2025-07-07T00:15:12.857765567Z" level=error msg="ContainerStatus for \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\": not found" Jul 7 00:15:12.861883 kubelet[2709]: E0707 00:15:12.861835 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\": not found" containerID="2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c" Jul 7 00:15:12.862078 kubelet[2709]: I0707 00:15:12.861871 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c"} err="failed to get container status \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c47088cf6439341f6dc75c7c9206f95f0d47fc73274011843e3f1528744ea3c\": not found" Jul 7 00:15:12.862078 kubelet[2709]: I0707 00:15:12.861942 2709 scope.go:117] "RemoveContainer" containerID="037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb" Jul 7 00:15:12.862201 containerd[1593]: time="2025-07-07T00:15:12.862152708Z" 
level=error msg="ContainerStatus for \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\": not found" Jul 7 00:15:12.862297 kubelet[2709]: E0707 00:15:12.862275 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\": not found" containerID="037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb" Jul 7 00:15:12.862341 kubelet[2709]: I0707 00:15:12.862294 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb"} err="failed to get container status \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"037714d23e51c996daf71e88972e9e08404d64c01b22e5c061a259c6a3c6dfdb\": not found" Jul 7 00:15:12.862341 kubelet[2709]: I0707 00:15:12.862306 2709 scope.go:117] "RemoveContainer" containerID="da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1" Jul 7 00:15:12.862455 containerd[1593]: time="2025-07-07T00:15:12.862430799Z" level=error msg="ContainerStatus for \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\": not found" Jul 7 00:15:12.862534 kubelet[2709]: E0707 00:15:12.862513 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\": not found" containerID="da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1" Jul 7 00:15:12.862534 kubelet[2709]: I0707 00:15:12.862531 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1"} err="failed to get container status \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"da6a8430e9fd71e268e14d5a1a44e12eff7454372bde2c63b686e12bbd75efd1\": not found" Jul 7 00:15:12.862658 kubelet[2709]: I0707 00:15:12.862549 2709 scope.go:117] "RemoveContainer" containerID="386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a" Jul 7 00:15:12.862916 containerd[1593]: time="2025-07-07T00:15:12.862857584Z" level=error msg="ContainerStatus for \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\": not found" Jul 7 00:15:12.863085 kubelet[2709]: E0707 00:15:12.863048 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\": not found" containerID="386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a" Jul 7 00:15:12.863119 kubelet[2709]: I0707 00:15:12.863080 2709 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a"} err="failed to get container status \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\": rpc error: code = NotFound desc = an error occurred when try to find container \"386f2a77926f2e4268d86b3e91a93bf27b6f7b9bfef907da3588216e8182d97a\": not found" Jul 7 00:15:12.863119 kubelet[2709]: I0707 00:15:12.863106 2709 scope.go:117] "RemoveContainer" containerID="fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0" Jul 7 00:15:12.863321 containerd[1593]: time="2025-07-07T00:15:12.863291263Z" level=error msg="ContainerStatus for \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\": not found" Jul 7 00:15:12.863405 kubelet[2709]: E0707 00:15:12.863381 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\": not found" containerID="fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0" Jul 7 00:15:12.863405 kubelet[2709]: I0707 00:15:12.863398 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0"} err="failed to get container status \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa93afe9b25c6e56604c9aae2394c97639558cec4f29854aa163f060396860e0\": not found" Jul 7 00:15:12.863481 kubelet[2709]: I0707 00:15:12.863411 2709 scope.go:117] "RemoveContainer" containerID="a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8" Jul 7 00:15:12.864664 containerd[1593]: time="2025-07-07T00:15:12.864638015Z" level=info msg="RemoveContainer for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\"" Jul 7 00:15:12.868456 containerd[1593]: time="2025-07-07T00:15:12.868404159Z" level=info msg="RemoveContainer for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" returns successfully" Jul 7 00:15:12.868599 kubelet[2709]: I0707 00:15:12.868547 2709 scope.go:117] "RemoveContainer" containerID="a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8" Jul 7 00:15:12.868819 containerd[1593]: time="2025-07-07T00:15:12.868735611Z" level=error msg="ContainerStatus for \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\": not found" Jul 7 00:15:12.868923 kubelet[2709]: E0707 00:15:12.868881 2709 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\": not found" containerID="a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8" Jul 7 00:15:12.868923 kubelet[2709]: I0707 00:15:12.868910 2709 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8"} err="failed to get container status 
\"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8c1bb8e07deaa564e36d0988e6cb71855563cb2241b605de9f73dc174d4dcd8\": not found" Jul 7 00:15:13.708926 sshd[4341]: Connection closed by 10.0.0.1 port 46322 Jul 7 00:15:13.709521 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:13.719032 systemd[1]: sshd@24-10.0.0.122:22-10.0.0.1:46322.service: Deactivated successfully. Jul 7 00:15:13.721695 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:15:13.722489 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:15:13.725894 systemd[1]: Started sshd@25-10.0.0.122:22-10.0.0.1:46334.service - OpenSSH per-connection server daemon (10.0.0.1:46334). Jul 7 00:15:13.726744 systemd-logind[1564]: Removed session 25. Jul 7 00:15:13.780828 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 46334 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:15:13.782637 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:13.788097 systemd-logind[1564]: New session 26 of user core. Jul 7 00:15:13.799817 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:15:14.476978 sshd[4492]: Connection closed by 10.0.0.1 port 46334 Jul 7 00:15:14.477465 sshd-session[4490]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:14.488286 systemd[1]: sshd@25-10.0.0.122:22-10.0.0.1:46334.service: Deactivated successfully. Jul 7 00:15:14.491439 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:15:14.493773 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:15:14.494622 kubelet[2709]: E0707 00:15:14.494549 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58a66540-bae9-49c6-b6da-205ee80eb0ec" containerName="cilium-operator" Jul 7 00:15:14.494622 kubelet[2709]: E0707 00:15:14.494615 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" containerName="clean-cilium-state" Jul 7 00:15:14.494963 kubelet[2709]: E0707 00:15:14.494626 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" containerName="cilium-agent" Jul 7 00:15:14.494963 kubelet[2709]: E0707 00:15:14.494638 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" containerName="mount-cgroup" Jul 7 00:15:14.494963 kubelet[2709]: E0707 00:15:14.494646 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" containerName="apply-sysctl-overwrites" Jul 7 00:15:14.494963 kubelet[2709]: E0707 00:15:14.494659 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" containerName="mount-bpf-fs" Jul 7 00:15:14.494963 kubelet[2709]: I0707 00:15:14.494688 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a66540-bae9-49c6-b6da-205ee80eb0ec" containerName="cilium-operator" Jul 7 00:15:14.494963 kubelet[2709]: I0707 00:15:14.494697 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" containerName="cilium-agent" Jul 7 00:15:14.497733 systemd-logind[1564]: Removed session 26. 
Jul 7 00:15:14.500914 systemd[1]: Started sshd@26-10.0.0.122:22-10.0.0.1:46344.service - OpenSSH per-connection server daemon (10.0.0.1:46344). Jul 7 00:15:14.518979 systemd[1]: Created slice kubepods-burstable-pod6978a50b_eb01_4e8c_ab6e_786e570d247e.slice - libcontainer container kubepods-burstable-pod6978a50b_eb01_4e8c_ab6e_786e570d247e.slice. Jul 7 00:15:14.561895 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 46344 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:15:14.563738 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:14.568675 systemd-logind[1564]: New session 27 of user core. Jul 7 00:15:14.577775 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 00:15:14.586608 kubelet[2709]: I0707 00:15:14.586483 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247daba8-969a-4f97-b0ed-5fc6839399b8" path="/var/lib/kubelet/pods/247daba8-969a-4f97-b0ed-5fc6839399b8/volumes" Jul 7 00:15:14.587354 kubelet[2709]: I0707 00:15:14.587323 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a66540-bae9-49c6-b6da-205ee80eb0ec" path="/var/lib/kubelet/pods/58a66540-bae9-49c6-b6da-205ee80eb0ec/volumes" Jul 7 00:15:14.627059 kubelet[2709]: I0707 00:15:14.627022 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6978a50b-eb01-4e8c-ab6e-786e570d247e-hubble-tls\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627133 kubelet[2709]: I0707 00:15:14.627064 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-lib-modules\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627133 kubelet[2709]: I0707 00:15:14.627084 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-bpf-maps\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627133 kubelet[2709]: I0707 00:15:14.627099 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-cilium-cgroup\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627133 kubelet[2709]: I0707 00:15:14.627116 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6978a50b-eb01-4e8c-ab6e-786e570d247e-cilium-config-path\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627249 kubelet[2709]: I0707 00:15:14.627137 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6978a50b-eb01-4e8c-ab6e-786e570d247e-cilium-ipsec-secrets\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627249 kubelet[2709]: I0707 
00:15:14.627179 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-host-proc-sys-net\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627249 kubelet[2709]: I0707 00:15:14.627194 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcj87\" (UniqueName: \"kubernetes.io/projected/6978a50b-eb01-4e8c-ab6e-786e570d247e-kube-api-access-pcj87\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627249 kubelet[2709]: I0707 00:15:14.627208 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-hostproc\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627344 kubelet[2709]: I0707 00:15:14.627243 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-xtables-lock\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627344 kubelet[2709]: I0707 00:15:14.627275 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6978a50b-eb01-4e8c-ab6e-786e570d247e-clustermesh-secrets\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627344 kubelet[2709]: I0707 00:15:14.627291 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-cilium-run\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627344 kubelet[2709]: I0707 00:15:14.627307 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-cni-path\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627344 kubelet[2709]: I0707 00:15:14.627337 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-etc-cni-netd\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.627522 kubelet[2709]: I0707 00:15:14.627354 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6978a50b-eb01-4e8c-ab6e-786e570d247e-host-proc-sys-kernel\") pod \"cilium-g49fl\" (UID: \"6978a50b-eb01-4e8c-ab6e-786e570d247e\") " pod="kube-system/cilium-g49fl" Jul 7 00:15:14.631412 sshd[4506]: Connection closed by 10.0.0.1 port 46344 Jul 7 00:15:14.632038 sshd-session[4504]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:14.641701 systemd[1]: 
sshd@26-10.0.0.122:22-10.0.0.1:46344.service: Deactivated successfully. Jul 7 00:15:14.644372 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:15:14.645372 systemd-logind[1564]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:15:14.649940 systemd[1]: Started sshd@27-10.0.0.122:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358). Jul 7 00:15:14.650750 systemd-logind[1564]: Removed session 27. Jul 7 00:15:14.699465 sshd[4514]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:83ntyEUnau8DogEBCDVWnSOs0UDR1Qy+tUsGmqQZtGw Jul 7 00:15:14.701361 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:15:14.706979 systemd-logind[1564]: New session 28 of user core. Jul 7 00:15:14.716760 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 00:15:14.826612 kubelet[2709]: E0707 00:15:14.826021 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:14.827064 containerd[1593]: time="2025-07-07T00:15:14.827019314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g49fl,Uid:6978a50b-eb01-4e8c-ab6e-786e570d247e,Namespace:kube-system,Attempt:0,}" Jul 7 00:15:14.846352 containerd[1593]: time="2025-07-07T00:15:14.846294738Z" level=info msg="connecting to shim 26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f" address="unix:///run/containerd/s/3f131b141fe4288b71bd4daea646eee5a3c48eac8a69292f3c67d4882e7fbb92" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:15:14.875791 systemd[1]: Started cri-containerd-26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f.scope - libcontainer container 26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f. 
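[Annotation] The VerifyControllerAttachedVolume entries above enumerate the volume set of pod cilium-g49fl: hostPath mounts (bpf-maps, lib-modules, cni-path, ...), a configMap (cilium-config-path), secrets (clustermesh-secrets, cilium-ipsec-secrets), and projected volumes (hubble-tls, the kube-api-access token). A hedged sketch of three of those volume sources as k8s.io/api/core/v1 structs follows; the log only shows volume names and the pod UID, so the hostPath "/sys/fs/bpf" and the object names "cilium-clustermesh" and "cilium-config" are assumptions taken from Cilium's stock manifests, not from this journal.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{
			// hostPath volume, as in the "kubernetes.io/host-path" entries above.
			Name: "bpf-maps",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"}, // assumed path
			},
		},
		{
			// secret volume, as in the "kubernetes.io/secret" entries above.
			Name: "clustermesh-secrets",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}, // assumed name
			},
		},
		{
			// configMap volume, as in the "kubernetes.io/configmap" entry above.
			Name: "cilium-config-path",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}, // assumed name
				},
			},
		},
	}
	for _, v := range vols {
		fmt.Printf("%s -> %+v\n", v.Name, v.VolumeSource)
	}
}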
Jul 7 00:15:14.904548 containerd[1593]: time="2025-07-07T00:15:14.904500527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g49fl,Uid:6978a50b-eb01-4e8c-ab6e-786e570d247e,Namespace:kube-system,Attempt:0,} returns sandbox id \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\"" Jul 7 00:15:14.905371 kubelet[2709]: E0707 00:15:14.905329 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:14.907419 containerd[1593]: time="2025-07-07T00:15:14.907385833Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:15:14.915883 containerd[1593]: time="2025-07-07T00:15:14.915839776Z" level=info msg="Container 4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:14.927733 containerd[1593]: time="2025-07-07T00:15:14.927671626Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\"" Jul 7 00:15:14.928642 containerd[1593]: time="2025-07-07T00:15:14.928556063Z" level=info msg="StartContainer for \"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\"" Jul 7 00:15:14.929849 containerd[1593]: time="2025-07-07T00:15:14.929807933Z" level=info msg="connecting to shim 4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06" address="unix:///run/containerd/s/3f131b141fe4288b71bd4daea646eee5a3c48eac8a69292f3c67d4882e7fbb92" protocol=ttrpc version=3 Jul 7 00:15:14.956804 systemd[1]: Started cri-containerd-4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06.scope - libcontainer container 4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06. Jul 7 00:15:14.994153 containerd[1593]: time="2025-07-07T00:15:14.994098463Z" level=info msg="StartContainer for \"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\" returns successfully" Jul 7 00:15:15.004023 systemd[1]: cri-containerd-4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06.scope: Deactivated successfully. 
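[Annotation] The systemd unit names in this stretch of the journal follow the CRI systemd cgroup-driver convention: dashes in the pod UID become underscores in the pod slice ("Created slice kubepods-burstable-pod6978a50b_eb01_4e8c_ab6e_786e570d247e.slice" above), and every container runs in a transient "cri-containerd-<id>.scope" under that slice. A small sketch reconstructing both names from values taken directly from the log:

package main

import (
	"fmt"
	"strings"
)

func main() {
	podUID := "6978a50b-eb01-4e8c-ab6e-786e570d247e"                                  // from the journal
	containerID := "4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06" // mount-cgroup container

	// Pod UID dashes are rewritten to underscores for the slice name.
	slice := fmt.Sprintf("kubepods-burstable-pod%s.slice", strings.ReplaceAll(podUID, "-", "_"))
	// Each container gets its own transient scope named after its ID.
	scope := fmt.Sprintf("cri-containerd-%s.scope", containerID)

	fmt.Println(slice) // matches the "Created slice ..." entry above
	fmt.Println(scope) // matches the "Started cri-containerd-...scope" entry above
}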
Jul 7 00:15:15.005511 containerd[1593]: time="2025-07-07T00:15:15.005438721Z" level=info msg="received exit event container_id:\"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\" id:\"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\" pid:4584 exited_at:{seconds:1751847315 nanos:5055220}" Jul 7 00:15:15.005703 containerd[1593]: time="2025-07-07T00:15:15.005517361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\" id:\"4db3dfed7add69d477d110c9a1e6299904c3e86155061f945d5df78cb4adfb06\" pid:4584 exited_at:{seconds:1751847315 nanos:5055220}" Jul 7 00:15:15.814615 kubelet[2709]: E0707 00:15:15.814549 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:15.817388 containerd[1593]: time="2025-07-07T00:15:15.816543345Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:15:15.828849 containerd[1593]: time="2025-07-07T00:15:15.827804796Z" level=info msg="Container 8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:15.835499 containerd[1593]: time="2025-07-07T00:15:15.835452184Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\"" Jul 7 00:15:15.836147 containerd[1593]: time="2025-07-07T00:15:15.836086063Z" level=info msg="StartContainer for \"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\"" Jul 7 00:15:15.837081 containerd[1593]: time="2025-07-07T00:15:15.837054060Z" level=info msg="connecting to shim 8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41" address="unix:///run/containerd/s/3f131b141fe4288b71bd4daea646eee5a3c48eac8a69292f3c67d4882e7fbb92" protocol=ttrpc version=3 Jul 7 00:15:15.869835 systemd[1]: Started cri-containerd-8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41.scope - libcontainer container 8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41. Jul 7 00:15:15.903684 containerd[1593]: time="2025-07-07T00:15:15.903637490Z" level=info msg="StartContainer for \"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\" returns successfully" Jul 7 00:15:15.908741 systemd[1]: cri-containerd-8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41.scope: Deactivated successfully. 
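[Annotation] The exit events above carry a protobuf-style timestamp, exited_at:{seconds:1751847315 nanos:5055220}. Converting it confirms it is the same instant the journal stamps on those lines (00:15:15.005 UTC); a two-line Go check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event above: seconds since the Unix epoch plus nanos.
	exitedAt := time.Unix(1751847315, 5055220).UTC()
	fmt.Println(exitedAt.Format("2006-01-02T15:04:05.000Z07:00"))
	// prints 2025-07-07T00:15:15.005Z, matching the journal timestamps above
}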
Jul 7 00:15:15.910033 containerd[1593]: time="2025-07-07T00:15:15.909993043Z" level=info msg="received exit event container_id:\"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\" id:\"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\" pid:4630 exited_at:{seconds:1751847315 nanos:909726656}" Jul 7 00:15:15.910123 containerd[1593]: time="2025-07-07T00:15:15.910043059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\" id:\"8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41\" pid:4630 exited_at:{seconds:1751847315 nanos:909726656}" Jul 7 00:15:15.931129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c4aaa648858af33deadc051e81131bf2dfafe1338f5cc46b664d89d4fbd8a41-rootfs.mount: Deactivated successfully. Jul 7 00:15:16.633810 kubelet[2709]: E0707 00:15:16.633756 2709 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:15:16.818001 kubelet[2709]: E0707 00:15:16.817953 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:16.819543 containerd[1593]: time="2025-07-07T00:15:16.819494461Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:15:16.834127 containerd[1593]: time="2025-07-07T00:15:16.834049374Z" level=info msg="Container 645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:16.850811 containerd[1593]: time="2025-07-07T00:15:16.850759876Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\"" Jul 7 00:15:16.851434 containerd[1593]: time="2025-07-07T00:15:16.851405869Z" level=info msg="StartContainer for \"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\"" Jul 7 00:15:16.853280 containerd[1593]: time="2025-07-07T00:15:16.853240957Z" level=info msg="connecting to shim 645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461" address="unix:///run/containerd/s/3f131b141fe4288b71bd4daea646eee5a3c48eac8a69292f3c67d4882e7fbb92" protocol=ttrpc version=3 Jul 7 00:15:16.880716 systemd[1]: Started cri-containerd-645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461.scope - libcontainer container 645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461. Jul 7 00:15:16.946124 systemd[1]: cri-containerd-645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461.scope: Deactivated successfully. 
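[Annotation] The mount-bpf-fs init container created above exists to mount the BPF filesystem on the host. A hedged sketch of roughly what such a step does, assuming the conventional mount point /sys/fs/bpf (the log does not show the container's command); it uses golang.org/x/sys/unix and must run as root:

package main

import (
	"errors"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount bpffs at /sys/fs/bpf; EBUSY typically means it is already mounted.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		if errors.Is(err, unix.EBUSY) {
			log.Println("bpffs already mounted; nothing to do")
			return
		}
		log.Fatal(err)
	}
	log.Println("mounted bpffs at /sys/fs/bpf")
}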
Jul 7 00:15:16.947162 containerd[1593]: time="2025-07-07T00:15:16.946972868Z" level=info msg="StartContainer for \"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\" returns successfully" Jul 7 00:15:16.947747 containerd[1593]: time="2025-07-07T00:15:16.947711137Z" level=info msg="received exit event container_id:\"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\" id:\"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\" pid:4674 exited_at:{seconds:1751847316 nanos:947220942}" Jul 7 00:15:16.948921 containerd[1593]: time="2025-07-07T00:15:16.948882339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\" id:\"645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461\" pid:4674 exited_at:{seconds:1751847316 nanos:947220942}" Jul 7 00:15:16.971108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645616be66b18684f89045f67537f5b06ecdaec775358a7b07014cb732fac461-rootfs.mount: Deactivated successfully. Jul 7 00:15:17.822784 kubelet[2709]: E0707 00:15:17.822730 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:17.825229 containerd[1593]: time="2025-07-07T00:15:17.824573815Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:15:17.834507 containerd[1593]: time="2025-07-07T00:15:17.834444809Z" level=info msg="Container 1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:17.845589 containerd[1593]: time="2025-07-07T00:15:17.845517713Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\"" Jul 7 00:15:17.846122 containerd[1593]: time="2025-07-07T00:15:17.846075125Z" level=info msg="StartContainer for \"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\"" Jul 7 00:15:17.847225 containerd[1593]: time="2025-07-07T00:15:17.847167257Z" level=info msg="connecting to shim 1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486" address="unix:///run/containerd/s/3f131b141fe4288b71bd4daea646eee5a3c48eac8a69292f3c67d4882e7fbb92" protocol=ttrpc version=3 Jul 7 00:15:17.870826 systemd[1]: Started cri-containerd-1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486.scope - libcontainer container 1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486. Jul 7 00:15:17.902165 systemd[1]: cri-containerd-1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486.scope: Deactivated successfully. 
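[Annotation] Each init container in this pod follows the same CreateContainer, StartContainer, exit-event, scope-deactivation, rootfs-unmount cycle. A hedged sketch of observing one such exit with the containerd Go client; the socket path and the "k8s.io" namespace are the defaults on a kubelet host, the container ID is the clean-cilium-state container from the entries below, and the v1 import path "github.com/containerd/containerd" is assumed (containerd 2.x moved the client to a /v2 path).

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	const id = "1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486"
	c, err := client.LoadContainer(ctx, id)
	if err != nil {
		log.Fatal(err) // NotFound once the container is gone, as earlier in the log
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Register the waiter before the task exits to avoid missing a fast exit.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	st := <-exitCh // corresponds to the "received exit event" entries above
	code, exitedAt, err := st.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("task %s exited with code %d at %s\n", id[:12], code, exitedAt)
}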
Jul 7 00:15:17.904519 containerd[1593]: time="2025-07-07T00:15:17.904479411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\" id:\"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\" pid:4712 exited_at:{seconds:1751847317 nanos:903309150}" Jul 7 00:15:17.906797 containerd[1593]: time="2025-07-07T00:15:17.906761701Z" level=info msg="received exit event container_id:\"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\" id:\"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\" pid:4712 exited_at:{seconds:1751847317 nanos:903309150}" Jul 7 00:15:17.915854 containerd[1593]: time="2025-07-07T00:15:17.915806319Z" level=info msg="StartContainer for \"1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486\" returns successfully" Jul 7 00:15:17.928658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ee6e5d709e8e7e31da42b28bae59ae936681e34484dbb9ced7795168d2a4486-rootfs.mount: Deactivated successfully. Jul 7 00:15:18.342463 kubelet[2709]: I0707 00:15:18.342388 2709 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:15:18Z","lastTransitionTime":"2025-07-07T00:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 00:15:18.828457 kubelet[2709]: E0707 00:15:18.828415 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:18.830777 containerd[1593]: time="2025-07-07T00:15:18.830203893Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:15:18.841410 containerd[1593]: time="2025-07-07T00:15:18.841372521Z" level=info msg="Container 79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:15:18.847312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213881517.mount: Deactivated successfully. Jul 7 00:15:18.851149 containerd[1593]: time="2025-07-07T00:15:18.851095696Z" level=info msg="CreateContainer within sandbox \"26e8fad76f8fdffe388ace68096545e290601eb538e6d2801969ae0efda18a5f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\"" Jul 7 00:15:18.851726 containerd[1593]: time="2025-07-07T00:15:18.851689698Z" level=info msg="StartContainer for \"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\"" Jul 7 00:15:18.852542 containerd[1593]: time="2025-07-07T00:15:18.852498188Z" level=info msg="connecting to shim 79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f" address="unix:///run/containerd/s/3f131b141fe4288b71bd4daea646eee5a3c48eac8a69292f3c67d4882e7fbb92" protocol=ttrpc version=3 Jul 7 00:15:18.884882 systemd[1]: Started cri-containerd-79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f.scope - libcontainer container 79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f. 
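[Annotation] The setters.go entry below ("Node became not ready ... cni plugin not initialized") is kubelet flipping the node's Ready condition to False while Cilium's CNI is being replaced. A hedged client-go sketch for reading that condition from the API server; it assumes a kubeconfig at the default home path, and the node name "localhost" is taken from the log entry itself.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// During the CNI swap this prints Ready=False reason=KubeletNotReady,
			// mirroring the condition in the journal entry below.
			fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}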
Jul 7 00:15:18.924357 containerd[1593]: time="2025-07-07T00:15:18.924301657Z" level=info msg="StartContainer for \"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" returns successfully" Jul 7 00:15:19.002481 containerd[1593]: time="2025-07-07T00:15:19.002418854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" id:\"745a4e4b3658fc12607498bf2534804431bc55f1c58af3c7780cfc5c4004337f\" pid:4781 exited_at:{seconds:1751847319 nanos:1944049}" Jul 7 00:15:19.395642 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 00:15:19.834478 kubelet[2709]: E0707 00:15:19.834360 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:19.850606 kubelet[2709]: I0707 00:15:19.850310 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g49fl" podStartSLOduration=5.850267742 podStartE2EDuration="5.850267742s" podCreationTimestamp="2025-07-07 00:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:15:19.850201285 +0000 UTC m=+93.369537456" watchObservedRunningTime="2025-07-07 00:15:19.850267742 +0000 UTC m=+93.369603843" Jul 7 00:15:20.837165 kubelet[2709]: E0707 00:15:20.837099 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:21.055762 containerd[1593]: time="2025-07-07T00:15:21.055715770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" id:\"4a71f270b85755473db4a0ad358c01ae49c210209118f72f9d5e2e840bcbe007\" pid:4923 exit_status:1 exited_at:{seconds:1751847321 nanos:54741557}" Jul 7 00:15:22.514875 systemd-networkd[1477]: lxc_health: Link UP Jul 7 00:15:22.516357 systemd-networkd[1477]: lxc_health: Gained carrier Jul 7 00:15:22.829419 kubelet[2709]: E0707 00:15:22.829356 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:22.842695 kubelet[2709]: E0707 00:15:22.842660 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:23.182573 containerd[1593]: time="2025-07-07T00:15:23.182406170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" id:\"7256ef72342d369965b66ee96f4e93e94263ebbb3be2beb16a802ef42009f3ea\" pid:5310 exited_at:{seconds:1751847323 nanos:181540694}" Jul 7 00:15:23.585009 kubelet[2709]: E0707 00:15:23.584947 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:23.844851 kubelet[2709]: E0707 00:15:23.844719 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:15:24.002847 systemd-networkd[1477]: lxc_health: Gained IPv6LL Jul 7 00:15:25.289130 
containerd[1593]: time="2025-07-07T00:15:25.289083987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" id:\"bd914fe52d2b248f447e20bcfcaaf575a20591e88f9be9812486c0b0fa668841\" pid:5345 exited_at:{seconds:1751847325 nanos:288026759}" Jul 7 00:15:27.371266 containerd[1593]: time="2025-07-07T00:15:27.371143414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" id:\"3927c7972b64dd39e8f3e5689a79119d2fe232577438665f97d841bdbf0c1d74\" pid:5378 exited_at:{seconds:1751847327 nanos:370682808}" Jul 7 00:15:29.488830 containerd[1593]: time="2025-07-07T00:15:29.488714099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79f04c0617b9c5923c2be5944a8dcb57ff8b1da3d4779d0519091a1869c15a2f\" id:\"47b09d31fcd5167d56d515da19e2cd5f628dff7dd8b3948a53f5bd9dbf705d00\" pid:5402 exited_at:{seconds:1751847329 nanos:488345489}" Jul 7 00:15:29.491376 kubelet[2709]: E0707 00:15:29.490750 2709 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46328->127.0.0.1:43363: write tcp 127.0.0.1:46328->127.0.0.1:43363: write: broken pipe Jul 7 00:15:29.495417 sshd[4516]: Connection closed by 10.0.0.1 port 46358 Jul 7 00:15:29.495801 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Jul 7 00:15:29.498949 systemd[1]: sshd@27-10.0.0.122:22-10.0.0.1:46358.service: Deactivated successfully. Jul 7 00:15:29.501110 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 00:15:29.502699 systemd-logind[1564]: Session 28 logged out. Waiting for processes to exit. Jul 7 00:15:29.504414 systemd-logind[1564]: Removed session 28. Jul 7 00:15:29.584418 kubelet[2709]: E0707 00:15:29.584363 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
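[Annotation] The recurring dns.go warning throughout this section ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the host resolv.conf lists more nameservers than kubelet will propagate into pod resolv.conf files, so the extras are silently dropped. A hedged sketch of that truncation logic follows; the cap of 3 matches the three servers kept in the log and glibc's resolver limit, but treat the constant as an assumption rather than kubelet's exact code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNameservers = 3 // assumed cap, consistent with the applied line above

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line from resolv.conf.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		// Analogous to the kubelet warning: keep the first three, drop the rest.
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Println("nameservers:", servers)
	}
}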