Jul 10 00:21:10.889269 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:21:10.889290 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:21:10.889301 kernel: BIOS-provided physical RAM map:
Jul 10 00:21:10.889308 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:21:10.889314 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 10 00:21:10.889321 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 10 00:21:10.889328 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 10 00:21:10.889335 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 10 00:21:10.889348 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 10 00:21:10.889355 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 10 00:21:10.889361 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 10 00:21:10.889368 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 10 00:21:10.889374 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 10 00:21:10.889381 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 10 00:21:10.889391 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 10 00:21:10.889398 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 10 00:21:10.889408 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 10 00:21:10.889415 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 10 00:21:10.889422 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 10 00:21:10.889429 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 10 00:21:10.889436 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 10 00:21:10.889443 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 10 00:21:10.889450 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 10 00:21:10.889457 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:21:10.889464 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 10 00:21:10.889473 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:21:10.889480 kernel: NX (Execute Disable) protection: active
Jul 10 00:21:10.889487 kernel: APIC: Static calls initialized
Jul 10 00:21:10.889494 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 10 00:21:10.889501 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 10 00:21:10.889508 kernel: extended physical RAM map:
Jul 10 00:21:10.889515 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:21:10.889522 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 10 00:21:10.889529 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 10 00:21:10.889550 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 10 00:21:10.889557 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 10 00:21:10.889566 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 10 00:21:10.889573 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 10 00:21:10.889580 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 10 00:21:10.889588 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 10 00:21:10.889598 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 10 00:21:10.889605 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 10 00:21:10.889614 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 10 00:21:10.889622 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 10 00:21:10.889629 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 10 00:21:10.889636 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 10 00:21:10.889644 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 10 00:21:10.889651 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 10 00:21:10.889658 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 10 00:21:10.889666 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 10 00:21:10.889673 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 10 00:21:10.889682 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 10 00:21:10.889690 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 10 00:21:10.889697 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 10 00:21:10.889704 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 10 00:21:10.889711 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:21:10.889719 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 10 00:21:10.889726 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:21:10.889735 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:21:10.889743 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 10 00:21:10.889750 kernel: random: crng init done
Jul 10 00:21:10.889760 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 10 00:21:10.889767 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 10 00:21:10.889778 kernel: secureboot: Secure boot disabled
Jul 10 00:21:10.889786 kernel: SMBIOS 2.8 present.
Jul 10 00:21:10.889801 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 10 00:21:10.889808 kernel: DMI: Memory slots populated: 1/1
Jul 10 00:21:10.889816 kernel: Hypervisor detected: KVM
Jul 10 00:21:10.889823 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:21:10.889831 kernel: kvm-clock: using sched offset of 4882742902 cycles
Jul 10 00:21:10.889838 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:21:10.889846 kernel: tsc: Detected 2794.750 MHz processor
Jul 10 00:21:10.889854 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:21:10.889861 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:21:10.889870 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 10 00:21:10.889878 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 10 00:21:10.889885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:21:10.889893 kernel: Using GB pages for direct mapping
Jul 10 00:21:10.889900 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:21:10.889908 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 10 00:21:10.889915 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:21:10.889923 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:21:10.889930 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:21:10.889940 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 10 00:21:10.889947 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:21:10.889955 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:21:10.889962 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:21:10.889969 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:21:10.889977 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 10 00:21:10.889984 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 10 00:21:10.889992 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 10 00:21:10.890001 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 10 00:21:10.890009 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 10 00:21:10.890016 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 10 00:21:10.890023 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 10 00:21:10.890031 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 10 00:21:10.890038 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 10 00:21:10.890045 kernel: No NUMA configuration found
Jul 10 00:21:10.890053 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 10 00:21:10.890060 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 10 00:21:10.890068 kernel: Zone ranges:
Jul 10 00:21:10.890077 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:21:10.890084 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 10 00:21:10.890092 kernel: Normal empty
Jul 10 00:21:10.890099 kernel: Device empty
Jul 10 00:21:10.890106 kernel: Movable zone start for each node
Jul 10 00:21:10.890114 kernel: Early memory node ranges
Jul 10 00:21:10.890121 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 10 00:21:10.890128 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 10 00:21:10.890138 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 10 00:21:10.890147 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 10 00:21:10.890155 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 10 00:21:10.890162 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 10 00:21:10.890171 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 10 00:21:10.890180 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 10 00:21:10.890189 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 10 00:21:10.890201 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:21:10.890211 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 10 00:21:10.890230 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 10 00:21:10.890240 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:21:10.890250 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 10 00:21:10.890260 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 10 00:21:10.890272 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 10 00:21:10.890282 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 10 00:21:10.890291 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 10 00:21:10.890301 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 00:21:10.890311 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:21:10.890323 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:21:10.890333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 00:21:10.890343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:21:10.890353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:21:10.890362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:21:10.890372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:21:10.890382 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:21:10.890392 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:21:10.890400 kernel: TSC deadline timer available
Jul 10 00:21:10.890410 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:21:10.890417 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:21:10.890425 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:21:10.890433 kernel: CPU topo: Max. threads per core: 1
Jul 10 00:21:10.890440 kernel: CPU topo: Num. cores per package: 4
Jul 10 00:21:10.890448 kernel: CPU topo: Num. threads per package: 4
Jul 10 00:21:10.890456 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 10 00:21:10.890464 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 00:21:10.890472 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 10 00:21:10.890479 kernel: kvm-guest: setup PV sched yield
Jul 10 00:21:10.890489 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 10 00:21:10.890497 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:21:10.890505 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:21:10.890513 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 10 00:21:10.890521 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 10 00:21:10.890529 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 10 00:21:10.890595 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 10 00:21:10.890603 kernel: kvm-guest: PV spinlocks enabled
Jul 10 00:21:10.890610 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:21:10.890623 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:21:10.890634 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:21:10.890641 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:21:10.890649 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:21:10.890657 kernel: Fallback order for Node 0: 0
Jul 10 00:21:10.890665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 10 00:21:10.890673 kernel: Policy zone: DMA32
Jul 10 00:21:10.890682 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:21:10.890696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:21:10.890707 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:21:10.890718 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:21:10.890728 kernel: Dynamic Preempt: voluntary
Jul 10 00:21:10.890735 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:21:10.890744 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:21:10.890752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:21:10.890760 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:21:10.890768 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:21:10.890779 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:21:10.890786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:21:10.890805 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:21:10.890814 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:21:10.890822 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:21:10.890830 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:21:10.890838 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 10 00:21:10.890846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:21:10.890854 kernel: Console: colour dummy device 80x25
Jul 10 00:21:10.890864 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:21:10.890872 kernel: ACPI: Core revision 20240827
Jul 10 00:21:10.890880 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 10 00:21:10.890888 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:21:10.890896 kernel: x2apic enabled
Jul 10 00:21:10.890903 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:21:10.890911 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 10 00:21:10.890919 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 10 00:21:10.890927 kernel: kvm-guest: setup PV IPIs
Jul 10 00:21:10.890937 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:21:10.890945 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 10 00:21:10.890953 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 10 00:21:10.890961 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:21:10.890968 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 10 00:21:10.890976 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 10 00:21:10.890984 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:21:10.890992 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:21:10.890999 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:21:10.891010 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 10 00:21:10.891017 kernel: RETBleed: Mitigation: untrained return thunk
Jul 10 00:21:10.891025 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 10 00:21:10.891036 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 10 00:21:10.891043 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 10 00:21:10.891052 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 10 00:21:10.891060 kernel: x86/bugs: return thunk changed
Jul 10 00:21:10.891067 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 10 00:21:10.891078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:21:10.891086 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:21:10.891094 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:21:10.891102 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:21:10.891110 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 10 00:21:10.891117 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:21:10.891125 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:21:10.891133 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:21:10.891140 kernel: landlock: Up and running.
Jul 10 00:21:10.891150 kernel: SELinux: Initializing.
Jul 10 00:21:10.891158 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:21:10.891166 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:21:10.891174 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 10 00:21:10.891182 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 10 00:21:10.891189 kernel: ... version: 0
Jul 10 00:21:10.891197 kernel: ... bit width: 48
Jul 10 00:21:10.891205 kernel: ... generic registers: 6
Jul 10 00:21:10.891213 kernel: ... value mask: 0000ffffffffffff
Jul 10 00:21:10.891223 kernel: ... max period: 00007fffffffffff
Jul 10 00:21:10.891231 kernel: ... fixed-purpose events: 0
Jul 10 00:21:10.891238 kernel: ... event mask: 000000000000003f
Jul 10 00:21:10.891246 kernel: signal: max sigframe size: 1776
Jul 10 00:21:10.891254 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:21:10.891266 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:21:10.891277 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:21:10.891285 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:21:10.891293 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:21:10.891302 kernel: .... node #0, CPUs: #1 #2 #3
Jul 10 00:21:10.891310 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:21:10.891318 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 10 00:21:10.891326 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 137196K reserved, 0K cma-reserved)
Jul 10 00:21:10.891334 kernel: devtmpfs: initialized
Jul 10 00:21:10.891342 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:21:10.891350 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 10 00:21:10.891357 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 10 00:21:10.891365 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 10 00:21:10.891375 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 10 00:21:10.891383 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 10 00:21:10.891391 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 10 00:21:10.891399 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:21:10.891407 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:21:10.891414 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:21:10.891422 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:21:10.891430 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:21:10.891438 kernel: audit: type=2000 audit(1752106867.352:1): state=initialized audit_enabled=0 res=1
Jul 10 00:21:10.891448 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:21:10.891456 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:21:10.891464 kernel: cpuidle: using governor menu
Jul 10 00:21:10.891471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:21:10.891479 kernel: dca service started, version 1.12.1
Jul 10 00:21:10.891487 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 10 00:21:10.891495 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:21:10.891503 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:21:10.891510 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:21:10.891520 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:21:10.891528 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:21:10.891549 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:21:10.891557 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:21:10.891564 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:21:10.891572 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:21:10.891580 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:21:10.891587 kernel: ACPI: Interpreter enabled
Jul 10 00:21:10.891595 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 10 00:21:10.891605 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:21:10.891613 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:21:10.891621 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 00:21:10.891629 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 10 00:21:10.891637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:21:10.891867 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:21:10.891998 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 10 00:21:10.892244 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 10 00:21:10.892257 kernel: PCI host bridge to bus 0000:00
Jul 10 00:21:10.892392 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:21:10.892505 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:21:10.892639 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:21:10.892752 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 10 00:21:10.892875 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 10 00:21:10.892990 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 10 00:21:10.893100 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:21:10.893310 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 10 00:21:10.893856 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 10 00:21:10.894032 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 10 00:21:10.894157 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 10 00:21:10.894295 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 10 00:21:10.894459 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:21:10.894658 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 00:21:10.894814 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 10 00:21:10.894940 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 10 00:21:10.895086 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 10 00:21:10.895288 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 10 00:21:10.895482 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 10 00:21:10.895672 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 10 00:21:10.895850 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 10 00:21:10.896039 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 00:21:10.896229 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 10 00:21:10.896396 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 10 00:21:10.896587 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 10 00:21:10.896803 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 10 00:21:10.896997 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 10 00:21:10.897167 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 10 00:21:10.897364 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 10 00:21:10.897552 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 10 00:21:10.897720 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 10 00:21:10.897912 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 10 00:21:10.898079 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 10 00:21:10.898095 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:21:10.898106 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:21:10.898116 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:21:10.898126 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:21:10.898136 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 10 00:21:10.898147 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 10 00:21:10.898157 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 10 00:21:10.898174 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 10 00:21:10.898186 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 10 00:21:10.898196 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 10 00:21:10.898207 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 10 00:21:10.898217 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 10 00:21:10.898228 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 10 00:21:10.898238 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 10 00:21:10.898249 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 10 00:21:10.898259 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 10 00:21:10.898273 kernel: iommu: Default domain type: Translated
Jul 10 00:21:10.898284 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:21:10.898294 kernel: efivars: Registered efivars operations
Jul 10 00:21:10.898305 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:21:10.898315 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:21:10.898326 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 10 00:21:10.898336 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 10 00:21:10.898346 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 10 00:21:10.898357 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 10 00:21:10.898371 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 10 00:21:10.898381 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 10 00:21:10.898392 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 10 00:21:10.898402 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 10 00:21:10.898583 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 10 00:21:10.898745 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 10 00:21:10.898909 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:21:10.898925 kernel: vgaarb: loaded
Jul 10 00:21:10.898942 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 10 00:21:10.898953 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 10 00:21:10.898964 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:21:10.898974 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:21:10.898985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:21:10.898996 kernel: pnp: PnP ACPI init
Jul 10 00:21:10.899207 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 10 00:21:10.899245 kernel: pnp: PnP ACPI: found 6 devices
Jul 10 00:21:10.899262 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:21:10.899273 kernel: NET: Registered PF_INET protocol family
Jul 10 00:21:10.899285 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:21:10.899296 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:21:10.899308 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:21:10.899319 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:21:10.899331 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:21:10.899342 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:21:10.899354 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:21:10.899369 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:21:10.899380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:21:10.899392 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:21:10.899583 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 10 00:21:10.899745 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 10 00:21:10.899903 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:21:10.900044 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:21:10.900192 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:21:10.900339 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 10 00:21:10.900481 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 10 00:21:10.900664 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 10 00:21:10.900683 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:21:10.900695 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 10 00:21:10.900706 kernel: Initialise system trusted keyrings
Jul 10 00:21:10.900717 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:21:10.900728 kernel: Key type asymmetric registered
Jul 10 00:21:10.900745 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:21:10.900755 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:21:10.900766 kernel: io scheduler mq-deadline registered
Jul 10 00:21:10.900781 kernel: io scheduler kyber registered
Jul 10 00:21:10.900801 kernel: io scheduler bfq registered
Jul 10 00:21:10.900812 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:21:10.900828 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 10 00:21:10.900839 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 10 00:21:10.900850 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 10 00:21:10.900860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:21:10.900871 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:21:10.900882 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:21:10.900892 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:21:10.900903 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:21:10.901086 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 10 00:21:10.901110 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 00:21:10.901318 kernel: rtc_cmos 00:04: registered as rtc0
Jul 10 00:21:10.901500 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:21:10 UTC (1752106870)
Jul 10 00:21:10.901679 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 10 00:21:10.901697 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 10 00:21:10.901708 kernel: efifb: probing for efifb
Jul 10 00:21:10.901720 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 10 00:21:10.901731 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 10 00:21:10.901748 kernel: efifb: scrolling: redraw
Jul 10 00:21:10.901759 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:21:10.901770 kernel: Console: switching to colour frame buffer device 160x50
Jul 10 00:21:10.901780 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:21:10.901803 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:21:10.901815 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 10 00:21:10.901826 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:21:10.901836 kernel: Segment Routing with IPv6
Jul 10 00:21:10.901845 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:21:10.901857 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:21:10.901865 kernel: Key type dns_resolver registered
Jul 10 00:21:10.901875 kernel: IPI shorthand broadcast: enabled
Jul 10 00:21:10.901888 kernel: sched_clock: Marking stable (3915002584, 157792762)->(4088661249, -15865903)
Jul 10 00:21:10.901903 kernel: registered taskstats version 1
Jul 10 00:21:10.901913 kernel: Loading compiled-in X.509 certificates
Jul 10 00:21:10.901924 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:21:10.901935 kernel: Demotion targets for Node 0: null
Jul 10 00:21:10.901946 kernel: Key type .fscrypt registered Jul 10 
00:21:10.901960 kernel: Key type fscrypt-provisioning registered Jul 10 00:21:10.901968 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:21:10.901976 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:21:10.901985 kernel: ima: No architecture policies found Jul 10 00:21:10.901993 kernel: clk: Disabling unused clocks Jul 10 00:21:10.902001 kernel: Warning: unable to open an initial console. Jul 10 00:21:10.902009 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:21:10.902017 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:21:10.902025 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:21:10.902036 kernel: Run /init as init process Jul 10 00:21:10.902044 kernel: with arguments: Jul 10 00:21:10.902052 kernel: /init Jul 10 00:21:10.902060 kernel: with environment: Jul 10 00:21:10.902068 kernel: HOME=/ Jul 10 00:21:10.902076 kernel: TERM=linux Jul 10 00:21:10.902084 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:21:10.902093 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:21:10.902107 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:21:10.902116 systemd[1]: Detected virtualization kvm. Jul 10 00:21:10.902124 systemd[1]: Detected architecture x86-64. Jul 10 00:21:10.902133 systemd[1]: Running in initrd. Jul 10 00:21:10.902141 systemd[1]: No hostname configured, using default hostname. Jul 10 00:21:10.902150 systemd[1]: Hostname set to . Jul 10 00:21:10.902159 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:21:10.902167 systemd[1]: Queued start job for default target initrd.target. 
Jul 10 00:21:10.902179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:21:10.902187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:21:10.902197 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:21:10.902208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:21:10.902217 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:21:10.902226 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:21:10.902236 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:21:10.902247 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:21:10.902256 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:21:10.902264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:21:10.902273 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:21:10.902281 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:21:10.902290 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:21:10.902298 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:21:10.902307 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:21:10.902318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:21:10.902327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:21:10.902336 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jul 10 00:21:10.902344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:21:10.902353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:21:10.902361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:21:10.902370 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:21:10.902379 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:21:10.902387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:21:10.902398 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:21:10.902407 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:21:10.902416 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:21:10.902424 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:21:10.902433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:21:10.902442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:10.902452 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:21:10.902476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:21:10.902487 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:21:10.902496 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:21:10.902534 systemd-journald[220]: Collecting audit messages is disabled. Jul 10 00:21:10.902580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:10.902589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 10 00:21:10.902598 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:21:10.902607 systemd-journald[220]: Journal started Jul 10 00:21:10.902629 systemd-journald[220]: Runtime Journal (/run/log/journal/ed21775df2d6425dbe06b119ede033c1) is 6M, max 48.5M, 42.4M free. Jul 10 00:21:10.894028 systemd-modules-load[221]: Inserted module 'overlay' Jul 10 00:21:10.906102 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:21:10.912707 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:21:10.916741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:21:10.925247 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:21:10.927789 kernel: Bridge firewalling registered Jul 10 00:21:10.926978 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 10 00:21:10.928201 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:21:10.930039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:21:10.931440 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:21:10.932639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:21:10.938574 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:10.945051 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:21:10.949117 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:21:10.949385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:21:10.953646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 10 00:21:10.988464 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:21:11.007612 systemd-resolved[262]: Positive Trust Anchors: Jul 10 00:21:11.007624 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:21:11.007655 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:21:11.010195 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 10 00:21:11.011583 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:21:11.041313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:11.164579 kernel: SCSI subsystem initialized Jul 10 00:21:11.173566 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:21:11.184567 kernel: iscsi: registered transport (tcp) Jul 10 00:21:11.206561 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:21:11.206583 kernel: QLogic iSCSI HBA Driver Jul 10 00:21:11.231354 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 10 00:21:11.259175 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:21:11.263056 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:21:11.323183 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:21:11.351631 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:21:11.449584 kernel: raid6: avx2x4 gen() 30070 MB/s Jul 10 00:21:11.466567 kernel: raid6: avx2x2 gen() 30339 MB/s Jul 10 00:21:11.483634 kernel: raid6: avx2x1 gen() 25457 MB/s Jul 10 00:21:11.483693 kernel: raid6: using algorithm avx2x2 gen() 30339 MB/s Jul 10 00:21:11.501627 kernel: raid6: .... xor() 19530 MB/s, rmw enabled Jul 10 00:21:11.501671 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:21:11.523605 kernel: xor: automatically using best checksumming function avx Jul 10 00:21:11.705599 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:21:11.716090 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:21:11.721172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:21:11.757059 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 10 00:21:11.762856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:11.766839 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:21:11.799274 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Jul 10 00:21:11.830868 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:21:11.834712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:21:11.918678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:21:11.922675 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 10 00:21:11.969564 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:21:11.975715 kernel: AES CTR mode by8 optimization enabled Jul 10 00:21:11.978557 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 00:21:11.985842 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 00:21:11.990069 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:21:12.000832 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:21:12.000870 kernel: GPT:9289727 != 19775487 Jul 10 00:21:12.000881 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:21:12.000892 kernel: GPT:9289727 != 19775487 Jul 10 00:21:12.000902 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:21:12.001566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:21:12.007021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:12.008238 kernel: libata version 3.00 loaded. Jul 10 00:21:12.008409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:12.009795 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:12.022647 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:21:12.022876 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:21:12.022888 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 00:21:12.023033 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 00:21:12.023250 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:21:12.023391 kernel: scsi host0: ahci Jul 10 00:21:12.023824 kernel: scsi host1: ahci Jul 10 00:21:12.023977 kernel: scsi host2: ahci Jul 10 00:21:12.012817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:12.020222 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 10 00:21:12.028173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:12.037889 kernel: scsi host3: ahci Jul 10 00:21:12.038091 kernel: scsi host4: ahci Jul 10 00:21:12.038248 kernel: scsi host5: ahci Jul 10 00:21:12.038405 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 10 00:21:12.038418 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 10 00:21:12.038429 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 10 00:21:12.038444 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 10 00:21:12.038455 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 10 00:21:12.038466 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 10 00:21:12.028297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:12.038335 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:12.041861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:12.071824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:12.083221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:21:12.092015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 00:21:12.098813 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 00:21:12.098890 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:21:12.110271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 10 00:21:12.111205 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:21:12.141701 disk-uuid[634]: Primary Header is updated. Jul 10 00:21:12.141701 disk-uuid[634]: Secondary Entries is updated. Jul 10 00:21:12.141701 disk-uuid[634]: Secondary Header is updated. Jul 10 00:21:12.144979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:21:12.349774 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:21:12.349855 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:21:12.349867 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:21:12.351588 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:21:12.351673 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:21:12.352570 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:21:12.353579 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:21:12.353608 kernel: ata3.00: applying bridge limits Jul 10 00:21:12.354580 kernel: ata3.00: configured for UDMA/100 Jul 10 00:21:12.355572 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:21:12.411615 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:21:12.411972 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:21:12.438872 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:21:12.859996 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:21:12.860645 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:21:12.861023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:21:12.861348 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:21:12.862693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:21:12.902795 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 10 00:21:13.154561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:21:13.154624 disk-uuid[635]: The operation has completed successfully. Jul 10 00:21:13.190499 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:21:13.190658 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:21:13.222814 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:21:13.240090 sh[664]: Success Jul 10 00:21:13.259303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:21:13.259374 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:21:13.259387 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:21:13.269563 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:21:13.305411 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:21:13.307637 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:21:13.328389 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:21:13.337574 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:21:13.337636 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (676) Jul 10 00:21:13.338977 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:21:13.339003 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:13.340624 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:21:13.345863 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:21:13.348305 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jul 10 00:21:13.350690 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:21:13.351618 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:21:13.354895 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:21:13.396578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Jul 10 00:21:13.399257 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:13.399305 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:13.399317 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:21:13.407568 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:13.407807 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:21:13.411671 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 10 00:21:13.499030 ignition[752]: Ignition 2.21.0 Jul 10 00:21:13.499047 ignition[752]: Stage: fetch-offline Jul 10 00:21:13.499089 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:13.499099 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:21:13.499188 ignition[752]: parsed url from cmdline: "" Jul 10 00:21:13.499192 ignition[752]: no config URL provided Jul 10 00:21:13.499197 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:21:13.499207 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:21:13.499234 ignition[752]: op(1): [started] loading QEMU firmware config module Jul 10 00:21:13.499242 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:21:13.507460 ignition[752]: op(1): [finished] loading QEMU firmware config module Jul 10 00:21:13.518299 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:21:13.523825 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:21:13.550479 ignition[752]: parsing config with SHA512: efb13d3212a34188048a2cdd966ee0bf41cd6f983d5d9908f9d226fa6caaac4d5c303f09ac929c5aeb09af85883d9dec7fa0f21d698bb6b4efc465fc19657b91 Jul 10 00:21:13.557144 unknown[752]: fetched base config from "system" Jul 10 00:21:13.557159 unknown[752]: fetched user config from "qemu" Jul 10 00:21:13.559093 ignition[752]: fetch-offline: fetch-offline passed Jul 10 00:21:13.559154 ignition[752]: Ignition finished successfully Jul 10 00:21:13.563268 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:21:13.578311 systemd-networkd[854]: lo: Link UP Jul 10 00:21:13.578323 systemd-networkd[854]: lo: Gained carrier Jul 10 00:21:13.579948 systemd-networkd[854]: Enumeration completed Jul 10 00:21:13.580060 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 10 00:21:13.580350 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:13.580355 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:21:13.582120 systemd-networkd[854]: eth0: Link UP Jul 10 00:21:13.582124 systemd-networkd[854]: eth0: Gained carrier Jul 10 00:21:13.582133 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:13.582639 systemd[1]: Reached target network.target - Network. Jul 10 00:21:13.584464 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:21:13.588127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:21:13.597596 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:21:13.624187 ignition[858]: Ignition 2.21.0 Jul 10 00:21:13.624205 ignition[858]: Stage: kargs Jul 10 00:21:13.624447 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:13.624460 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:21:13.626709 ignition[858]: kargs: kargs passed Jul 10 00:21:13.626777 ignition[858]: Ignition finished successfully Jul 10 00:21:13.633776 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:21:13.637240 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 10 00:21:15.191633 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1735745417 wd_nsec: 1735745192 Jul 10 00:21:15.200432 ignition[867]: Ignition 2.21.0 Jul 10 00:21:15.200451 ignition[867]: Stage: disks Jul 10 00:21:15.200635 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:15.200646 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:21:15.203580 ignition[867]: disks: disks passed Jul 10 00:21:15.203762 ignition[867]: Ignition finished successfully Jul 10 00:21:15.207130 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:21:15.208722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:21:15.210732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:21:15.212039 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:21:15.214139 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:21:15.216311 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:21:15.219624 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:21:15.262329 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:21:15.271262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:21:15.276138 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:21:15.478607 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:21:15.479564 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:21:15.480316 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:21:15.483738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:21:15.486435 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 10 00:21:15.487646 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:21:15.487712 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:21:15.487908 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:21:15.496399 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:21:15.498814 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:21:15.503445 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Jul 10 00:21:15.503498 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:21:15.503516 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:21:15.503531 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:21:15.533697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:21:15.573682 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:21:15.579591 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:21:15.584996 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:21:15.590426 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:21:15.615000 systemd-networkd[854]: eth0: Gained IPv6LL Jul 10 00:21:15.698029 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:21:15.700978 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:21:15.702090 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:21:15.723816 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 10 00:21:15.725160 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:21:15.743103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:21:15.765720 ignition[999]: INFO : Ignition 2.21.0
Jul 10 00:21:15.765720 ignition[999]: INFO : Stage: mount
Jul 10 00:21:15.767599 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:21:15.767599 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:21:15.767599 ignition[999]: INFO : mount: mount passed
Jul 10 00:21:15.767599 ignition[999]: INFO : Ignition finished successfully
Jul 10 00:21:15.774047 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:21:15.776291 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:21:16.481483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:21:16.588326 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Jul 10 00:21:16.588407 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6
Jul 10 00:21:16.588423 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 10 00:21:16.589275 kernel: BTRFS info (device vda6): using free-space-tree
Jul 10 00:21:16.593734 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:21:16.704766 ignition[1028]: INFO : Ignition 2.21.0
Jul 10 00:21:16.704766 ignition[1028]: INFO : Stage: files
Jul 10 00:21:16.707068 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:21:16.707068 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:21:16.709515 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:21:16.710794 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:21:16.710794 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:21:16.713869 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:21:16.713869 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:21:16.716807 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:21:16.716807 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:21:16.716807 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 10 00:21:16.713910 unknown[1028]: wrote ssh authorized keys file for user: core
Jul 10 00:21:16.757492 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:21:16.873612 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:21:16.875718 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:21:16.875718 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:21:17.232829 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:21:17.608181 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:21:17.608181 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:21:17.612233 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:21:17.703067 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:21:17.705802 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:21:17.705802 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:21:17.712591 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:21:17.712591 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:21:17.719189 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 10 00:21:18.013195 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:21:18.747776 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:21:18.747776 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:21:18.752020 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:21:18.758916 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:21:18.758916 ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:21:18.758916 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 10 00:21:18.764349 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:21:18.764349 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:21:18.764349 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 00:21:18.764349 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:21:18.812095 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:21:18.819699 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:21:18.821304 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:21:18.821304 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:21:18.821304 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:21:18.821304 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:21:18.821304 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:21:18.821304 ignition[1028]: INFO : files: files passed
Jul 10 00:21:18.821304 ignition[1028]: INFO : Ignition finished successfully
Jul 10 00:21:18.833346 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:21:18.835868 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:21:18.839041 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:21:18.860252 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:21:18.860385 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:21:18.864372 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:21:18.868898 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:21:18.868898 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:21:18.872165 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:21:18.873644 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:21:18.877177 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:21:18.878693 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:21:18.950179 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:21:18.950314 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:21:18.952679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:21:18.954787 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:21:18.955842 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:21:18.959173 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:21:18.995385 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:21:18.999692 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:21:19.037864 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:21:19.040225 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:21:19.043310 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:21:19.043516 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:21:19.043749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:21:19.048236 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:21:19.048435 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:21:19.052059 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:21:19.052250 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:21:19.056395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:21:19.056612 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:21:19.060929 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:21:19.061108 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:21:19.065269 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:21:19.065455 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:21:19.067323 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:21:19.069290 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:21:19.069495 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:21:19.073628 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:21:19.075740 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:21:19.076985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:21:19.077094 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:21:19.079422 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:21:19.079631 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:21:19.083695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:21:19.083850 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:21:19.085993 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:21:19.086916 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:21:19.092622 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:21:19.092873 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:21:19.095389 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:21:19.095884 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:21:19.096010 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:21:19.098724 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:21:19.098831 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:21:19.099231 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:21:19.099371 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:21:19.103472 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:21:19.103640 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:21:19.105548 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:21:19.106115 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:21:19.106262 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:21:19.109122 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:21:19.116527 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:21:19.116840 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:21:19.119022 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:21:19.119139 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:21:19.126634 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:21:19.126776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:21:19.150064 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:21:19.163259 ignition[1084]: INFO : Ignition 2.21.0
Jul 10 00:21:19.163259 ignition[1084]: INFO : Stage: umount
Jul 10 00:21:19.165229 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:21:19.165229 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:21:19.169926 ignition[1084]: INFO : umount: umount passed
Jul 10 00:21:19.169926 ignition[1084]: INFO : Ignition finished successfully
Jul 10 00:21:19.172766 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:21:19.172976 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:21:19.174089 systemd[1]: Stopped target network.target - Network.
Jul 10 00:21:19.175751 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:21:19.175827 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:21:19.179306 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:21:19.179369 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:21:19.180294 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:21:19.180366 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:21:19.182386 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:21:19.182459 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:21:19.183026 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:21:19.187089 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:21:19.195314 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:21:19.195567 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:21:19.200043 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:21:19.200366 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:21:19.200503 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:21:19.205219 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:21:19.206591 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:21:19.207864 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:21:19.207916 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:21:19.211181 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:21:19.214044 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:21:19.214160 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:21:19.215231 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:21:19.215291 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:21:19.219324 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:21:19.219404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:21:19.220401 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:21:19.220474 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:21:19.224624 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:21:19.226890 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:21:19.226989 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:21:19.239080 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:21:19.239307 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:21:19.241705 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:21:19.241755 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:21:19.243733 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:21:19.243773 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:21:19.245827 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:21:19.245924 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:21:19.247442 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:21:19.247520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:21:19.251253 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:21:19.251323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:21:19.253219 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:21:19.257250 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:21:19.257336 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:21:19.261460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:21:19.261525 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:21:19.266367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:21:19.266453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:21:19.271111 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:21:19.271184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:21:19.271241 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:21:19.271829 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:21:19.271967 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:21:19.275966 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:21:19.276150 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:21:19.524515 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:21:19.524704 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:21:19.527930 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:21:19.530127 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:21:19.530203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:21:19.534419 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:21:19.565577 systemd[1]: Switching root.
Jul 10 00:21:19.612307 systemd-journald[220]: Journal stopped
Jul 10 00:21:21.668567 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:21:21.668655 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:21:21.668678 kernel: SELinux: policy capability open_perms=1
Jul 10 00:21:21.668693 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:21:21.668709 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:21:21.668724 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:21:21.668739 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:21:21.668760 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:21:21.668782 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:21:21.668797 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:21:21.668812 kernel: audit: type=1403 audit(1752106880.818:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:21:21.668835 systemd[1]: Successfully loaded SELinux policy in 55.055ms.
Jul 10 00:21:21.668871 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.128ms.
Jul 10 00:21:21.668889 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:21:21.668906 systemd[1]: Detected virtualization kvm.
Jul 10 00:21:21.668922 systemd[1]: Detected architecture x86-64.
Jul 10 00:21:21.668937 systemd[1]: Detected first boot.
Jul 10 00:21:21.668957 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:21:21.668973 zram_generator::config[1130]: No configuration found.
Jul 10 00:21:21.668990 kernel: Guest personality initialized and is inactive
Jul 10 00:21:21.669005 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 00:21:21.669021 kernel: Initialized host personality
Jul 10 00:21:21.669036 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:21:21.669057 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:21:21.669074 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:21:21.669096 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:21:21.669115 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:21:21.669131 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:21:21.669148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:21:21.669164 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:21:21.669187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:21:21.669203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:21:21.669219 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:21:21.669235 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:21:21.669258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:21:21.669273 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:21:21.669290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:21:21.669306 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:21:21.669322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:21:21.669338 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:21:21.669355 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:21:21.669378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:21:21.669394 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:21:21.669410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:21:21.669427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:21:21.669443 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:21:21.669462 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:21:21.669480 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:21:21.669506 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:21:21.669522 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:21:21.669580 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:21:21.669605 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:21:21.669621 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:21:21.669637 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:21:21.669653 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:21:21.669669 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:21:21.669685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:21:21.669701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:21:21.669717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:21:21.669733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:21:21.669751 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:21:21.669767 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:21:21.669783 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:21:21.669799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:21.669815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:21:21.669831 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:21:21.669851 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:21:21.669867 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:21:21.669889 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:21:21.669916 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:21:21.669933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:21:21.670426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:21:21.670446 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:21:21.670462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:21:21.670477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:21:21.670503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:21:21.670519 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:21:21.670557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:21:21.670570 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:21:21.670582 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:21:21.670596 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:21:21.670610 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:21:21.670622 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:21:21.670637 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:21:21.670652 kernel: fuse: init (API version 7.41)
Jul 10 00:21:21.670667 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:21:21.670680 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:21:21.670692 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:21:21.670704 kernel: loop: module loaded
Jul 10 00:21:21.670716 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:21:21.670738 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:21:21.670750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:21:21.670762 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:21:21.670774 systemd[1]: Stopped verity-setup.service.
Jul 10 00:21:21.670791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:21.670804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:21:21.670818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:21:21.670830 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:21:21.670842 kernel: ACPI: bus type drm_connector registered
Jul 10 00:21:21.670853 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:21:21.670893 systemd-journald[1208]: Collecting audit messages is disabled.
Jul 10 00:21:21.670916 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:21:21.670928 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:21:21.670944 systemd-journald[1208]: Journal started
Jul 10 00:21:21.670967 systemd-journald[1208]: Runtime Journal (/run/log/journal/ed21775df2d6425dbe06b119ede033c1) is 6M, max 48.5M, 42.4M free.
Jul 10 00:21:21.405414 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:21:21.426852 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 00:21:21.427369 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:21:21.674778 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:21:21.676249 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:21:21.677848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:21:21.679484 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:21:21.679950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:21:21.681791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:21:21.682024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:21:21.683673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:21:21.683912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:21:21.685414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:21:21.685680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:21:21.687359 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:21:21.687749 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:21:21.689331 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:21:21.689576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:21:21.691145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:21:21.692813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:21:21.694746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:21:21.696512 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:21:21.712891 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:21:21.715909 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:21:21.719644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:21:21.721049 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:21:21.721087 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:21:21.723315 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:21:21.727464 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:21:21.729971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:21:21.731750 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:21:21.736062 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:21:21.737404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:21:21.741758 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:21:21.743099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:21:21.744352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:21:21.746801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:21:21.760077 systemd-journald[1208]: Time spent on flushing to /var/log/journal/ed21775df2d6425dbe06b119ede033c1 is 14.248ms for 1069 entries.
Jul 10 00:21:21.760077 systemd-journald[1208]: System Journal (/var/log/journal/ed21775df2d6425dbe06b119ede033c1) is 8M, max 195.6M, 187.6M free.
Jul 10 00:21:21.793639 systemd-journald[1208]: Received client request to flush runtime journal.
Jul 10 00:21:21.793675 kernel: loop0: detected capacity change from 0 to 113872
Jul 10 00:21:21.751415 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:21:21.754325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:21:21.756908 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:21:21.763016 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:21:21.765668 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:21:21.770980 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:21:21.795888 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:21:21.804687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:21:21.804514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:21:21.806423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:21:21.818577 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:21:21.827592 kernel: loop1: detected capacity change from 0 to 229808
Jul 10 00:21:21.838400 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:21:21.841549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:21:21.859557 kernel: loop2: detected capacity change from 0 to 146240
Jul 10 00:21:21.876757 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 10 00:21:21.876776 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Jul 10 00:21:21.886660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:21:21.895576 kernel: loop3: detected capacity change from 0 to 113872
Jul 10 00:21:21.907566 kernel: loop4: detected capacity change from 0 to 229808
Jul 10 00:21:21.962578 kernel: loop5: detected capacity change from 0 to 146240
Jul 10 00:21:21.975417 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 10 00:21:21.976249 (sd-merge)[1271]: Merged extensions into '/usr'.
Jul 10 00:21:21.983066 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:21:21.983084 systemd[1]: Reloading...
Jul 10 00:21:22.073570 zram_generator::config[1305]: No configuration found.
Jul 10 00:21:22.205212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:21:22.257378 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 00:21:22.296626 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:21:22.297036 systemd[1]: Reloading finished in 313 ms.
Jul 10 00:21:22.325098 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 00:21:22.329935 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 00:21:22.348699 systemd[1]: Starting ensure-sysext.service...
Jul 10 00:21:22.350796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:21:22.376410 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Jul 10 00:21:22.376602 systemd[1]: Reloading...
Jul 10 00:21:22.394991 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 00:21:22.395030 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 00:21:22.395320 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 00:21:22.395608 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 00:21:22.396558 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 00:21:22.396843 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 10 00:21:22.396913 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 10 00:21:22.401834 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:21:22.402355 systemd-tmpfiles[1336]: Skipping /boot
Jul 10 00:21:22.421026 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 00:21:22.421177 systemd-tmpfiles[1336]: Skipping /boot
Jul 10 00:21:22.441191 zram_generator::config[1363]: No configuration found.
Jul 10 00:21:22.650656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:21:22.739479 systemd[1]: Reloading finished in 362 ms.
Jul 10 00:21:22.763314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:21:22.786793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:21:22.798985 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:21:22.802120 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 00:21:22.811640 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 00:21:22.816689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:21:22.821782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:21:22.827211 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 00:21:22.831146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:22.831325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:21:22.838842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:21:22.842313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:21:22.846800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:21:22.847968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:21:22.848076 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:21:22.851604 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:21:22.852686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:22.859224 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 00:21:22.863649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:21:22.863906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:21:22.865772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:21:22.865991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:21:22.867649 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:21:22.867877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:21:22.878512 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 00:21:22.880506 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
Jul 10 00:21:22.885568 systemd[1]: Finished ensure-sysext.service.
Jul 10 00:21:22.887900 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:22.888151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:21:22.889644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:21:22.892241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:21:22.895676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:21:22.898776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:21:22.899925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:21:22.900167 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:21:22.909900 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 00:21:22.913935 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 00:21:22.918985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:21:22.920206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:21:22.920522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:21:22.922077 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:21:22.922358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:21:22.923931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:21:22.924210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:21:22.925781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:21:22.926061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:21:22.931245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:21:22.931333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:21:23.018089 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:21:23.041367 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:21:23.047386 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:21:23.049015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 00:21:23.055722 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 00:21:23.122883 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 00:21:23.154140 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 00:21:23.168023 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 00:21:23.200862 augenrules[1488]: No rules
Jul 10 00:21:23.205915 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:21:23.206329 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:21:23.222568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 10 00:21:23.227564 kernel: ACPI: button: Power Button [PWRF]
Jul 10 00:21:23.243799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:21:23.255970 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 00:21:23.280949 systemd-resolved[1405]: Positive Trust Anchors:
Jul 10 00:21:23.281416 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:21:23.281470 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:21:23.285180 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 00:21:23.285569 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 00:21:23.287809 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Jul 10 00:21:23.290366 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:21:23.290626 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:21:23.290893 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:21:23.294858 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 10 00:21:23.295216 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 10 00:21:23.295387 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 10 00:21:23.296089 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 00:21:23.297421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 00:21:23.298792 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 10 00:21:23.300178 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 00:21:23.301356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 00:21:23.302647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 00:21:23.303913 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 00:21:23.303947 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:21:23.304890 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:21:23.306906 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 00:21:23.310034 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 00:21:23.315789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 00:21:23.317314 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 00:21:23.318675 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 00:21:23.318776 systemd-networkd[1457]: lo: Link UP
Jul 10 00:21:23.318793 systemd-networkd[1457]: lo: Gained carrier
Jul 10 00:21:23.320492 systemd-networkd[1457]: Enumeration completed
Jul 10 00:21:23.322410 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:21:23.322423 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:21:23.322977 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 00:21:23.323170 systemd-networkd[1457]: eth0: Link UP
Jul 10 00:21:23.323332 systemd-networkd[1457]: eth0: Gained carrier
Jul 10 00:21:23.323353 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:21:23.324576 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 00:21:23.327137 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:21:23.330005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 00:21:23.331464 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 00:21:23.336964 systemd[1]: Reached target network.target - Network.
Jul 10 00:21:23.338241 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:21:23.339588 systemd-networkd[1457]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:21:23.340228 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection.
Jul 10 00:21:23.826371 systemd-resolved[1405]: Clock change detected. Flushing caches.
Jul 10 00:21:23.826423 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 10 00:21:23.826476 systemd-timesyncd[1439]: Initial clock synchronization to Thu 2025-07-10 00:21:23.826321 UTC.
Jul 10 00:21:23.826573 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:21:23.828687 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:21:23.828721 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 00:21:23.831144 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 00:21:23.836003 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 00:21:23.852932 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 00:21:23.856340 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 00:21:23.863872 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 00:21:23.864953 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 00:21:23.867229 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 10 00:21:23.871236 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 00:21:23.876598 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 00:21:23.882726 jq[1525]: false
Jul 10 00:21:23.886200 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 00:21:23.896136 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Jul 10 00:21:23.896144 oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Jul 10 00:21:23.933489 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting
Jul 10 00:21:23.933489 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:21:23.933476 oslogin_cache_refresh[1527]: Failure getting users, quitting
Jul 10 00:21:23.933645 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache
Jul 10 00:21:23.933501 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 10 00:21:23.933580 oslogin_cache_refresh[1527]: Refreshing group entry cache
Jul 10 00:21:23.939130 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting
Jul 10 00:21:23.939130 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:21:23.939098 oslogin_cache_refresh[1527]: Failure getting groups, quitting
Jul 10 00:21:23.939111 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 10 00:21:23.944228 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 00:21:23.949111 extend-filesystems[1526]: Found /dev/vda6
Jul 10 00:21:23.954235 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 00:21:23.956644 extend-filesystems[1526]: Found /dev/vda9
Jul 10 00:21:23.964028 extend-filesystems[1526]: Checking size of /dev/vda9
Jul 10 00:21:23.970169 kernel: kvm_amd: TSC scaling supported
Jul 10 00:21:23.970222 kernel: kvm_amd: Nested Virtualization enabled
Jul 10 00:21:23.970253 kernel: kvm_amd: Nested Paging enabled
Jul 10 00:21:23.970277 kernel: kvm_amd: LBR virtualization supported
Jul 10 00:21:23.970366 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 10 00:21:23.970416 kernel: kvm_amd: Virtual GIF supported
Jul 10 00:21:23.974267 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 00:21:23.980090 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 00:21:23.982371 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 00:21:23.982561 extend-filesystems[1526]: Resized partition /dev/vda9
Jul 10 00:21:23.984502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 00:21:23.986091 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 00:21:23.988013 extend-filesystems[1552]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 00:21:23.988144 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 00:21:23.993691 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 00:21:23.996511 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 00:21:24.000024 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 10 00:21:24.002489 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 00:21:24.003203 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 10 00:21:24.003551 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 10 00:21:24.005469 jq[1554]: true
Jul 10 00:21:24.005604 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 00:21:24.005956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 00:21:24.010092 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 00:21:24.010432 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 00:21:24.029113 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 00:21:24.055860 jq[1559]: true
Jul 10 00:21:24.069994 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 10 00:21:24.092937 update_engine[1553]: I20250710 00:21:24.082827 1553 main.cc:92] Flatcar Update Engine starting
Jul 10 00:21:24.073559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:21:24.094114 extend-filesystems[1552]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 10 00:21:24.094114 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 10 00:21:24.094114 extend-filesystems[1552]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 10 00:21:24.100310 extend-filesystems[1526]: Resized filesystem in /dev/vda9
Jul 10 00:21:24.095905 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 00:21:24.096852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 00:21:24.103180 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 00:21:24.110340 tar[1557]: linux-amd64/LICENSE
Jul 10 00:21:24.111673 tar[1557]: linux-amd64/helm
Jul 10 00:21:24.113324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:21:24.113667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:21:24.117058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:21:24.142464 dbus-daemon[1522]: [system] SELinux support is enabled
Jul 10 00:21:24.142665 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 00:21:24.145934 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 00:21:24.145956 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 00:21:24.147221 update_engine[1553]: I20250710 00:21:24.147163 1553 update_check_scheduler.cc:74] Next update check in 9m4s
Jul 10 00:21:24.147402 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 00:21:24.147427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 00:21:24.148768 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 00:21:24.151990 kernel: EDAC MC: Ver: 3.0.0
Jul 10 00:21:24.156248 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 00:21:24.158254 bash[1590]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 00:21:24.160484 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 00:21:24.165812 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 10 00:21:24.209925 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 10 00:21:24.209960 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 10 00:21:24.213422 systemd-logind[1542]: New seat seat0.
Jul 10 00:21:24.215376 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 00:21:24.371079 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 00:21:24.374787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:21:24.397395 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:21:24.398372 containerd[1560]: time="2025-07-10T00:21:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:21:24.398830 containerd[1560]: time="2025-07-10T00:21:24.398790404Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:21:24.412998 containerd[1560]: time="2025-07-10T00:21:24.412892986Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.545µs" Jul 10 00:21:24.412998 containerd[1560]: time="2025-07-10T00:21:24.412955934Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:21:24.412998 containerd[1560]: time="2025-07-10T00:21:24.413004195Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:21:24.413295 containerd[1560]: time="2025-07-10T00:21:24.413260676Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:21:24.413295 containerd[1560]: time="2025-07-10T00:21:24.413285252Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:21:24.413373 containerd[1560]: time="2025-07-10T00:21:24.413319346Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:24.413566 containerd[1560]: time="2025-07-10T00:21:24.413403544Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:24.413566 containerd[1560]: time="2025-07-10T00:21:24.413426096Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414174 containerd[1560]: time="2025-07-10T00:21:24.414137129Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414174 containerd[1560]: time="2025-07-10T00:21:24.414160182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414255 containerd[1560]: time="2025-07-10T00:21:24.414176192Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414255 containerd[1560]: time="2025-07-10T00:21:24.414188586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414356 containerd[1560]: time="2025-07-10T00:21:24.414335271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414630 containerd[1560]: time="2025-07-10T00:21:24.414600107Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414658 containerd[1560]: time="2025-07-10T00:21:24.414638359Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:24.414658 containerd[1560]: time="2025-07-10T00:21:24.414649109Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:21:24.414762 containerd[1560]: time="2025-07-10T00:21:24.414727606Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:21:24.418223 containerd[1560]: time="2025-07-10T00:21:24.418192924Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:21:24.418312 containerd[1560]: time="2025-07-10T00:21:24.418292871Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:21:24.473515 containerd[1560]: time="2025-07-10T00:21:24.473429322Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:21:24.473515 containerd[1560]: time="2025-07-10T00:21:24.473512798Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:21:24.473515 containerd[1560]: time="2025-07-10T00:21:24.473530201Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473542664Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473583120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473605693Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473623706Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473639606Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473651629Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:21:24.473691 containerd[1560]: time="2025-07-10T00:21:24.473661808Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:21:24.473891 containerd[1560]: time="2025-07-10T00:21:24.473715879Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:21:24.473891 containerd[1560]: time="2025-07-10T00:21:24.473742579Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:21:24.474387 containerd[1560]: time="2025-07-10T00:21:24.474338076Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:21:24.474387 containerd[1560]: time="2025-07-10T00:21:24.474379003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:21:24.474452 containerd[1560]: time="2025-07-10T00:21:24.474420250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:21:24.474452 containerd[1560]: time="2025-07-10T00:21:24.474434376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:21:24.474452 containerd[1560]: time="2025-07-10T00:21:24.474445097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:21:24.474452 containerd[1560]: time="2025-07-10T00:21:24.474455115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:21:24.474567 containerd[1560]: time="2025-07-10T00:21:24.474466246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:21:24.474567 containerd[1560]: time="2025-07-10T00:21:24.474478980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 
00:21:24.474567 containerd[1560]: time="2025-07-10T00:21:24.474490091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:21:24.474567 containerd[1560]: time="2025-07-10T00:21:24.474517172Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:21:24.474567 containerd[1560]: time="2025-07-10T00:21:24.474529565Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:21:24.474697 containerd[1560]: time="2025-07-10T00:21:24.474620155Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:21:24.474697 containerd[1560]: time="2025-07-10T00:21:24.474636405Z" level=info msg="Start snapshots syncer" Jul 10 00:21:24.474697 containerd[1560]: time="2025-07-10T00:21:24.474674146Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:21:24.475113 containerd[1560]: time="2025-07-10T00:21:24.475054179Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:21:24.475246 containerd[1560]: time="2025-07-10T00:21:24.475129710Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:21:24.477667 containerd[1560]: time="2025-07-10T00:21:24.477620180Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:21:24.478005 containerd[1560]: time="2025-07-10T00:21:24.477959125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:21:24.478127 containerd[1560]: time="2025-07-10T00:21:24.478104979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:21:24.478207 containerd[1560]: time="2025-07-10T00:21:24.478188596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:21:24.478278 containerd[1560]: time="2025-07-10T00:21:24.478260911Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:21:24.478370 containerd[1560]: time="2025-07-10T00:21:24.478350259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:21:24.478437 containerd[1560]: time="2025-07-10T00:21:24.478421582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:21:24.478504 containerd[1560]: time="2025-07-10T00:21:24.478487566Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:21:24.478597 containerd[1560]: time="2025-07-10T00:21:24.478581402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:21:24.478652 containerd[1560]: time="2025-07-10T00:21:24.478640333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:21:24.478717 containerd[1560]: time="2025-07-10T00:21:24.478702599Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:21:24.478812 containerd[1560]: time="2025-07-10T00:21:24.478794552Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:24.478900 containerd[1560]: time="2025-07-10T00:21:24.478883509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:24.478949 containerd[1560]: time="2025-07-10T00:21:24.478937820Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:24.479022 containerd[1560]: time="2025-07-10T00:21:24.479008453Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:24.479068 containerd[1560]: time="2025-07-10T00:21:24.479056813Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:21:24.479115 containerd[1560]: time="2025-07-10T00:21:24.479104002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:21:24.479170 containerd[1560]: time="2025-07-10T00:21:24.479158234Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:21:24.479261 containerd[1560]: time="2025-07-10T00:21:24.479242582Z" level=info msg="runtime interface created" Jul 10 00:21:24.479327 containerd[1560]: time="2025-07-10T00:21:24.479312763Z" level=info msg="created NRI interface" Jul 10 00:21:24.479390 containerd[1560]: time="2025-07-10T00:21:24.479374519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:21:24.479476 containerd[1560]: time="2025-07-10T00:21:24.479460881Z" level=info msg="Connect containerd service" Jul 10 00:21:24.479581 containerd[1560]: time="2025-07-10T00:21:24.479563043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:21:24.483079 
containerd[1560]: time="2025-07-10T00:21:24.482580270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:21:24.670215 containerd[1560]: time="2025-07-10T00:21:24.670090622Z" level=info msg="Start subscribing containerd event" Jul 10 00:21:24.670215 containerd[1560]: time="2025-07-10T00:21:24.670170291Z" level=info msg="Start recovering state" Jul 10 00:21:24.670360 containerd[1560]: time="2025-07-10T00:21:24.670326765Z" level=info msg="Start event monitor" Jul 10 00:21:24.670360 containerd[1560]: time="2025-07-10T00:21:24.670347463Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:21:24.670360 containerd[1560]: time="2025-07-10T00:21:24.670354707Z" level=info msg="Start streaming server" Jul 10 00:21:24.670417 containerd[1560]: time="2025-07-10T00:21:24.670373642Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:21:24.670417 containerd[1560]: time="2025-07-10T00:21:24.670386446Z" level=info msg="runtime interface starting up..." Jul 10 00:21:24.670417 containerd[1560]: time="2025-07-10T00:21:24.670396355Z" level=info msg="starting plugins..." Jul 10 00:21:24.670417 containerd[1560]: time="2025-07-10T00:21:24.670412435Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:21:24.670499 containerd[1560]: time="2025-07-10T00:21:24.670433495Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:21:24.670552 containerd[1560]: time="2025-07-10T00:21:24.670522502Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:21:24.670962 systemd[1]: Started containerd.service - containerd container runtime. 
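The `failed to load cni during init` error above is the expected first-boot state: containerd scans /etc/cni/net.d and finds no network config until a CNI plugin installs one (typically later, by a Kubernetes network add-on). A minimal sketch of the kind of conflist it looks for, written to a scratch directory so nothing on the host is modified; the network name, bridge name, and subnet below are illustrative assumptions, not values from this log:

```shell
# Hypothetical example: the shape of config containerd expects to
# find in /etc/cni/net.d. Written to a temp dir as a stand-in;
# "example-net", "cni0", and the subnet are illustrative only.
conf_dir="$(mktemp -d)"
cat > "$conf_dir/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    }
  ]
}
EOF
ls "$conf_dir"
```

Once a file like this exists in the real conf dir, the cni conf syncer started later in the log ("Start cni network conf syncer for default") picks it up without a containerd restart.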
Jul 10 00:21:24.672233 containerd[1560]: time="2025-07-10T00:21:24.671149197Z" level=info msg="containerd successfully booted in 0.273710s" Jul 10 00:21:24.686989 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:21:24.716597 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:21:24.721331 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:21:24.727277 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Jul 10 00:21:24.751105 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:21:24.751530 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:21:24.758661 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:21:24.790002 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:21:24.793843 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:21:24.796482 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:21:24.797873 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:21:24.807011 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:24.809264 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:24.816035 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:21:24.818230 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:21:24.825555 systemd-logind[1542]: New session 1 of user core. Jul 10 00:21:24.848760 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:21:24.854146 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 10 00:21:24.855781 tar[1557]: linux-amd64/README.md Jul 10 00:21:24.872453 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:21:24.875565 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:21:24.877640 systemd-logind[1542]: New session c1 of user core. Jul 10 00:21:25.029339 systemd[1649]: Queued start job for default target default.target. Jul 10 00:21:25.052317 systemd[1649]: Created slice app.slice - User Application Slice. Jul 10 00:21:25.052346 systemd[1649]: Reached target paths.target - Paths. Jul 10 00:21:25.052387 systemd[1649]: Reached target timers.target - Timers. Jul 10 00:21:25.054096 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:21:25.067838 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:21:25.068067 systemd[1649]: Reached target sockets.target - Sockets. Jul 10 00:21:25.068128 systemd[1649]: Reached target basic.target - Basic System. Jul 10 00:21:25.068183 systemd[1649]: Reached target default.target - Main User Target. Jul 10 00:21:25.068230 systemd[1649]: Startup finished in 183ms. Jul 10 00:21:25.068730 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:21:25.072180 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:21:25.141703 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:58876.service - OpenSSH per-connection server daemon (10.0.0.1:58876). Jul 10 00:21:25.188224 systemd-networkd[1457]: eth0: Gained IPv6LL Jul 10 00:21:25.191776 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:21:25.193605 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:21:25.196568 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:21:25.199429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 00:21:25.205114 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:21:25.224753 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 58876 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:25.228098 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:25.240817 systemd-logind[1542]: New session 2 of user core. Jul 10 00:21:25.245780 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:21:25.264430 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:21:25.270087 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:21:25.270501 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:21:25.273096 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:21:25.337125 sshd[1683]: Connection closed by 10.0.0.1 port 58876 Jul 10 00:21:25.338650 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:25.350531 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:58876.service: Deactivated successfully. Jul 10 00:21:25.353089 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:21:25.354154 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:21:25.358419 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:58890.service - OpenSSH per-connection server daemon (10.0.0.1:58890). Jul 10 00:21:25.360697 systemd-logind[1542]: Removed session 2. Jul 10 00:21:25.429690 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 58890 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:25.431550 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:25.436622 systemd-logind[1542]: New session 3 of user core. 
Jul 10 00:21:25.450152 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:21:25.505198 sshd[1691]: Connection closed by 10.0.0.1 port 58890 Jul 10 00:21:25.505565 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:25.509468 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:58890.service: Deactivated successfully. Jul 10 00:21:25.511672 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:21:25.513400 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:21:25.515674 systemd-logind[1542]: Removed session 3. Jul 10 00:21:26.381869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:26.384099 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:21:26.385642 systemd[1]: Startup finished in 3.992s (kernel) + 10.140s (initrd) + 5.134s (userspace) = 19.267s. Jul 10 00:21:26.386491 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:26.912479 kubelet[1701]: E0710 00:21:26.912374 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:26.917057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:26.917309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:26.917694 systemd[1]: kubelet.service: Consumed 1.505s CPU time, 265M memory peak. Jul 10 00:21:35.522870 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:44184.service - OpenSSH per-connection server daemon (10.0.0.1:44184). 
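The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory", exit status 1, followed by a scheduled restart) is the normal crash-loop on a node where that config has not yet been generated; on kubeadm-managed nodes the file is written by `kubeadm init` or `kubeadm join`. A self-contained diagnostic sketch of the check, run against a scratch directory rather than the real path:

```shell
# Hypothetical check mirroring the kubelet error above. A scratch
# dir stands in for /var/lib/kubelet so the sketch is side-effect
# free; on this node the file is absent, matching the logged error.
state_dir="$(mktemp -d)"
kubelet_conf="$state_dir/config.yaml"
if [ -f "$kubelet_conf" ]; then
  status="present"
else
  status="missing"   # the state recorded in the log: crash-loop until kubeadm writes it
fi
echo "kubelet config: $status"
```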
Jul 10 00:21:35.580468 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 44184 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:35.582582 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:35.588176 systemd-logind[1542]: New session 4 of user core. Jul 10 00:21:35.598165 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:21:35.653879 sshd[1716]: Connection closed by 10.0.0.1 port 44184 Jul 10 00:21:35.654306 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:35.673539 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:44184.service: Deactivated successfully. Jul 10 00:21:35.675799 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:21:35.676681 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:21:35.679822 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:44190.service - OpenSSH per-connection server daemon (10.0.0.1:44190). Jul 10 00:21:35.680701 systemd-logind[1542]: Removed session 4. Jul 10 00:21:35.749593 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 44190 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:35.751219 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:35.755785 systemd-logind[1542]: New session 5 of user core. Jul 10 00:21:35.777137 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:21:35.825832 sshd[1725]: Connection closed by 10.0.0.1 port 44190 Jul 10 00:21:35.826177 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:35.841723 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:44190.service: Deactivated successfully. Jul 10 00:21:35.843511 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:21:35.844400 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. 
Jul 10 00:21:35.847516 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:44200.service - OpenSSH per-connection server daemon (10.0.0.1:44200). Jul 10 00:21:35.848277 systemd-logind[1542]: Removed session 5. Jul 10 00:21:35.904445 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 44200 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:35.905908 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:35.910535 systemd-logind[1542]: New session 6 of user core. Jul 10 00:21:35.924283 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:21:35.980019 sshd[1733]: Connection closed by 10.0.0.1 port 44200 Jul 10 00:21:35.980402 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:35.989749 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:44200.service: Deactivated successfully. Jul 10 00:21:35.991676 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:21:35.992377 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:21:35.995524 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:44202.service - OpenSSH per-connection server daemon (10.0.0.1:44202). Jul 10 00:21:35.996376 systemd-logind[1542]: Removed session 6. Jul 10 00:21:36.058721 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 44202 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:36.060335 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:36.065333 systemd-logind[1542]: New session 7 of user core. Jul 10 00:21:36.076127 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 10 00:21:36.137103 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:21:36.137431 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:36.154334 sudo[1742]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:36.156505 sshd[1741]: Connection closed by 10.0.0.1 port 44202 Jul 10 00:21:36.156928 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:36.175342 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:44202.service: Deactivated successfully. Jul 10 00:21:36.177864 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:21:36.178713 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:21:36.182465 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Jul 10 00:21:36.183138 systemd-logind[1542]: Removed session 7. Jul 10 00:21:36.242475 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:36.244615 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:36.249300 systemd-logind[1542]: New session 8 of user core. Jul 10 00:21:36.260097 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 00:21:36.314211 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:21:36.314538 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:36.421042 sudo[1752]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:36.428515 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:21:36.428888 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:36.440430 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:36.488943 augenrules[1774]: No rules Jul 10 00:21:36.491312 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:36.491676 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:36.492965 sudo[1751]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:36.494791 sshd[1750]: Connection closed by 10.0.0.1 port 44218 Jul 10 00:21:36.495202 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:36.514055 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:44218.service: Deactivated successfully. Jul 10 00:21:36.515899 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:21:36.516786 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:21:36.519756 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:44232.service - OpenSSH per-connection server daemon (10.0.0.1:44232). Jul 10 00:21:36.520444 systemd-logind[1542]: Removed session 8. Jul 10 00:21:36.583650 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 44232 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:21:36.585493 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:36.590560 systemd-logind[1542]: New session 9 of user core. 
Jul 10 00:21:36.601225 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:21:36.655605 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:21:36.655926 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:37.167635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:21:37.169390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:37.172712 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:21:37.190564 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:21:37.545549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:37.576830 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:37.623042 dockerd[1807]: time="2025-07-10T00:21:37.622942566Z" level=info msg="Starting up" Jul 10 00:21:37.623947 dockerd[1807]: time="2025-07-10T00:21:37.623915921Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:21:38.077352 kubelet[1820]: E0710 00:21:38.077266 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:38.084367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:38.084653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:38.085054 systemd[1]: kubelet.service: Consumed 348ms CPU time, 109.7M memory peak. 
Jul 10 00:21:39.224720 dockerd[1807]: time="2025-07-10T00:21:39.224623056Z" level=info msg="Loading containers: start."
Jul 10 00:21:39.287023 kernel: Initializing XFRM netlink socket
Jul 10 00:21:40.222089 systemd-networkd[1457]: docker0: Link UP
Jul 10 00:21:40.808887 dockerd[1807]: time="2025-07-10T00:21:40.808813834Z" level=info msg="Loading containers: done."
Jul 10 00:21:40.823466 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3420375042-merged.mount: Deactivated successfully.
Jul 10 00:21:41.188134 dockerd[1807]: time="2025-07-10T00:21:41.187946076Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:21:41.188134 dockerd[1807]: time="2025-07-10T00:21:41.188114182Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 10 00:21:41.188317 dockerd[1807]: time="2025-07-10T00:21:41.188302525Z" level=info msg="Initializing buildkit"
Jul 10 00:21:41.712173 dockerd[1807]: time="2025-07-10T00:21:41.712087922Z" level=info msg="Completed buildkit initialization"
Jul 10 00:21:41.720133 dockerd[1807]: time="2025-07-10T00:21:41.720040954Z" level=info msg="Daemon has completed initialization"
Jul 10 00:21:41.720299 dockerd[1807]: time="2025-07-10T00:21:41.720171078Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:21:41.720416 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 00:21:42.556736 containerd[1560]: time="2025-07-10T00:21:42.556637583Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 10 00:21:43.380670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482336261.mount: Deactivated successfully.
Jul 10 00:21:45.623367 containerd[1560]: time="2025-07-10T00:21:45.623243909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:45.624095 containerd[1560]: time="2025-07-10T00:21:45.624025324Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 10 00:21:45.625317 containerd[1560]: time="2025-07-10T00:21:45.625256382Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:45.627928 containerd[1560]: time="2025-07-10T00:21:45.627877216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:45.628699 containerd[1560]: time="2025-07-10T00:21:45.628667247Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.071943473s"
Jul 10 00:21:45.628758 containerd[1560]: time="2025-07-10T00:21:45.628704477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 10 00:21:45.629506 containerd[1560]: time="2025-07-10T00:21:45.629471926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 10 00:21:47.180376 containerd[1560]: time="2025-07-10T00:21:47.180274851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:47.183840 containerd[1560]: time="2025-07-10T00:21:47.183682010Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 10 00:21:47.185450 containerd[1560]: time="2025-07-10T00:21:47.185336993Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:47.188749 containerd[1560]: time="2025-07-10T00:21:47.188670533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:47.190655 containerd[1560]: time="2025-07-10T00:21:47.190597717Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.561096095s"
Jul 10 00:21:47.190655 containerd[1560]: time="2025-07-10T00:21:47.190639755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 10 00:21:47.191703 containerd[1560]: time="2025-07-10T00:21:47.191636264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 10 00:21:48.335528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:21:48.338349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:21:48.635330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:48.654589 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:21:48.759092 kubelet[2098]: E0710 00:21:48.758948 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:21:48.763527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:21:48.763779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:21:48.764370 systemd[1]: kubelet.service: Consumed 333ms CPU time, 111.2M memory peak.
Jul 10 00:21:49.744321 containerd[1560]: time="2025-07-10T00:21:49.744260945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:49.745189 containerd[1560]: time="2025-07-10T00:21:49.745142678Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 10 00:21:49.746615 containerd[1560]: time="2025-07-10T00:21:49.746560877Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:49.749608 containerd[1560]: time="2025-07-10T00:21:49.749539311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:49.750761 containerd[1560]: time="2025-07-10T00:21:49.750717921Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.559027025s"
Jul 10 00:21:49.750761 containerd[1560]: time="2025-07-10T00:21:49.750760391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 10 00:21:49.751391 containerd[1560]: time="2025-07-10T00:21:49.751360406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 10 00:21:51.705888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295573290.mount: Deactivated successfully.
Jul 10 00:21:52.094463 containerd[1560]: time="2025-07-10T00:21:52.094314084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:52.095361 containerd[1560]: time="2025-07-10T00:21:52.095290675Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 10 00:21:52.096898 containerd[1560]: time="2025-07-10T00:21:52.096842094Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:52.098769 containerd[1560]: time="2025-07-10T00:21:52.098739401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:52.099237 containerd[1560]: time="2025-07-10T00:21:52.099192210Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.347796588s"
Jul 10 00:21:52.099294 containerd[1560]: time="2025-07-10T00:21:52.099239258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 10 00:21:52.099799 containerd[1560]: time="2025-07-10T00:21:52.099775594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 10 00:21:52.676441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877735229.mount: Deactivated successfully.
Jul 10 00:21:53.999623 containerd[1560]: time="2025-07-10T00:21:53.999513362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:54.000563 containerd[1560]: time="2025-07-10T00:21:54.000497206Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 10 00:21:54.002552 containerd[1560]: time="2025-07-10T00:21:54.002433277Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:54.005539 containerd[1560]: time="2025-07-10T00:21:54.005468958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:54.006860 containerd[1560]: time="2025-07-10T00:21:54.006786529Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.906980848s"
Jul 10 00:21:54.006860 containerd[1560]: time="2025-07-10T00:21:54.006850148Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 10 00:21:54.007721 containerd[1560]: time="2025-07-10T00:21:54.007663664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:21:54.785198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374932522.mount: Deactivated successfully.
Jul 10 00:21:54.792695 containerd[1560]: time="2025-07-10T00:21:54.792644482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:21:54.793569 containerd[1560]: time="2025-07-10T00:21:54.793528740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 10 00:21:54.794784 containerd[1560]: time="2025-07-10T00:21:54.794735062Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:21:54.798798 containerd[1560]: time="2025-07-10T00:21:54.798734932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:21:54.799481 containerd[1560]: time="2025-07-10T00:21:54.799436858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 791.720636ms"
Jul 10 00:21:54.799481 containerd[1560]: time="2025-07-10T00:21:54.799467736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:21:54.800140 containerd[1560]: time="2025-07-10T00:21:54.800100803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 10 00:21:55.560068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216397226.mount: Deactivated successfully.
Jul 10 00:21:58.067596 containerd[1560]: time="2025-07-10T00:21:58.067541783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:58.068523 containerd[1560]: time="2025-07-10T00:21:58.068496900Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 10 00:21:58.069628 containerd[1560]: time="2025-07-10T00:21:58.069599360Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:58.072331 containerd[1560]: time="2025-07-10T00:21:58.072296729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:21:58.073304 containerd[1560]: time="2025-07-10T00:21:58.073282124Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.273148941s"
Jul 10 00:21:58.073357 containerd[1560]: time="2025-07-10T00:21:58.073308104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 10 00:21:58.788152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 00:21:58.791526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:21:59.875804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:21:59.895494 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:22:00.024950 kubelet[2263]: E0710 00:22:00.024858 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:22:00.030445 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:22:00.030748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:22:00.031192 systemd[1]: kubelet.service: Consumed 1.163s CPU time, 108.7M memory peak.
Jul 10 00:22:00.852984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:22:00.853220 systemd[1]: kubelet.service: Consumed 1.163s CPU time, 108.7M memory peak.
Jul 10 00:22:00.856044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:22:00.883497 systemd[1]: Reload requested from client PID 2278 ('systemctl') (unit session-9.scope)...
Jul 10 00:22:00.883524 systemd[1]: Reloading...
Jul 10 00:22:00.978018 zram_generator::config[2320]: No configuration found.
Jul 10 00:22:01.918562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:22:02.042288 systemd[1]: Reloading finished in 1158 ms.
Jul 10 00:22:02.110777 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:22:02.110881 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:22:02.111225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:22:02.111271 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.2M memory peak.
Jul 10 00:22:02.113010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:22:02.297107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:22:02.311382 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:22:02.350347 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:22:02.350347 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:22:02.350347 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:22:02.350779 kubelet[2368]: I0710 00:22:02.350401 2368 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:22:02.751270 kubelet[2368]: I0710 00:22:02.751135 2368 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:22:02.751270 kubelet[2368]: I0710 00:22:02.751180 2368 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:22:02.751487 kubelet[2368]: I0710 00:22:02.751454 2368 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:22:02.954310 kubelet[2368]: I0710 00:22:02.954236 2368 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:22:02.986891 kubelet[2368]: E0710 00:22:02.986826 2368 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 10 00:22:03.084100 kubelet[2368]: I0710 00:22:03.084026 2368 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:22:03.101926 kubelet[2368]: I0710 00:22:03.101837 2368 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:22:03.103674 kubelet[2368]: I0710 00:22:03.102432 2368 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:22:03.103674 kubelet[2368]: I0710 00:22:03.102479 2368 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:22:03.103674 kubelet[2368]: I0710 00:22:03.102738 2368 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:22:03.103674 kubelet[2368]: I0710 00:22:03.102751 2368 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:22:03.104930 kubelet[2368]: I0710 00:22:03.104825 2368 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:22:03.162830 kubelet[2368]: I0710 00:22:03.162702 2368 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:22:03.162830 kubelet[2368]: I0710 00:22:03.162782 2368 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:22:03.197865 kubelet[2368]: I0710 00:22:03.197274 2368 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:22:03.197865 kubelet[2368]: I0710 00:22:03.197361 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:22:03.203843 kubelet[2368]: E0710 00:22:03.203754 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 10 00:22:03.204819 kubelet[2368]: E0710 00:22:03.203677 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 10 00:22:03.231473 kubelet[2368]: I0710 00:22:03.230814 2368 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:22:03.232597 kubelet[2368]: I0710 00:22:03.232346 2368 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:22:03.233604 kubelet[2368]: W0710 00:22:03.233543 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:22:03.247550 kubelet[2368]: I0710 00:22:03.247493 2368 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:22:03.247705 kubelet[2368]: I0710 00:22:03.247602 2368 server.go:1289] "Started kubelet"
Jul 10 00:22:03.248036 kubelet[2368]: I0710 00:22:03.247901 2368 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:22:03.248036 kubelet[2368]: I0710 00:22:03.248002 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:22:03.249141 kubelet[2368]: I0710 00:22:03.248738 2368 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:22:03.249295 kubelet[2368]: I0710 00:22:03.249256 2368 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:22:03.284193 kubelet[2368]: I0710 00:22:03.284127 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:22:03.286184 kubelet[2368]: I0710 00:22:03.286135 2368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:22:03.292368 kubelet[2368]: I0710 00:22:03.290607 2368 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:22:03.292368 kubelet[2368]: E0710 00:22:03.292109 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.293961 kubelet[2368]: I0710 00:22:03.293905 2368 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:22:03.294537 kubelet[2368]: I0710 00:22:03.294147 2368 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:22:03.295951 kubelet[2368]: I0710 00:22:03.295902 2368 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:22:03.296130 kubelet[2368]: I0710 00:22:03.296084 2368 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:22:03.296363 kubelet[2368]: E0710 00:22:03.296321 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms"
Jul 10 00:22:03.296710 kubelet[2368]: E0710 00:22:03.296663 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 10 00:22:03.620849 kubelet[2368]: E0710 00:22:03.299298 2368 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.299510 2368 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:22:03.620849 kubelet[2368]: E0710 00:22:03.342767 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bbf6b0ac152b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:22:03.247539499 +0000 UTC m=+0.931238330,LastTimestamp:2025-07-10 00:22:03.247539499 +0000 UTC m=+0.931238330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.358595 2368 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.358681 2368 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.358767 2368 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.360627 2368 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.363160 2368 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.363201 2368 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.363238 2368 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:22:03.620849 kubelet[2368]: I0710 00:22:03.363273 2368 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.363341 2368 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.363998 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.393316 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.463507 2368 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.493922 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.498145 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms"
Jul 10 00:22:03.621935 kubelet[2368]: E0710 00:22:03.594487 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.664024 kubelet[2368]: E0710 00:22:03.663922 2368 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 00:22:03.695547 kubelet[2368]: E0710 00:22:03.695439 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.795668 kubelet[2368]: E0710 00:22:03.795588 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.896445 kubelet[2368]: E0710 00:22:03.896268 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:03.899200 kubelet[2368]: E0710 00:22:03.899155 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms"
Jul 10 00:22:03.996528 kubelet[2368]: E0710 00:22:03.996421 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:04.065120 kubelet[2368]: E0710 00:22:04.064928 2368 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 00:22:04.098363 kubelet[2368]: E0710 00:22:04.097083 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:04.198697 kubelet[2368]: E0710 00:22:04.198446 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:04.269609 kubelet[2368]: E0710 00:22:04.269343 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 10 00:22:04.282096 kubelet[2368]: E0710 00:22:04.281630 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 10 00:22:04.300139 kubelet[2368]: E0710 00:22:04.299420 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:04.399960 kubelet[2368]: E0710 00:22:04.399826 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:04.501175 kubelet[2368]: E0710 00:22:04.500906 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:22:04.509621 kubelet[2368]: I0710 00:22:04.509526 2368 policy_none.go:49] "None policy: Start"
Jul 10 00:22:04.509621 kubelet[2368]: I0710 00:22:04.509601 2368 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:22:04.509621 kubelet[2368]: I0710 00:22:04.509627 2368 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:22:04.519911 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 10 00:22:04.533125 kubelet[2368]: E0710 00:22:04.533030 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 10 00:22:04.541735 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 10 00:22:04.546030 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 10 00:22:04.554018 kubelet[2368]: E0710 00:22:04.553966 2368 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 10 00:22:04.554292 kubelet[2368]: I0710 00:22:04.554269 2368 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:22:04.554356 kubelet[2368]: I0710 00:22:04.554287 2368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:22:04.554633 kubelet[2368]: I0710 00:22:04.554587 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:22:04.555523 kubelet[2368]: E0710 00:22:04.555498 2368 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:22:04.555578 kubelet[2368]: E0710 00:22:04.555564 2368 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 10 00:22:04.633603 kubelet[2368]: E0710 00:22:04.633542 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 10 00:22:04.656745 kubelet[2368]: I0710 00:22:04.656691 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:22:04.657246 kubelet[2368]: E0710 00:22:04.657195 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Jul 10 00:22:04.700385 kubelet[2368]: E0710 00:22:04.700319 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="1.6s"
Jul 10 00:22:04.859220 kubelet[2368]: I0710 00:22:04.859144 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:22:04.859768 kubelet[2368]: E0710 00:22:04.859707 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Jul 10 00:22:04.878723 systemd[1]: Created slice kubepods-burstable-podff2cc8d58a77f5d1db783fedb235b3a1.slice - libcontainer container kubepods-burstable-podff2cc8d58a77f5d1db783fedb235b3a1.slice.
Jul 10 00:22:04.900325 kubelet[2368]: E0710 00:22:04.900253 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:22:04.903665 kubelet[2368]: I0710 00:22:04.903519 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:04.903665 kubelet[2368]: I0710 00:22:04.903567 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:22:04.903665 kubelet[2368]: I0710 00:22:04.903588 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff2cc8d58a77f5d1db783fedb235b3a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff2cc8d58a77f5d1db783fedb235b3a1\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:04.903665 kubelet[2368]: I0710 00:22:04.903611 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff2cc8d58a77f5d1db783fedb235b3a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff2cc8d58a77f5d1db783fedb235b3a1\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:04.904170 kubelet[2368]: I0710 00:22:04.903700 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:04.904170 kubelet[2368]: I0710 00:22:04.903775 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:04.904170 kubelet[2368]: I0710 00:22:04.903816 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:04.904170 kubelet[2368]: I0710 00:22:04.903842 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff2cc8d58a77f5d1db783fedb235b3a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ff2cc8d58a77f5d1db783fedb235b3a1\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:04.904170 kubelet[2368]: I0710 00:22:04.903889 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:04.903907 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 10 00:22:04.926129 kubelet[2368]: E0710 00:22:04.926080 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:22:04.929188 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 10 00:22:04.931323 kubelet[2368]: E0710 00:22:04.931274 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:22:05.169765 kubelet[2368]: E0710 00:22:05.169602 2368 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 10 00:22:05.201471 kubelet[2368]: E0710 00:22:05.201429 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:05.202331 containerd[1560]: time="2025-07-10T00:22:05.202257241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ff2cc8d58a77f5d1db783fedb235b3a1,Namespace:kube-system,Attempt:0,}"
Jul 10 00:22:05.227073 kubelet[2368]: E0710 00:22:05.227001 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:05.227957 containerd[1560]: time="2025-07-10T00:22:05.227861489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 10 00:22:05.229088 containerd[1560]: time="2025-07-10T00:22:05.228953581Z" level=info msg="connecting to shim c495148f004c3d73858b2061f943bd410a222d242cb31d2bebc01eb1baa21f4d" address="unix:///run/containerd/s/c90452ce6e2a028455a1829d08214bc0e55a1c1120cf8c87cba93b8c2b359107" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:22:05.232462 kubelet[2368]: E0710 00:22:05.232423 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:05.233115 containerd[1560]: time="2025-07-10T00:22:05.232843013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 10 00:22:05.260913 containerd[1560]: time="2025-07-10T00:22:05.260812951Z" level=info msg="connecting to shim 88c10de35e073edc4860313b31fd8dce668b061d892c51af02d832061c099753" address="unix:///run/containerd/s/18639365f85dafe5758d60d9624240e571a244fc6b78eef552bb7a00e198dca7" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:22:05.263200 systemd[1]: Started cri-containerd-c495148f004c3d73858b2061f943bd410a222d242cb31d2bebc01eb1baa21f4d.scope - libcontainer container c495148f004c3d73858b2061f943bd410a222d242cb31d2bebc01eb1baa21f4d.
Jul 10 00:22:05.266191 kubelet[2368]: I0710 00:22:05.266157 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:22:05.267216 kubelet[2368]: E0710 00:22:05.266932 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Jul 10 00:22:05.276466 containerd[1560]: time="2025-07-10T00:22:05.276123721Z" level=info msg="connecting to shim fcc1e273772c3bf25e86ed62b29c3fd3774915addfda320fd3239707fd8629d5" address="unix:///run/containerd/s/7030a2ff9f292ae2268bf58496ec78c8df41f233c3d069174da56e5773c917c8" namespace=k8s.io protocol=ttrpc version=3
Jul 10 00:22:05.304263 systemd[1]: Started cri-containerd-88c10de35e073edc4860313b31fd8dce668b061d892c51af02d832061c099753.scope - libcontainer container 88c10de35e073edc4860313b31fd8dce668b061d892c51af02d832061c099753.
Jul 10 00:22:05.311325 systemd[1]: Started cri-containerd-fcc1e273772c3bf25e86ed62b29c3fd3774915addfda320fd3239707fd8629d5.scope - libcontainer container fcc1e273772c3bf25e86ed62b29c3fd3774915addfda320fd3239707fd8629d5.
Jul 10 00:22:05.325586 containerd[1560]: time="2025-07-10T00:22:05.325495567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ff2cc8d58a77f5d1db783fedb235b3a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c495148f004c3d73858b2061f943bd410a222d242cb31d2bebc01eb1baa21f4d\""
Jul 10 00:22:05.327579 kubelet[2368]: E0710 00:22:05.327449 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:05.332718 containerd[1560]: time="2025-07-10T00:22:05.332668388Z" level=info msg="CreateContainer within sandbox \"c495148f004c3d73858b2061f943bd410a222d242cb31d2bebc01eb1baa21f4d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 00:22:05.344048 containerd[1560]: time="2025-07-10T00:22:05.343873294Z" level=info msg="Container fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:05.352337 containerd[1560]: time="2025-07-10T00:22:05.352295585Z" level=info msg="CreateContainer within sandbox \"c495148f004c3d73858b2061f943bd410a222d242cb31d2bebc01eb1baa21f4d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f\""
Jul 10 00:22:05.353171 containerd[1560]: time="2025-07-10T00:22:05.353121369Z" level=info msg="StartContainer for \"fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f\""
Jul 10 00:22:05.354241 containerd[1560]: time="2025-07-10T00:22:05.354208572Z" level=info msg="connecting to shim fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f" address="unix:///run/containerd/s/c90452ce6e2a028455a1829d08214bc0e55a1c1120cf8c87cba93b8c2b359107" protocol=ttrpc version=3
Jul 10 00:22:05.354710 containerd[1560]: time="2025-07-10T00:22:05.354617671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"88c10de35e073edc4860313b31fd8dce668b061d892c51af02d832061c099753\""
Jul 10 00:22:05.356199 kubelet[2368]: E0710 00:22:05.356174 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:05.363242 containerd[1560]: time="2025-07-10T00:22:05.363192935Z" level=info msg="CreateContainer within sandbox \"88c10de35e073edc4860313b31fd8dce668b061d892c51af02d832061c099753\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 00:22:05.415004 containerd[1560]: time="2025-07-10T00:22:05.414907165Z" level=info msg="Container c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:05.420698 containerd[1560]: time="2025-07-10T00:22:05.420555190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc1e273772c3bf25e86ed62b29c3fd3774915addfda320fd3239707fd8629d5\""
Jul 10 00:22:05.421855 kubelet[2368]: E0710 00:22:05.421817 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:05.424239 systemd[1]: Started cri-containerd-fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f.scope - libcontainer container fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f.
Jul 10 00:22:05.427510 containerd[1560]: time="2025-07-10T00:22:05.427471271Z" level=info msg="CreateContainer within sandbox \"fcc1e273772c3bf25e86ed62b29c3fd3774915addfda320fd3239707fd8629d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 00:22:05.428763 containerd[1560]: time="2025-07-10T00:22:05.428710773Z" level=info msg="CreateContainer within sandbox \"88c10de35e073edc4860313b31fd8dce668b061d892c51af02d832061c099753\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0\""
Jul 10 00:22:05.430140 containerd[1560]: time="2025-07-10T00:22:05.430116082Z" level=info msg="StartContainer for \"c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0\""
Jul 10 00:22:05.431512 containerd[1560]: time="2025-07-10T00:22:05.431350103Z" level=info msg="connecting to shim c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0" address="unix:///run/containerd/s/18639365f85dafe5758d60d9624240e571a244fc6b78eef552bb7a00e198dca7" protocol=ttrpc version=3
Jul 10 00:22:05.441255 containerd[1560]: time="2025-07-10T00:22:05.441198413Z" level=info msg="Container 3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:05.451669 containerd[1560]: time="2025-07-10T00:22:05.451597923Z" level=info msg="CreateContainer within sandbox \"fcc1e273772c3bf25e86ed62b29c3fd3774915addfda320fd3239707fd8629d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248\""
Jul 10 00:22:05.452308 containerd[1560]: time="2025-07-10T00:22:05.452231932Z" level=info msg="StartContainer for \"3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248\""
Jul 10 00:22:05.453317 containerd[1560]: time="2025-07-10T00:22:05.453288135Z" level=info msg="connecting to shim 3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248" address="unix:///run/containerd/s/7030a2ff9f292ae2268bf58496ec78c8df41f233c3d069174da56e5773c917c8" protocol=ttrpc version=3
Jul 10 00:22:05.455202 systemd[1]: Started cri-containerd-c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0.scope - libcontainer container c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0.
Jul 10 00:22:05.495180 systemd[1]: Started cri-containerd-3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248.scope - libcontainer container 3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248.
Jul 10 00:22:05.577402 containerd[1560]: time="2025-07-10T00:22:05.577238649Z" level=info msg="StartContainer for \"fe90f68fd104b78af2a8cbf3311f6c9d6a503db7304d79d71c89cc900d12af2f\" returns successfully"
Jul 10 00:22:05.578258 containerd[1560]: time="2025-07-10T00:22:05.578008536Z" level=info msg="StartContainer for \"3c8d5e93795f1831e25c77dfe4af557c31120830656fab506172b0908007f248\" returns successfully"
Jul 10 00:22:05.578375 containerd[1560]: time="2025-07-10T00:22:05.578107094Z" level=info msg="StartContainer for \"c98fd22239f06998cfeb56b04960694bb74710a2893d0f27631ddd4a5e40a5d0\" returns successfully"
Jul 10 00:22:06.068883 kubelet[2368]: I0710 00:22:06.068832 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:22:06.619617 kubelet[2368]: E0710 00:22:06.619583 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:22:06.620083 kubelet[2368]: E0710 00:22:06.619726 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:06.620784 kubelet[2368]: E0710 00:22:06.620761 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:22:06.620875 kubelet[2368]: E0710 00:22:06.620860 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:06.622393 kubelet[2368]: E0710 00:22:06.622376 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:22:06.622487 kubelet[2368]: E0710 00:22:06.622468 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:07.352438 kubelet[2368]: E0710 00:22:07.352387 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 10 00:22:07.587289 kubelet[2368]: I0710 00:22:07.587213 2368 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 10 00:22:07.587289 kubelet[2368]: E0710 00:22:07.587258 2368 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 10 00:22:07.594136 kubelet[2368]: I0710 00:22:07.594074 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:07.596120 kubelet[2368]: E0710 00:22:07.595993 2368 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1850bbf6b0ac152b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:22:03.247539499 +0000 UTC m=+0.931238330,LastTimestamp:2025-07-10 00:22:03.247539499 +0000 UTC m=+0.931238330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:22:07.605652 kubelet[2368]: E0710 00:22:07.605501 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:07.605652 kubelet[2368]: I0710 00:22:07.605542 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:07.607394 kubelet[2368]: E0710 00:22:07.607167 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:07.607394 kubelet[2368]: I0710 00:22:07.607197 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:22:07.609340 kubelet[2368]: E0710 00:22:07.609301 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:22:07.624779 kubelet[2368]: I0710 00:22:07.624731 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:07.624779 kubelet[2368]: I0710 00:22:07.624791 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:07.625141 kubelet[2368]: I0710 00:22:07.624814 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:22:07.626999 kubelet[2368]: E0710 00:22:07.626874 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:22:07.627182 kubelet[2368]: E0710 00:22:07.627126 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:07.627344 kubelet[2368]: E0710 00:22:07.627300 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:07.627344 kubelet[2368]: E0710 00:22:07.627328 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:22:07.627539 kubelet[2368]: E0710 00:22:07.627428 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:07.627539 kubelet[2368]: E0710 00:22:07.627513 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:08.211483 kubelet[2368]: I0710 00:22:08.210952 2368 apiserver.go:52] "Watching apiserver"
Jul 10 00:22:08.294831 kubelet[2368]: I0710 00:22:08.294666 2368 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:22:08.626645 kubelet[2368]: I0710 00:22:08.626599 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:22:08.631704 kubelet[2368]: E0710 00:22:08.631613 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:09.121898 update_engine[1553]: I20250710 00:22:09.121686 1553 update_attempter.cc:509] Updating boot flags...
Jul 10 00:22:09.628489 kubelet[2368]: E0710 00:22:09.628450 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:11.482490 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-9.scope)...
Jul 10 00:22:11.482514 systemd[1]: Reloading...
Jul 10 00:22:11.591087 zram_generator::config[2711]: No configuration found.
Jul 10 00:22:11.679250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:22:11.815823 systemd[1]: Reloading finished in 332 ms.
Jul 10 00:22:11.843854 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:22:11.869996 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:22:11.870441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:22:11.870510 systemd[1]: kubelet.service: Consumed 1.333s CPU time, 134.6M memory peak.
Jul 10 00:22:11.872892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:22:12.125931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:22:12.139343 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:22:12.184293 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:22:12.184293 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:22:12.184293 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:22:12.184745 kubelet[2756]: I0710 00:22:12.184354 2756 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:22:12.192399 kubelet[2756]: I0710 00:22:12.192347 2756 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:22:12.192399 kubelet[2756]: I0710 00:22:12.192379 2756 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:22:12.192676 kubelet[2756]: I0710 00:22:12.192647 2756 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:22:12.195047 kubelet[2756]: I0710 00:22:12.194992 2756 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 10 00:22:12.198046 kubelet[2756]: I0710 00:22:12.198006 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:22:12.202006 kubelet[2756]: I0710 00:22:12.201948 2756 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 10 00:22:12.248567 kubelet[2756]: I0710 00:22:12.248522 2756 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:22:12.248799 kubelet[2756]: I0710 00:22:12.248763 2756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:22:12.248955 kubelet[2756]: I0710 00:22:12.248792 2756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:22:12.248955 kubelet[2756]: I0710 00:22:12.248951 2756 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:22:12.249123 kubelet[2756]: I0710 00:22:12.248962 2756 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:22:12.249123 kubelet[2756]: I0710 00:22:12.249055 2756 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:22:12.249258 kubelet[2756]: I0710 00:22:12.249240 2756 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:22:12.249258 kubelet[2756]: I0710 00:22:12.249256 2756 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:22:12.249306 kubelet[2756]: I0710 00:22:12.249281 2756 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:22:12.249331 kubelet[2756]: I0710 00:22:12.249306 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:22:12.251695 kubelet[2756]: I0710 00:22:12.251574 2756 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 10 00:22:12.252120 kubelet[2756]: I0710 00:22:12.252088 2756 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:22:12.262003 kubelet[2756]: I0710 00:22:12.261941 2756 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:22:12.262151 kubelet[2756]: I0710 00:22:12.262024 2756 server.go:1289] "Started kubelet"
Jul 10 00:22:12.264631 kubelet[2756]: I0710 00:22:12.264433 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:22:12.264789 kubelet[2756]: I0710 00:22:12.264760 2756 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:22:12.264881 kubelet[2756]: I0710 00:22:12.264824 2756 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:22:12.265721 kubelet[2756]: I0710 00:22:12.265698 2756 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:22:12.267535 kubelet[2756]: I0710 00:22:12.266516 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:22:12.267535 kubelet[2756]: I0710 00:22:12.266670 2756 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:22:12.267535 kubelet[2756]: I0710 00:22:12.266792 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:22:12.270221 kubelet[2756]: I0710 00:22:12.269720 2756 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:22:12.270221 kubelet[2756]: I0710 00:22:12.270104 2756 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:22:12.271330 kubelet[2756]: I0710 00:22:12.271203 2756 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:22:12.271373 kubelet[2756]: I0710 00:22:12.271329 2756 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:22:12.271667 kubelet[2756]: E0710 00:22:12.271637 2756 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:22:12.273406 kubelet[2756]: I0710 00:22:12.272672 2756 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:22:12.280731 kubelet[2756]: I0710 00:22:12.280680 2756 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:22:12.282144 kubelet[2756]: I0710 00:22:12.282117 2756 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6" Jul 10 00:22:12.282144 kubelet[2756]: I0710 00:22:12.282135 2756 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:22:12.282220 kubelet[2756]: I0710 00:22:12.282156 2756 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:22:12.282220 kubelet[2756]: I0710 00:22:12.282165 2756 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:22:12.282220 kubelet[2756]: E0710 00:22:12.282204 2756 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:22:12.315914 kubelet[2756]: I0710 00:22:12.315874 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:22:12.315914 kubelet[2756]: I0710 00:22:12.315898 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:22:12.315914 kubelet[2756]: I0710 00:22:12.315919 2756 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:22:12.316159 kubelet[2756]: I0710 00:22:12.316099 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:22:12.316159 kubelet[2756]: I0710 00:22:12.316113 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:22:12.316159 kubelet[2756]: I0710 00:22:12.316133 2756 policy_none.go:49] "None policy: Start" Jul 10 00:22:12.316159 kubelet[2756]: I0710 00:22:12.316147 2756 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:22:12.316159 kubelet[2756]: I0710 00:22:12.316159 2756 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:22:12.316297 kubelet[2756]: I0710 00:22:12.316257 2756 state_mem.go:75] "Updated machine memory state" Jul 10 00:22:12.320920 kubelet[2756]: E0710 00:22:12.320715 2756 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:22:12.320920 kubelet[2756]: I0710 
00:22:12.320886 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:22:12.320920 kubelet[2756]: I0710 00:22:12.320897 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:22:12.321398 kubelet[2756]: I0710 00:22:12.321214 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:22:12.322498 kubelet[2756]: E0710 00:22:12.322288 2756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:22:12.384207 kubelet[2756]: I0710 00:22:12.383652 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:22:12.384207 kubelet[2756]: I0710 00:22:12.383666 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:12.384207 kubelet[2756]: I0710 00:22:12.383879 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:22:12.427600 kubelet[2756]: I0710 00:22:12.427548 2756 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:22:12.503611 kubelet[2756]: E0710 00:22:12.503522 2756 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:12.531246 kubelet[2756]: I0710 00:22:12.531174 2756 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 00:22:12.531456 kubelet[2756]: I0710 00:22:12.531273 2756 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:22:12.571710 kubelet[2756]: I0710 00:22:12.571645 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff2cc8d58a77f5d1db783fedb235b3a1-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"ff2cc8d58a77f5d1db783fedb235b3a1\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:12.571710 kubelet[2756]: I0710 00:22:12.571698 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff2cc8d58a77f5d1db783fedb235b3a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ff2cc8d58a77f5d1db783fedb235b3a1\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:12.571888 kubelet[2756]: I0710 00:22:12.571729 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:22:12.571888 kubelet[2756]: I0710 00:22:12.571746 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:22:12.571888 kubelet[2756]: I0710 00:22:12.571812 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:22:12.572018 kubelet[2756]: I0710 00:22:12.571887 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff2cc8d58a77f5d1db783fedb235b3a1-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"ff2cc8d58a77f5d1db783fedb235b3a1\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:12.572018 kubelet[2756]: I0710 00:22:12.571914 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:22:12.572018 kubelet[2756]: I0710 00:22:12.571930 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:22:12.572018 kubelet[2756]: I0710 00:22:12.571951 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:22:12.775416 kubelet[2756]: E0710 00:22:12.775261 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:12.776326 kubelet[2756]: E0710 00:22:12.776293 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:12.804663 kubelet[2756]: E0710 00:22:12.804614 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:13.250966 kubelet[2756]: I0710 00:22:13.250893 2756 apiserver.go:52] "Watching apiserver" Jul 10 00:22:13.270055 kubelet[2756]: I0710 00:22:13.269958 2756 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:22:13.296815 kubelet[2756]: E0710 00:22:13.296766 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:13.297423 kubelet[2756]: I0710 00:22:13.297105 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:13.297423 kubelet[2756]: E0710 00:22:13.297272 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:13.720573 kubelet[2756]: E0710 00:22:13.720316 2756 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:22:13.720573 kubelet[2756]: E0710 00:22:13.720528 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:13.730823 kubelet[2756]: I0710 00:22:13.730704 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.73067158 podStartE2EDuration="1.73067158s" podCreationTimestamp="2025-07-10 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:13.730436546 +0000 UTC m=+1.586609869" watchObservedRunningTime="2025-07-10 00:22:13.73067158 +0000 UTC m=+1.586844903" Jul 10 00:22:13.731073 kubelet[2756]: 
I0710 00:22:13.730837 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.730832886 podStartE2EDuration="5.730832886s" podCreationTimestamp="2025-07-10 00:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:13.720640508 +0000 UTC m=+1.576813851" watchObservedRunningTime="2025-07-10 00:22:13.730832886 +0000 UTC m=+1.587006209" Jul 10 00:22:13.756454 kubelet[2756]: I0710 00:22:13.756325 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.756300296 podStartE2EDuration="1.756300296s" podCreationTimestamp="2025-07-10 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:13.740692264 +0000 UTC m=+1.596865587" watchObservedRunningTime="2025-07-10 00:22:13.756300296 +0000 UTC m=+1.612473619" Jul 10 00:22:13.757142 sudo[2801]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:22:13.758220 sudo[2801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:22:14.297833 kubelet[2756]: E0710 00:22:14.297799 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:14.298313 kubelet[2756]: E0710 00:22:14.298066 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:14.328922 sudo[2801]: pam_unix(sudo:session): session closed for user root Jul 10 00:22:15.298935 kubelet[2756]: E0710 00:22:15.298891 2756 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:15.822354 kubelet[2756]: I0710 00:22:15.822313 2756 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:22:15.822778 containerd[1560]: time="2025-07-10T00:22:15.822723588Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:22:15.823138 kubelet[2756]: I0710 00:22:15.823073 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:22:15.864580 sudo[1786]: pam_unix(sudo:session): session closed for user root Jul 10 00:22:15.866292 sshd[1785]: Connection closed by 10.0.0.1 port 44232 Jul 10 00:22:15.867044 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:15.871460 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:44232.service: Deactivated successfully. Jul 10 00:22:15.873902 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:22:15.874185 systemd[1]: session-9.scope: Consumed 5.360s CPU time, 256.9M memory peak. Jul 10 00:22:15.875663 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:22:15.877429 systemd-logind[1542]: Removed session 9. Jul 10 00:22:16.383569 kubelet[2756]: E0710 00:22:16.383521 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:17.545079 systemd[1]: Created slice kubepods-besteffort-pod31b600fa_1c48_4325_bf90_473758b785ef.slice - libcontainer container kubepods-besteffort-pod31b600fa_1c48_4325_bf90_473758b785ef.slice. Jul 10 00:22:17.565023 systemd[1]: Created slice kubepods-burstable-pod9d4d2990_6062_4444_80d8_5af38105da5f.slice - libcontainer container kubepods-burstable-pod9d4d2990_6062_4444_80d8_5af38105da5f.slice. 
Jul 10 00:22:17.577773 systemd[1]: Created slice kubepods-besteffort-podcb9d067f_5aec_443c_8bc7_ddee6cd6eb8d.slice - libcontainer container kubepods-besteffort-podcb9d067f_5aec_443c_8bc7_ddee6cd6eb8d.slice. Jul 10 00:22:17.603171 kubelet[2756]: I0710 00:22:17.603113 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-cgroup\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603171 kubelet[2756]: I0710 00:22:17.603162 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7jx\" (UniqueName: \"kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-kube-api-access-dc7jx\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603171 kubelet[2756]: I0710 00:22:17.603182 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-hubble-tls\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603773 kubelet[2756]: I0710 00:22:17.603248 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-run\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603773 kubelet[2756]: I0710 00:22:17.603265 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-bpf-maps\") pod \"cilium-dvmkb\" (UID: 
\"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603773 kubelet[2756]: I0710 00:22:17.603280 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31b600fa-1c48-4325-bf90-473758b785ef-kube-proxy\") pod \"kube-proxy-7rfm7\" (UID: \"31b600fa-1c48-4325-bf90-473758b785ef\") " pod="kube-system/kube-proxy-7rfm7" Jul 10 00:22:17.603773 kubelet[2756]: I0710 00:22:17.603323 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-kernel\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603773 kubelet[2756]: I0710 00:22:17.603341 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-hostproc\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603773 kubelet[2756]: I0710 00:22:17.603386 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31b600fa-1c48-4325-bf90-473758b785ef-xtables-lock\") pod \"kube-proxy-7rfm7\" (UID: \"31b600fa-1c48-4325-bf90-473758b785ef\") " pod="kube-system/kube-proxy-7rfm7" Jul 10 00:22:17.603913 kubelet[2756]: I0710 00:22:17.603427 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-lib-modules\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603913 kubelet[2756]: I0710 
00:22:17.603442 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-xtables-lock\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603913 kubelet[2756]: I0710 00:22:17.603470 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6bh\" (UniqueName: \"kubernetes.io/projected/31b600fa-1c48-4325-bf90-473758b785ef-kube-api-access-gh6bh\") pod \"kube-proxy-7rfm7\" (UID: \"31b600fa-1c48-4325-bf90-473758b785ef\") " pod="kube-system/kube-proxy-7rfm7" Jul 10 00:22:17.603913 kubelet[2756]: I0710 00:22:17.603493 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-etc-cni-netd\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.603913 kubelet[2756]: I0710 00:22:17.603509 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d4d2990-6062-4444-80d8-5af38105da5f-clustermesh-secrets\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.604118 kubelet[2756]: I0710 00:22:17.603539 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lj59\" (UniqueName: \"kubernetes.io/projected/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-kube-api-access-5lj59\") pod \"cilium-operator-6c4d7847fc-bcxsd\" (UID: \"cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d\") " pod="kube-system/cilium-operator-6c4d7847fc-bcxsd" Jul 10 00:22:17.604118 kubelet[2756]: I0710 00:22:17.603564 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31b600fa-1c48-4325-bf90-473758b785ef-lib-modules\") pod \"kube-proxy-7rfm7\" (UID: \"31b600fa-1c48-4325-bf90-473758b785ef\") " pod="kube-system/kube-proxy-7rfm7" Jul 10 00:22:17.604118 kubelet[2756]: I0710 00:22:17.603577 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cni-path\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.604118 kubelet[2756]: I0710 00:22:17.603595 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-config-path\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.604118 kubelet[2756]: I0710 00:22:17.603611 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-net\") pod \"cilium-dvmkb\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " pod="kube-system/cilium-dvmkb" Jul 10 00:22:17.604257 kubelet[2756]: I0710 00:22:17.603632 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bcxsd\" (UID: \"cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d\") " pod="kube-system/cilium-operator-6c4d7847fc-bcxsd" Jul 10 00:22:17.856025 kubelet[2756]: E0710 00:22:17.855831 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:17.856934 containerd[1560]: time="2025-07-10T00:22:17.856861113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rfm7,Uid:31b600fa-1c48-4325-bf90-473758b785ef,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:17.871579 kubelet[2756]: E0710 00:22:17.871515 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:17.872288 containerd[1560]: time="2025-07-10T00:22:17.872109822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvmkb,Uid:9d4d2990-6062-4444-80d8-5af38105da5f,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:17.883709 kubelet[2756]: E0710 00:22:17.883671 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:17.884342 containerd[1560]: time="2025-07-10T00:22:17.884292578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcxsd,Uid:cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:17.901859 containerd[1560]: time="2025-07-10T00:22:17.901587692Z" level=info msg="connecting to shim 8d9580dd01dda00bebdf9983c68f682c64c029b8b2621d085c01f1ccafd025fe" address="unix:///run/containerd/s/bffbf22d677ea88e1f214a30492a6467f1bb0bb8e354100b8476bd74837273f9" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:17.911331 containerd[1560]: time="2025-07-10T00:22:17.911279724Z" level=info msg="connecting to shim f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865" address="unix:///run/containerd/s/088dad9fa41abb8b719c8e5cb256f1dc0b6acb89fdea72305d957036722c389d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:17.919117 containerd[1560]: time="2025-07-10T00:22:17.919076052Z" level=info 
msg="connecting to shim 3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4" address="unix:///run/containerd/s/a0f0dc71b95a38260cf6f669a37366deea68810c7861b24ceafb896f087d6beb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:17.963127 systemd[1]: Started cri-containerd-8d9580dd01dda00bebdf9983c68f682c64c029b8b2621d085c01f1ccafd025fe.scope - libcontainer container 8d9580dd01dda00bebdf9983c68f682c64c029b8b2621d085c01f1ccafd025fe. Jul 10 00:22:17.970410 systemd[1]: Started cri-containerd-3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4.scope - libcontainer container 3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4. Jul 10 00:22:17.973535 systemd[1]: Started cri-containerd-f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865.scope - libcontainer container f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865. Jul 10 00:22:18.005879 kubelet[2756]: E0710 00:22:18.005655 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.081894 containerd[1560]: time="2025-07-10T00:22:18.081832905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7rfm7,Uid:31b600fa-1c48-4325-bf90-473758b785ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9580dd01dda00bebdf9983c68f682c64c029b8b2621d085c01f1ccafd025fe\"" Jul 10 00:22:18.084252 kubelet[2756]: E0710 00:22:18.084229 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.092659 containerd[1560]: time="2025-07-10T00:22:18.092619405Z" level=info msg="CreateContainer within sandbox \"8d9580dd01dda00bebdf9983c68f682c64c029b8b2621d085c01f1ccafd025fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:22:18.095104 containerd[1560]: 
time="2025-07-10T00:22:18.095075983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvmkb,Uid:9d4d2990-6062-4444-80d8-5af38105da5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\"" Jul 10 00:22:18.095835 kubelet[2756]: E0710 00:22:18.095801 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.098902 containerd[1560]: time="2025-07-10T00:22:18.098735052Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:22:18.109512 containerd[1560]: time="2025-07-10T00:22:18.109415982Z" level=info msg="Container 5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:18.118357 containerd[1560]: time="2025-07-10T00:22:18.118302102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcxsd,Uid:cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\"" Jul 10 00:22:18.119242 kubelet[2756]: E0710 00:22:18.119203 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.123305 containerd[1560]: time="2025-07-10T00:22:18.123246709Z" level=info msg="CreateContainer within sandbox \"8d9580dd01dda00bebdf9983c68f682c64c029b8b2621d085c01f1ccafd025fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338\"" Jul 10 00:22:18.123826 containerd[1560]: time="2025-07-10T00:22:18.123788412Z" level=info msg="StartContainer for 
\"5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338\"" Jul 10 00:22:18.129346 containerd[1560]: time="2025-07-10T00:22:18.128922046Z" level=info msg="connecting to shim 5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338" address="unix:///run/containerd/s/bffbf22d677ea88e1f214a30492a6467f1bb0bb8e354100b8476bd74837273f9" protocol=ttrpc version=3 Jul 10 00:22:18.159451 systemd[1]: Started cri-containerd-5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338.scope - libcontainer container 5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338. Jul 10 00:22:18.337264 containerd[1560]: time="2025-07-10T00:22:18.337193985Z" level=info msg="StartContainer for \"5cbf2edf3afa530f4b0b48e253736b74c85d63101762936f0675bfaf3b077338\" returns successfully" Jul 10 00:22:18.342493 kubelet[2756]: E0710 00:22:18.342446 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.343532 kubelet[2756]: E0710 00:22:18.343461 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.915701 kubelet[2756]: E0710 00:22:18.915665 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:18.929445 kubelet[2756]: I0710 00:22:18.929364 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7rfm7" podStartSLOduration=1.929344107 podStartE2EDuration="1.929344107s" podCreationTimestamp="2025-07-10 00:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:18.366326578 +0000 UTC m=+6.222499901" 
watchObservedRunningTime="2025-07-10 00:22:18.929344107 +0000 UTC m=+6.785517420" Jul 10 00:22:19.343766 kubelet[2756]: E0710 00:22:19.343669 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:20.345561 kubelet[2756]: E0710 00:22:20.345512 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:27.922849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619836565.mount: Deactivated successfully. Jul 10 00:22:30.270030 containerd[1560]: time="2025-07-10T00:22:30.269936632Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:30.271178 containerd[1560]: time="2025-07-10T00:22:30.271145546Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:22:30.272680 containerd[1560]: time="2025-07-10T00:22:30.272650919Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:30.274002 containerd[1560]: time="2025-07-10T00:22:30.273942909Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.175169353s" Jul 10 00:22:30.274002 containerd[1560]: 
time="2025-07-10T00:22:30.273992843Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:22:30.275017 containerd[1560]: time="2025-07-10T00:22:30.274966835Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:22:30.281123 containerd[1560]: time="2025-07-10T00:22:30.281061140Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:22:30.294241 containerd[1560]: time="2025-07-10T00:22:30.294191299Z" level=info msg="Container 8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:30.304698 containerd[1560]: time="2025-07-10T00:22:30.304638261Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\"" Jul 10 00:22:30.305334 containerd[1560]: time="2025-07-10T00:22:30.305304104Z" level=info msg="StartContainer for \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\"" Jul 10 00:22:30.306420 containerd[1560]: time="2025-07-10T00:22:30.306383735Z" level=info msg="connecting to shim 8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57" address="unix:///run/containerd/s/088dad9fa41abb8b719c8e5cb256f1dc0b6acb89fdea72305d957036722c389d" protocol=ttrpc version=3 Jul 10 00:22:30.338191 systemd[1]: Started cri-containerd-8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57.scope - libcontainer container 
8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57. Jul 10 00:22:30.475646 systemd[1]: cri-containerd-8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57.scope: Deactivated successfully. Jul 10 00:22:30.476018 systemd[1]: cri-containerd-8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57.scope: Consumed 31ms CPU time, 6.9M memory peak, 4K read from disk, 3.2M written to disk. Jul 10 00:22:30.480917 containerd[1560]: time="2025-07-10T00:22:30.478215877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" id:\"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" pid:3193 exited_at:{seconds:1752106950 nanos:477631929}" Jul 10 00:22:30.682687 containerd[1560]: time="2025-07-10T00:22:30.682606677Z" level=info msg="received exit event container_id:\"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" id:\"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" pid:3193 exited_at:{seconds:1752106950 nanos:477631929}" Jul 10 00:22:30.683952 containerd[1560]: time="2025-07-10T00:22:30.683915749Z" level=info msg="StartContainer for \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" returns successfully" Jul 10 00:22:30.705795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57-rootfs.mount: Deactivated successfully. 
Jul 10 00:22:31.368577 kubelet[2756]: E0710 00:22:31.368535 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:32.370384 kubelet[2756]: E0710 00:22:32.370348 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:32.481070 containerd[1560]: time="2025-07-10T00:22:32.481000594Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:22:32.552250 containerd[1560]: time="2025-07-10T00:22:32.552174339Z" level=info msg="Container b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:32.557192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817572672.mount: Deactivated successfully. 
Jul 10 00:22:32.559835 containerd[1560]: time="2025-07-10T00:22:32.559772528Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\"" Jul 10 00:22:32.560403 containerd[1560]: time="2025-07-10T00:22:32.560374530Z" level=info msg="StartContainer for \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\"" Jul 10 00:22:32.561219 containerd[1560]: time="2025-07-10T00:22:32.561176397Z" level=info msg="connecting to shim b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93" address="unix:///run/containerd/s/088dad9fa41abb8b719c8e5cb256f1dc0b6acb89fdea72305d957036722c389d" protocol=ttrpc version=3 Jul 10 00:22:32.593553 systemd[1]: Started cri-containerd-b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93.scope - libcontainer container b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93. Jul 10 00:22:32.755146 containerd[1560]: time="2025-07-10T00:22:32.755004533Z" level=info msg="StartContainer for \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" returns successfully" Jul 10 00:22:32.820761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:22:32.821047 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:22:32.821252 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:22:32.822870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:22:32.824650 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:22:32.825784 systemd[1]: cri-containerd-b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93.scope: Deactivated successfully. 
Jul 10 00:22:32.826103 containerd[1560]: time="2025-07-10T00:22:32.826054776Z" level=info msg="received exit event container_id:\"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" id:\"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" pid:3239 exited_at:{seconds:1752106952 nanos:825575485}" Jul 10 00:22:32.826421 containerd[1560]: time="2025-07-10T00:22:32.826357695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" id:\"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" pid:3239 exited_at:{seconds:1752106952 nanos:825575485}" Jul 10 00:22:32.854017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:22:33.420573 kubelet[2756]: E0710 00:22:33.420533 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:33.553051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93-rootfs.mount: Deactivated successfully. 
Jul 10 00:22:33.607712 containerd[1560]: time="2025-07-10T00:22:33.607617315Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:22:34.135419 containerd[1560]: time="2025-07-10T00:22:34.135334406Z" level=info msg="Container 0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:34.146421 containerd[1560]: time="2025-07-10T00:22:34.146353759Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\"" Jul 10 00:22:34.147067 containerd[1560]: time="2025-07-10T00:22:34.147033868Z" level=info msg="StartContainer for \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\"" Jul 10 00:22:34.148707 containerd[1560]: time="2025-07-10T00:22:34.148674853Z" level=info msg="connecting to shim 0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42" address="unix:///run/containerd/s/088dad9fa41abb8b719c8e5cb256f1dc0b6acb89fdea72305d957036722c389d" protocol=ttrpc version=3 Jul 10 00:22:34.172186 systemd[1]: Started cri-containerd-0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42.scope - libcontainer container 0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42. Jul 10 00:22:34.213756 systemd[1]: cri-containerd-0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42.scope: Deactivated successfully. 
Jul 10 00:22:34.214616 containerd[1560]: time="2025-07-10T00:22:34.214550442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" id:\"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" pid:3287 exited_at:{seconds:1752106954 nanos:214254526}" Jul 10 00:22:34.217931 containerd[1560]: time="2025-07-10T00:22:34.217866516Z" level=info msg="received exit event container_id:\"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" id:\"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" pid:3287 exited_at:{seconds:1752106954 nanos:214254526}" Jul 10 00:22:34.219925 containerd[1560]: time="2025-07-10T00:22:34.219866786Z" level=info msg="StartContainer for \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" returns successfully" Jul 10 00:22:34.243060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42-rootfs.mount: Deactivated successfully. 
Jul 10 00:22:34.425538 kubelet[2756]: E0710 00:22:34.425401 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:34.433396 containerd[1560]: time="2025-07-10T00:22:34.433348719Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:22:34.446918 containerd[1560]: time="2025-07-10T00:22:34.446858043Z" level=info msg="Container 2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:34.453567 containerd[1560]: time="2025-07-10T00:22:34.453520076Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\"" Jul 10 00:22:34.454222 containerd[1560]: time="2025-07-10T00:22:34.454170559Z" level=info msg="StartContainer for \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\"" Jul 10 00:22:34.454986 containerd[1560]: time="2025-07-10T00:22:34.454946458Z" level=info msg="connecting to shim 2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b" address="unix:///run/containerd/s/088dad9fa41abb8b719c8e5cb256f1dc0b6acb89fdea72305d957036722c389d" protocol=ttrpc version=3 Jul 10 00:22:34.478250 systemd[1]: Started cri-containerd-2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b.scope - libcontainer container 2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b. Jul 10 00:22:34.512031 systemd[1]: cri-containerd-2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b.scope: Deactivated successfully. 
Jul 10 00:22:34.512799 containerd[1560]: time="2025-07-10T00:22:34.512761896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" id:\"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" pid:3326 exited_at:{seconds:1752106954 nanos:512169572}" Jul 10 00:22:34.513796 containerd[1560]: time="2025-07-10T00:22:34.513743662Z" level=info msg="received exit event container_id:\"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" id:\"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" pid:3326 exited_at:{seconds:1752106954 nanos:512169572}" Jul 10 00:22:34.522954 containerd[1560]: time="2025-07-10T00:22:34.522900895Z" level=info msg="StartContainer for \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" returns successfully" Jul 10 00:22:34.625482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount115439205.mount: Deactivated successfully. Jul 10 00:22:35.429093 kubelet[2756]: E0710 00:22:35.429051 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:35.614048 containerd[1560]: time="2025-07-10T00:22:35.613965482Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:22:36.292915 containerd[1560]: time="2025-07-10T00:22:36.292851577Z" level=info msg="Container bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:36.626188 containerd[1560]: time="2025-07-10T00:22:36.626038724Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 10 00:22:36.640570 containerd[1560]: time="2025-07-10T00:22:36.640532099Z" level=info msg="CreateContainer within sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\"" Jul 10 00:22:36.641212 containerd[1560]: time="2025-07-10T00:22:36.641060483Z" level=info msg="StartContainer for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\"" Jul 10 00:22:36.641912 containerd[1560]: time="2025-07-10T00:22:36.641883449Z" level=info msg="connecting to shim bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051" address="unix:///run/containerd/s/088dad9fa41abb8b719c8e5cb256f1dc0b6acb89fdea72305d957036722c389d" protocol=ttrpc version=3 Jul 10 00:22:36.669108 systemd[1]: Started cri-containerd-bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051.scope - libcontainer container bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051. 
Jul 10 00:22:36.691111 containerd[1560]: time="2025-07-10T00:22:36.690944713Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:22:36.825123 containerd[1560]: time="2025-07-10T00:22:36.825047125Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:36.826480 containerd[1560]: time="2025-07-10T00:22:36.826232362Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.551210173s" Jul 10 00:22:36.826480 containerd[1560]: time="2025-07-10T00:22:36.826332570Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:22:36.829188 containerd[1560]: time="2025-07-10T00:22:36.829156196Z" level=info msg="StartContainer for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" returns successfully" Jul 10 00:22:36.903595 containerd[1560]: time="2025-07-10T00:22:36.903470667Z" level=info msg="CreateContainer within sandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:22:36.923149 containerd[1560]: time="2025-07-10T00:22:36.923092079Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" id:\"4a01cf670e7bca96b6a73a70544459ddb7885d4a40b9acb17fba245fc8cadab0\" pid:3420 exited_at:{seconds:1752106956 nanos:922614511}" Jul 10 00:22:36.998211 kubelet[2756]: I0710 00:22:36.998178 2756 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:22:37.438003 kubelet[2756]: E0710 00:22:37.437309 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:37.708096 kubelet[2756]: I0710 00:22:37.707379 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dvmkb" podStartSLOduration=8.52990582 podStartE2EDuration="20.707360393s" podCreationTimestamp="2025-07-10 00:22:17 +0000 UTC" firstStartedPulling="2025-07-10 00:22:18.097268444 +0000 UTC m=+5.953441767" lastFinishedPulling="2025-07-10 00:22:30.274723017 +0000 UTC m=+18.130896340" observedRunningTime="2025-07-10 00:22:37.706632635 +0000 UTC m=+25.562805978" watchObservedRunningTime="2025-07-10 00:22:37.707360393 +0000 UTC m=+25.563533726" Jul 10 00:22:37.737560 containerd[1560]: time="2025-07-10T00:22:37.737505212Z" level=info msg="Container 5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:37.773030 systemd[1]: Created slice kubepods-burstable-pod055e5000_d05f_42d5_a20e_0c5459e02854.slice - libcontainer container kubepods-burstable-pod055e5000_d05f_42d5_a20e_0c5459e02854.slice. 
Jul 10 00:22:37.832759 kubelet[2756]: I0710 00:22:37.832673 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/055e5000-d05f-42d5-a20e-0c5459e02854-config-volume\") pod \"coredns-674b8bbfcf-w26mr\" (UID: \"055e5000-d05f-42d5-a20e-0c5459e02854\") " pod="kube-system/coredns-674b8bbfcf-w26mr" Jul 10 00:22:37.832759 kubelet[2756]: I0710 00:22:37.832740 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkds9\" (UniqueName: \"kubernetes.io/projected/055e5000-d05f-42d5-a20e-0c5459e02854-kube-api-access-xkds9\") pod \"coredns-674b8bbfcf-w26mr\" (UID: \"055e5000-d05f-42d5-a20e-0c5459e02854\") " pod="kube-system/coredns-674b8bbfcf-w26mr" Jul 10 00:22:37.986395 systemd[1]: Created slice kubepods-burstable-pod8617f150_2fc3_433d_a39b_78ac77a0eccc.slice - libcontainer container kubepods-burstable-pod8617f150_2fc3_433d_a39b_78ac77a0eccc.slice. 
Jul 10 00:22:38.025485 containerd[1560]: time="2025-07-10T00:22:38.025418270Z" level=info msg="CreateContainer within sandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\"" Jul 10 00:22:38.026022 containerd[1560]: time="2025-07-10T00:22:38.025993330Z" level=info msg="StartContainer for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\"" Jul 10 00:22:38.027057 containerd[1560]: time="2025-07-10T00:22:38.027028154Z" level=info msg="connecting to shim 5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162" address="unix:///run/containerd/s/a0f0dc71b95a38260cf6f669a37366deea68810c7861b24ceafb896f087d6beb" protocol=ttrpc version=3 Jul 10 00:22:38.034266 kubelet[2756]: I0710 00:22:38.034218 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8617f150-2fc3-433d-a39b-78ac77a0eccc-config-volume\") pod \"coredns-674b8bbfcf-rpr26\" (UID: \"8617f150-2fc3-433d-a39b-78ac77a0eccc\") " pod="kube-system/coredns-674b8bbfcf-rpr26" Jul 10 00:22:38.034266 kubelet[2756]: I0710 00:22:38.034264 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc626\" (UniqueName: \"kubernetes.io/projected/8617f150-2fc3-433d-a39b-78ac77a0eccc-kube-api-access-tc626\") pod \"coredns-674b8bbfcf-rpr26\" (UID: \"8617f150-2fc3-433d-a39b-78ac77a0eccc\") " pod="kube-system/coredns-674b8bbfcf-rpr26" Jul 10 00:22:38.056252 systemd[1]: Started cri-containerd-5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162.scope - libcontainer container 5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162. 
Jul 10 00:22:38.075890 kubelet[2756]: E0710 00:22:38.075728 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:38.077038 containerd[1560]: time="2025-07-10T00:22:38.076728806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w26mr,Uid:055e5000-d05f-42d5-a20e-0c5459e02854,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:38.214003 containerd[1560]: time="2025-07-10T00:22:38.213926594Z" level=info msg="StartContainer for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" returns successfully" Jul 10 00:22:38.292882 kubelet[2756]: E0710 00:22:38.292837 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:38.293323 containerd[1560]: time="2025-07-10T00:22:38.293281216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rpr26,Uid:8617f150-2fc3-433d-a39b-78ac77a0eccc,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:38.440342 kubelet[2756]: E0710 00:22:38.440293 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:38.440342 kubelet[2756]: E0710 00:22:38.440351 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:39.442904 kubelet[2756]: E0710 00:22:39.442858 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:39.443471 kubelet[2756]: E0710 00:22:39.443121 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:42.232376 systemd-networkd[1457]: cilium_host: Link UP Jul 10 00:22:42.233161 systemd-networkd[1457]: cilium_net: Link UP Jul 10 00:22:42.233447 systemd-networkd[1457]: cilium_net: Gained carrier Jul 10 00:22:42.233699 systemd-networkd[1457]: cilium_host: Gained carrier Jul 10 00:22:42.372569 systemd-networkd[1457]: cilium_vxlan: Link UP Jul 10 00:22:42.372583 systemd-networkd[1457]: cilium_vxlan: Gained carrier Jul 10 00:22:42.655023 kernel: NET: Registered PF_ALG protocol family Jul 10 00:22:42.756258 systemd-networkd[1457]: cilium_net: Gained IPv6LL Jul 10 00:22:42.948182 systemd-networkd[1457]: cilium_host: Gained IPv6LL Jul 10 00:22:43.398025 systemd-networkd[1457]: lxc_health: Link UP Jul 10 00:22:43.399453 systemd-networkd[1457]: lxc_health: Gained carrier Jul 10 00:22:43.771843 systemd-networkd[1457]: lxcc85d068c3a81: Link UP Jul 10 00:22:43.779679 kernel: eth0: renamed from tmp5aee9 Jul 10 00:22:43.781776 systemd-networkd[1457]: lxcc85d068c3a81: Gained carrier Jul 10 00:22:43.879198 kubelet[2756]: E0710 00:22:43.879151 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:43.893731 systemd-networkd[1457]: lxcee189a464d92: Link UP Jul 10 00:22:43.898027 kernel: eth0: renamed from tmp0357c Jul 10 00:22:43.907692 systemd-networkd[1457]: lxcee189a464d92: Gained carrier Jul 10 00:22:43.918499 kubelet[2756]: I0710 00:22:43.918429 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bcxsd" podStartSLOduration=8.210436394 podStartE2EDuration="26.918401005s" podCreationTimestamp="2025-07-10 00:22:17 +0000 UTC" firstStartedPulling="2025-07-10 00:22:18.119703479 +0000 UTC m=+5.975876802" lastFinishedPulling="2025-07-10 00:22:36.82766809 +0000 UTC 
m=+24.683841413" observedRunningTime="2025-07-10 00:22:38.521214286 +0000 UTC m=+26.377387609" watchObservedRunningTime="2025-07-10 00:22:43.918401005 +0000 UTC m=+31.774574328" Jul 10 00:22:44.292253 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL Jul 10 00:22:44.453276 kubelet[2756]: E0710 00:22:44.453218 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:22:45.188234 systemd-networkd[1457]: lxcee189a464d92: Gained IPv6LL Jul 10 00:22:45.252401 systemd-networkd[1457]: lxc_health: Gained IPv6LL Jul 10 00:22:45.572247 systemd-networkd[1457]: lxcc85d068c3a81: Gained IPv6LL Jul 10 00:22:47.776065 containerd[1560]: time="2025-07-10T00:22:47.776004799Z" level=info msg="connecting to shim 5aee9c0d29bea81bd6cfb176778be924d84d0e9ae8590a4afc1129912e7c4809" address="unix:///run/containerd/s/2a8123500e9de096c99cae900aa45bc1fccc7cb97e5f303082a145b54c1e0188" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:47.777948 containerd[1560]: time="2025-07-10T00:22:47.777921978Z" level=info msg="connecting to shim 0357cf6f51362731ee50610ff19d1d1c3d6bbddd2cef89841b45b9f605e720b3" address="unix:///run/containerd/s/ae8713c13f11671806029c310996a3e377bea55ace920e0af0e34b2d2d148a44" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:47.809120 systemd[1]: Started cri-containerd-5aee9c0d29bea81bd6cfb176778be924d84d0e9ae8590a4afc1129912e7c4809.scope - libcontainer container 5aee9c0d29bea81bd6cfb176778be924d84d0e9ae8590a4afc1129912e7c4809. Jul 10 00:22:47.813611 systemd[1]: Started cri-containerd-0357cf6f51362731ee50610ff19d1d1c3d6bbddd2cef89841b45b9f605e720b3.scope - libcontainer container 0357cf6f51362731ee50610ff19d1d1c3d6bbddd2cef89841b45b9f605e720b3. 
Jul 10 00:22:47.828966 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 10 00:22:47.831939 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 10 00:22:47.869864 containerd[1560]: time="2025-07-10T00:22:47.869665546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rpr26,Uid:8617f150-2fc3-433d-a39b-78ac77a0eccc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0357cf6f51362731ee50610ff19d1d1c3d6bbddd2cef89841b45b9f605e720b3\""
Jul 10 00:22:47.874483 containerd[1560]: time="2025-07-10T00:22:47.874348436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w26mr,Uid:055e5000-d05f-42d5-a20e-0c5459e02854,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aee9c0d29bea81bd6cfb176778be924d84d0e9ae8590a4afc1129912e7c4809\""
Jul 10 00:22:47.875397 kubelet[2756]: E0710 00:22:47.875340    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:47.876002 kubelet[2756]: E0710 00:22:47.875634    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:47.882107 containerd[1560]: time="2025-07-10T00:22:47.882035656Z" level=info msg="CreateContainer within sandbox \"5aee9c0d29bea81bd6cfb176778be924d84d0e9ae8590a4afc1129912e7c4809\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:22:47.885574 containerd[1560]: time="2025-07-10T00:22:47.885505980Z" level=info msg="CreateContainer within sandbox \"0357cf6f51362731ee50610ff19d1d1c3d6bbddd2cef89841b45b9f605e720b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:22:47.898780 containerd[1560]: time="2025-07-10T00:22:47.898722437Z" level=info msg="Container b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:47.904142 containerd[1560]: time="2025-07-10T00:22:47.904104680Z" level=info msg="Container d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:22:47.912460 containerd[1560]: time="2025-07-10T00:22:47.912129574Z" level=info msg="CreateContainer within sandbox \"0357cf6f51362731ee50610ff19d1d1c3d6bbddd2cef89841b45b9f605e720b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894\""
Jul 10 00:22:47.913330 containerd[1560]: time="2025-07-10T00:22:47.913284391Z" level=info msg="StartContainer for \"d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894\""
Jul 10 00:22:47.914683 containerd[1560]: time="2025-07-10T00:22:47.914653130Z" level=info msg="connecting to shim d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894" address="unix:///run/containerd/s/ae8713c13f11671806029c310996a3e377bea55ace920e0af0e34b2d2d148a44" protocol=ttrpc version=3
Jul 10 00:22:47.929161 containerd[1560]: time="2025-07-10T00:22:47.929063529Z" level=info msg="CreateContainer within sandbox \"5aee9c0d29bea81bd6cfb176778be924d84d0e9ae8590a4afc1129912e7c4809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993\""
Jul 10 00:22:47.930196 containerd[1560]: time="2025-07-10T00:22:47.930132827Z" level=info msg="StartContainer for \"b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993\""
Jul 10 00:22:47.932800 containerd[1560]: time="2025-07-10T00:22:47.932750330Z" level=info msg="connecting to shim b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993" address="unix:///run/containerd/s/2a8123500e9de096c99cae900aa45bc1fccc7cb97e5f303082a145b54c1e0188" protocol=ttrpc version=3
Jul 10 00:22:47.939334 systemd[1]: Started cri-containerd-d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894.scope - libcontainer container d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894.
Jul 10 00:22:47.958161 systemd[1]: Started cri-containerd-b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993.scope - libcontainer container b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993.
Jul 10 00:22:47.998234 containerd[1560]: time="2025-07-10T00:22:47.998166460Z" level=info msg="StartContainer for \"d0bc5adc07b3599ee900d9ed769579561e6de3dd9fef392227dbc7b933ef1894\" returns successfully"
Jul 10 00:22:48.006706 containerd[1560]: time="2025-07-10T00:22:48.006644261Z" level=info msg="StartContainer for \"b8274bf544d67c0b233ef7c4eb7c7a2c5526b564b55c81e6fac779556b7d5993\" returns successfully"
Jul 10 00:22:48.464955 kubelet[2756]: E0710 00:22:48.464820    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:48.467523 kubelet[2756]: E0710 00:22:48.467477    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:48.477741 kubelet[2756]: I0710 00:22:48.477645    2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w26mr" podStartSLOduration=31.477624736 podStartE2EDuration="31.477624736s" podCreationTimestamp="2025-07-10 00:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:48.476197687 +0000 UTC m=+36.332371030" watchObservedRunningTime="2025-07-10 00:22:48.477624736 +0000 UTC m=+36.333798059"
Jul 10 00:22:48.500035 kubelet[2756]: I0710 00:22:48.499923    2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rpr26" podStartSLOduration=31.499896198 podStartE2EDuration="31.499896198s" podCreationTimestamp="2025-07-10 00:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:48.499122677 +0000 UTC m=+36.355296000" watchObservedRunningTime="2025-07-10 00:22:48.499896198 +0000 UTC m=+36.356069531"
Jul 10 00:22:48.767159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422417538.mount: Deactivated successfully.
Jul 10 00:22:49.469553 kubelet[2756]: E0710 00:22:49.469498    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:49.470078 kubelet[2756]: E0710 00:22:49.469498    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:50.471546 kubelet[2756]: E0710 00:22:50.471502    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:22:50.472102 kubelet[2756]: E0710 00:22:50.471742    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:00.129952 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:41132.service - OpenSSH per-connection server daemon (10.0.0.1:41132).
Jul 10 00:23:00.275249 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 41132 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:00.280965 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:00.319424 systemd-logind[1542]: New session 10 of user core.
Jul 10 00:23:00.329520 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 10 00:23:00.768573 sshd[4097]: Connection closed by 10.0.0.1 port 41132
Jul 10 00:23:00.768611 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:00.777948 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:41132.service: Deactivated successfully.
Jul 10 00:23:00.783679 systemd[1]: session-10.scope: Deactivated successfully.
Jul 10 00:23:00.796177 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit.
Jul 10 00:23:00.798228 systemd-logind[1542]: Removed session 10.
Jul 10 00:23:05.790133 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:41148.service - OpenSSH per-connection server daemon (10.0.0.1:41148).
Jul 10 00:23:05.912545 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 41148 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:05.913457 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:05.939726 systemd-logind[1542]: New session 11 of user core.
Jul 10 00:23:05.953461 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 10 00:23:06.210705 sshd[4116]: Connection closed by 10.0.0.1 port 41148
Jul 10 00:23:06.209639 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:06.227958 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:41148.service: Deactivated successfully.
Jul 10 00:23:06.233787 systemd[1]: session-11.scope: Deactivated successfully.
Jul 10 00:23:06.246196 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit.
Jul 10 00:23:06.256317 systemd-logind[1542]: Removed session 11.
Jul 10 00:23:11.236866 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:49582.service - OpenSSH per-connection server daemon (10.0.0.1:49582).
Jul 10 00:23:11.362369 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 49582 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:11.365452 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:11.377097 systemd-logind[1542]: New session 12 of user core.
Jul 10 00:23:11.387377 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 10 00:23:11.668374 sshd[4132]: Connection closed by 10.0.0.1 port 49582
Jul 10 00:23:11.664382 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:11.680584 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:49582.service: Deactivated successfully.
Jul 10 00:23:11.683458 systemd[1]: session-12.scope: Deactivated successfully.
Jul 10 00:23:11.684838 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit.
Jul 10 00:23:11.693741 systemd-logind[1542]: Removed session 12.
Jul 10 00:23:12.639858 kernel: hrtimer: interrupt took 3063376 ns
Jul 10 00:23:16.690567 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:49592.service - OpenSSH per-connection server daemon (10.0.0.1:49592).
Jul 10 00:23:16.888436 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 49592 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:16.891624 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:16.925579 systemd-logind[1542]: New session 13 of user core.
Jul 10 00:23:16.940353 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 10 00:23:17.183102 sshd[4150]: Connection closed by 10.0.0.1 port 49592
Jul 10 00:23:17.183956 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:17.194263 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:49592.service: Deactivated successfully.
Jul 10 00:23:17.204472 systemd[1]: session-13.scope: Deactivated successfully.
Jul 10 00:23:17.207404 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Jul 10 00:23:17.217734 systemd-logind[1542]: Removed session 13.
Jul 10 00:23:19.284698 kubelet[2756]: E0710 00:23:19.283996    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:22.209492 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:46478.service - OpenSSH per-connection server daemon (10.0.0.1:46478).
Jul 10 00:23:22.351094 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 46478 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:22.356644 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:22.381818 systemd-logind[1542]: New session 14 of user core.
Jul 10 00:23:22.397002 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 10 00:23:22.672499 sshd[4168]: Connection closed by 10.0.0.1 port 46478
Jul 10 00:23:22.668745 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:22.679096 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:46478.service: Deactivated successfully.
Jul 10 00:23:22.691750 systemd[1]: session-14.scope: Deactivated successfully.
Jul 10 00:23:22.710000 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Jul 10 00:23:22.726951 systemd-logind[1542]: Removed session 14.
Jul 10 00:23:27.285722 kubelet[2756]: E0710 00:23:27.283649    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:27.688414 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:46480.service - OpenSSH per-connection server daemon (10.0.0.1:46480).
Jul 10 00:23:27.780110 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 46480 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:27.783471 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:27.812260 systemd-logind[1542]: New session 15 of user core.
Jul 10 00:23:27.826539 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 00:23:28.140338 sshd[4184]: Connection closed by 10.0.0.1 port 46480
Jul 10 00:23:28.145741 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:28.150680 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:46480.service: Deactivated successfully.
Jul 10 00:23:28.156179 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:23:28.164177 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:23:28.166109 systemd-logind[1542]: Removed session 15.
Jul 10 00:23:30.291410 kubelet[2756]: E0710 00:23:30.288810    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:33.181804 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:34142.service - OpenSSH per-connection server daemon (10.0.0.1:34142).
Jul 10 00:23:33.319586 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 34142 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:33.326943 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:33.355068 systemd-logind[1542]: New session 16 of user core.
Jul 10 00:23:33.364743 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 00:23:33.660281 sshd[4200]: Connection closed by 10.0.0.1 port 34142
Jul 10 00:23:33.661392 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:33.683750 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:34142.service: Deactivated successfully.
Jul 10 00:23:33.691790 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:23:33.703961 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:23:33.714755 systemd-logind[1542]: Removed session 16.
Jul 10 00:23:38.712928 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:58942.service - OpenSSH per-connection server daemon (10.0.0.1:58942).
Jul 10 00:23:38.827775 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 58942 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:38.830331 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:38.862732 systemd-logind[1542]: New session 17 of user core.
Jul 10 00:23:38.886110 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 00:23:39.180519 sshd[4216]: Connection closed by 10.0.0.1 port 58942
Jul 10 00:23:39.181109 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:39.203129 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:58942.service: Deactivated successfully.
Jul 10 00:23:39.207456 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:23:39.214646 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:23:39.217376 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:58948.service - OpenSSH per-connection server daemon (10.0.0.1:58948).
Jul 10 00:23:39.222161 systemd-logind[1542]: Removed session 17.
Jul 10 00:23:39.319202 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 58948 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:39.318939 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:39.347416 systemd-logind[1542]: New session 18 of user core.
Jul 10 00:23:39.352444 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:23:39.630326 sshd[4232]: Connection closed by 10.0.0.1 port 58948
Jul 10 00:23:39.632216 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:39.673575 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:58948.service: Deactivated successfully.
Jul 10 00:23:39.679937 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:23:39.691437 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:23:39.709186 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:58952.service - OpenSSH per-connection server daemon (10.0.0.1:58952).
Jul 10 00:23:39.732655 systemd-logind[1542]: Removed session 18.
Jul 10 00:23:39.818410 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 58952 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:39.825002 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:39.843286 systemd-logind[1542]: New session 19 of user core.
Jul 10 00:23:39.855504 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:23:40.077687 sshd[4247]: Connection closed by 10.0.0.1 port 58952
Jul 10 00:23:40.078204 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:40.093915 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:58952.service: Deactivated successfully.
Jul 10 00:23:40.098649 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:23:40.103863 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:23:40.111178 systemd-logind[1542]: Removed session 19.
Jul 10 00:23:43.294424 kubelet[2756]: E0710 00:23:43.290268    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:44.285740 kubelet[2756]: E0710 00:23:44.285177    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:45.118997 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:58960.service - OpenSSH per-connection server daemon (10.0.0.1:58960).
Jul 10 00:23:45.274397 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 58960 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:45.273434 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:45.295451 systemd-logind[1542]: New session 20 of user core.
Jul 10 00:23:45.322651 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:23:45.604504 sshd[4263]: Connection closed by 10.0.0.1 port 58960
Jul 10 00:23:45.616087 sshd-session[4261]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:45.632718 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:58960.service: Deactivated successfully.
Jul 10 00:23:45.641359 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:23:45.648584 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:23:45.658331 systemd-logind[1542]: Removed session 20.
Jul 10 00:23:50.625741 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:56894.service - OpenSSH per-connection server daemon (10.0.0.1:56894).
Jul 10 00:23:50.701822 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 56894 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:50.704537 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:50.711490 systemd-logind[1542]: New session 21 of user core.
Jul 10 00:23:50.721361 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 00:23:50.856149 sshd[4282]: Connection closed by 10.0.0.1 port 56894
Jul 10 00:23:50.856555 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:50.862365 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:56894.service: Deactivated successfully.
Jul 10 00:23:50.865335 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:23:50.866577 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:23:50.868561 systemd-logind[1542]: Removed session 21.
Jul 10 00:23:53.283143 kubelet[2756]: E0710 00:23:53.283062    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:23:55.874745 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:56908.service - OpenSSH per-connection server daemon (10.0.0.1:56908).
Jul 10 00:23:55.938212 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 56908 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:23:55.940535 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:23:55.946617 systemd-logind[1542]: New session 22 of user core.
Jul 10 00:23:55.962234 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:23:56.089822 sshd[4297]: Connection closed by 10.0.0.1 port 56908
Jul 10 00:23:56.090244 sshd-session[4295]: pam_unix(sshd:session): session closed for user core
Jul 10 00:23:56.096018 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:56908.service: Deactivated successfully.
Jul 10 00:23:56.098231 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:23:56.099169 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:23:56.100677 systemd-logind[1542]: Removed session 22.
Jul 10 00:24:01.116424 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:36430.service - OpenSSH per-connection server daemon (10.0.0.1:36430).
Jul 10 00:24:01.172366 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 36430 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:01.174267 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:01.179207 systemd-logind[1542]: New session 23 of user core.
Jul 10 00:24:01.189187 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 00:24:01.309891 sshd[4312]: Connection closed by 10.0.0.1 port 36430
Jul 10 00:24:01.310263 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:01.314908 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:36430.service: Deactivated successfully.
Jul 10 00:24:01.317738 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 00:24:01.318723 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit.
Jul 10 00:24:01.320594 systemd-logind[1542]: Removed session 23.
Jul 10 00:24:06.336376 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:36432.service - OpenSSH per-connection server daemon (10.0.0.1:36432).
Jul 10 00:24:06.394093 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 36432 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:06.395820 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:06.400432 systemd-logind[1542]: New session 24 of user core.
Jul 10 00:24:06.411143 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 00:24:06.521796 sshd[4327]: Connection closed by 10.0.0.1 port 36432
Jul 10 00:24:06.522080 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:06.525236 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:36432.service: Deactivated successfully.
Jul 10 00:24:06.527528 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 00:24:06.529304 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit.
Jul 10 00:24:06.530991 systemd-logind[1542]: Removed session 24.
Jul 10 00:24:07.283300 kubelet[2756]: E0710 00:24:07.283246    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:24:11.539504 systemd[1]: Started sshd@24-10.0.0.69:22-10.0.0.1:50750.service - OpenSSH per-connection server daemon (10.0.0.1:50750).
Jul 10 00:24:11.595919 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 50750 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:11.597855 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:11.606008 systemd-logind[1542]: New session 25 of user core.
Jul 10 00:24:11.616327 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 00:24:11.737135 sshd[4342]: Connection closed by 10.0.0.1 port 50750
Jul 10 00:24:11.737478 sshd-session[4340]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:11.742683 systemd[1]: sshd@24-10.0.0.69:22-10.0.0.1:50750.service: Deactivated successfully.
Jul 10 00:24:11.745634 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 00:24:11.746688 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit.
Jul 10 00:24:11.749187 systemd-logind[1542]: Removed session 25.
Jul 10 00:24:13.283331 kubelet[2756]: E0710 00:24:13.283261    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:24:16.749869 systemd[1]: Started sshd@25-10.0.0.69:22-10.0.0.1:50756.service - OpenSSH per-connection server daemon (10.0.0.1:50756).
Jul 10 00:24:16.812545 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 50756 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:16.814493 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:16.819573 systemd-logind[1542]: New session 26 of user core.
Jul 10 00:24:16.826203 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 10 00:24:16.938806 sshd[4361]: Connection closed by 10.0.0.1 port 50756
Jul 10 00:24:16.939186 sshd-session[4359]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:16.943528 systemd[1]: sshd@25-10.0.0.69:22-10.0.0.1:50756.service: Deactivated successfully.
Jul 10 00:24:16.945681 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 00:24:16.946614 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit.
Jul 10 00:24:16.948035 systemd-logind[1542]: Removed session 26.
Jul 10 00:24:21.958023 systemd[1]: Started sshd@26-10.0.0.69:22-10.0.0.1:57700.service - OpenSSH per-connection server daemon (10.0.0.1:57700).
Jul 10 00:24:22.017808 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 57700 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:22.019663 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:22.025107 systemd-logind[1542]: New session 27 of user core.
Jul 10 00:24:22.035193 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 10 00:24:22.151226 sshd[4378]: Connection closed by 10.0.0.1 port 57700
Jul 10 00:24:22.151545 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:22.155908 systemd[1]: sshd@26-10.0.0.69:22-10.0.0.1:57700.service: Deactivated successfully.
Jul 10 00:24:22.158355 systemd[1]: session-27.scope: Deactivated successfully.
Jul 10 00:24:22.159267 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit.
Jul 10 00:24:22.160675 systemd-logind[1542]: Removed session 27.
Jul 10 00:24:23.283596 kubelet[2756]: E0710 00:24:23.283516    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:24:27.166189 systemd[1]: Started sshd@27-10.0.0.69:22-10.0.0.1:57710.service - OpenSSH per-connection server daemon (10.0.0.1:57710).
Jul 10 00:24:27.227268 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 57710 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:27.229408 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:27.235217 systemd-logind[1542]: New session 28 of user core.
Jul 10 00:24:27.249213 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 10 00:24:27.361886 sshd[4393]: Connection closed by 10.0.0.1 port 57710
Jul 10 00:24:27.362276 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:27.367327 systemd[1]: sshd@27-10.0.0.69:22-10.0.0.1:57710.service: Deactivated successfully.
Jul 10 00:24:27.369748 systemd[1]: session-28.scope: Deactivated successfully.
Jul 10 00:24:27.370787 systemd-logind[1542]: Session 28 logged out. Waiting for processes to exit.
Jul 10 00:24:27.372426 systemd-logind[1542]: Removed session 28.
Jul 10 00:24:32.283382 kubelet[2756]: E0710 00:24:32.283330    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:24:32.375089 systemd[1]: Started sshd@28-10.0.0.69:22-10.0.0.1:36762.service - OpenSSH per-connection server daemon (10.0.0.1:36762).
Jul 10 00:24:32.434259 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 36762 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:32.435703 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:32.439814 systemd-logind[1542]: New session 29 of user core.
Jul 10 00:24:32.443095 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 10 00:24:32.604608 sshd[4409]: Connection closed by 10.0.0.1 port 36762
Jul 10 00:24:32.605014 sshd-session[4407]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:32.613809 systemd[1]: sshd@28-10.0.0.69:22-10.0.0.1:36762.service: Deactivated successfully.
Jul 10 00:24:32.615835 systemd[1]: session-29.scope: Deactivated successfully.
Jul 10 00:24:32.616684 systemd-logind[1542]: Session 29 logged out. Waiting for processes to exit.
Jul 10 00:24:32.619852 systemd[1]: Started sshd@29-10.0.0.69:22-10.0.0.1:36768.service - OpenSSH per-connection server daemon (10.0.0.1:36768).
Jul 10 00:24:32.620547 systemd-logind[1542]: Removed session 29.
Jul 10 00:24:32.678654 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 36768 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:32.680236 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:32.684783 systemd-logind[1542]: New session 30 of user core.
Jul 10 00:24:32.698131 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 10 00:24:33.109103 sshd[4425]: Connection closed by 10.0.0.1 port 36768
Jul 10 00:24:33.109571 sshd-session[4423]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:33.124853 systemd[1]: sshd@29-10.0.0.69:22-10.0.0.1:36768.service: Deactivated successfully.
Jul 10 00:24:33.126804 systemd[1]: session-30.scope: Deactivated successfully.
Jul 10 00:24:33.127677 systemd-logind[1542]: Session 30 logged out. Waiting for processes to exit.
Jul 10 00:24:33.131569 systemd[1]: Started sshd@30-10.0.0.69:22-10.0.0.1:36770.service - OpenSSH per-connection server daemon (10.0.0.1:36770).
Jul 10 00:24:33.132423 systemd-logind[1542]: Removed session 30.
Jul 10 00:24:33.195069 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 36770 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:33.196444 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:33.200866 systemd-logind[1542]: New session 31 of user core.
Jul 10 00:24:33.214112 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 10 00:24:34.318063 sshd[4438]: Connection closed by 10.0.0.1 port 36770
Jul 10 00:24:34.318548 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:34.331573 systemd[1]: sshd@30-10.0.0.69:22-10.0.0.1:36770.service: Deactivated successfully.
Jul 10 00:24:34.333873 systemd[1]: session-31.scope: Deactivated successfully.
Jul 10 00:24:34.334888 systemd-logind[1542]: Session 31 logged out. Waiting for processes to exit.
Jul 10 00:24:34.339641 systemd[1]: Started sshd@31-10.0.0.69:22-10.0.0.1:36772.service - OpenSSH per-connection server daemon (10.0.0.1:36772).
Jul 10 00:24:34.341403 systemd-logind[1542]: Removed session 31.
Jul 10 00:24:34.390881 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 36772 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:34.392613 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:34.397758 systemd-logind[1542]: New session 32 of user core.
Jul 10 00:24:34.410177 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 10 00:24:34.663731 sshd[4460]: Connection closed by 10.0.0.1 port 36772
Jul 10 00:24:34.663924 sshd-session[4458]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:34.676844 systemd[1]: sshd@31-10.0.0.69:22-10.0.0.1:36772.service: Deactivated successfully.
Jul 10 00:24:34.680288 systemd[1]: session-32.scope: Deactivated successfully.
Jul 10 00:24:34.681724 systemd-logind[1542]: Session 32 logged out. Waiting for processes to exit.
Jul 10 00:24:34.687121 systemd-logind[1542]: Removed session 32.
Jul 10 00:24:34.690247 systemd[1]: Started sshd@32-10.0.0.69:22-10.0.0.1:36778.service - OpenSSH per-connection server daemon (10.0.0.1:36778).
Jul 10 00:24:34.743961 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 36778 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:34.745565 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:34.752833 systemd-logind[1542]: New session 33 of user core.
Jul 10 00:24:34.759178 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 10 00:24:34.875511 sshd[4473]: Connection closed by 10.0.0.1 port 36778
Jul 10 00:24:34.875866 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:34.880343 systemd[1]: sshd@32-10.0.0.69:22-10.0.0.1:36778.service: Deactivated successfully.
Jul 10 00:24:34.882947 systemd[1]: session-33.scope: Deactivated successfully.
Jul 10 00:24:34.883800 systemd-logind[1542]: Session 33 logged out. Waiting for processes to exit.
Jul 10 00:24:34.885515 systemd-logind[1542]: Removed session 33.
Jul 10 00:24:39.891294 systemd[1]: Started sshd@33-10.0.0.69:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130).
Jul 10 00:24:39.947260 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:39.948722 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:39.953356 systemd-logind[1542]: New session 34 of user core.
Jul 10 00:24:39.963157 systemd[1]: Started session-34.scope - Session 34 of User core.
Jul 10 00:24:40.197784 sshd[4489]: Connection closed by 10.0.0.1 port 33130
Jul 10 00:24:40.198056 sshd-session[4487]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:40.202405 systemd[1]: sshd@33-10.0.0.69:22-10.0.0.1:33130.service: Deactivated successfully.
Jul 10 00:24:40.204763 systemd[1]: session-34.scope: Deactivated successfully.
Jul 10 00:24:40.205630 systemd-logind[1542]: Session 34 logged out. Waiting for processes to exit.
Jul 10 00:24:40.207005 systemd-logind[1542]: Removed session 34.
Jul 10 00:24:45.215570 systemd[1]: Started sshd@34-10.0.0.69:22-10.0.0.1:33144.service - OpenSSH per-connection server daemon (10.0.0.1:33144).
Jul 10 00:24:45.271287 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 33144 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:45.273164 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:45.278405 systemd-logind[1542]: New session 35 of user core.
Jul 10 00:24:45.284614 kubelet[2756]: E0710 00:24:45.284562    2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:24:45.290116 systemd[1]: Started session-35.scope - Session 35 of User core.
Jul 10 00:24:45.404420 sshd[4507]: Connection closed by 10.0.0.1 port 33144
Jul 10 00:24:45.404755 sshd-session[4505]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:45.409695 systemd[1]: sshd@34-10.0.0.69:22-10.0.0.1:33144.service: Deactivated successfully.
Jul 10 00:24:45.411950 systemd[1]: session-35.scope: Deactivated successfully.
Jul 10 00:24:45.412755 systemd-logind[1542]: Session 35 logged out. Waiting for processes to exit.
Jul 10 00:24:45.414326 systemd-logind[1542]: Removed session 35.
Jul 10 00:24:50.420926 systemd[1]: Started sshd@35-10.0.0.69:22-10.0.0.1:37622.service - OpenSSH per-connection server daemon (10.0.0.1:37622).
Jul 10 00:24:50.475471 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 37622 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:50.476856 sshd-session[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:50.481187 systemd-logind[1542]: New session 36 of user core.
Jul 10 00:24:50.492227 systemd[1]: Started session-36.scope - Session 36 of User core.
Jul 10 00:24:50.597581 sshd[4525]: Connection closed by 10.0.0.1 port 37622
Jul 10 00:24:50.598111 sshd-session[4523]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:50.612597 systemd[1]: sshd@35-10.0.0.69:22-10.0.0.1:37622.service: Deactivated successfully.
Jul 10 00:24:50.614367 systemd[1]: session-36.scope: Deactivated successfully.
Jul 10 00:24:50.615067 systemd-logind[1542]: Session 36 logged out. Waiting for processes to exit.
Jul 10 00:24:50.617966 systemd[1]: Started sshd@36-10.0.0.69:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624).
Jul 10 00:24:50.618864 systemd-logind[1542]: Removed session 36.
Jul 10 00:24:50.672949 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY
Jul 10 00:24:50.674362 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:50.679149 systemd-logind[1542]: New session 37 of user core.
Jul 10 00:24:50.694111 systemd[1]: Started session-37.scope - Session 37 of User core.
Jul 10 00:24:52.031399 containerd[1560]: time="2025-07-10T00:24:52.031096338Z" level=info msg="StopContainer for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" with timeout 30 (s)"
Jul 10 00:24:52.052080 containerd[1560]: time="2025-07-10T00:24:52.052026896Z" level=info msg="Stop container \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" with signal terminated"
Jul 10 00:24:52.068488 systemd[1]: cri-containerd-5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162.scope: Deactivated successfully.
Jul 10 00:24:52.072547 containerd[1560]: time="2025-07-10T00:24:52.071699943Z" level=info msg="received exit event container_id:\"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" id:\"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" pid:3490 exited_at:{seconds:1752107092 nanos:71237903}" Jul 10 00:24:52.072547 containerd[1560]: time="2025-07-10T00:24:52.071837163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" id:\"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" pid:3490 exited_at:{seconds:1752107092 nanos:71237903}" Jul 10 00:24:52.077242 containerd[1560]: time="2025-07-10T00:24:52.077209923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" id:\"353f4542a6fd7b383560a7cb330172c326d37fcf8718d84755c78b3dd6e742ae\" pid:4560 exited_at:{seconds:1752107092 nanos:76778581}" Jul 10 00:24:52.081144 containerd[1560]: time="2025-07-10T00:24:52.081081866Z" level=info msg="StopContainer for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" with timeout 2 (s)" Jul 10 00:24:52.081926 containerd[1560]: time="2025-07-10T00:24:52.081879789Z" level=info msg="Stop container \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" with signal terminated" Jul 10 00:24:52.088821 containerd[1560]: time="2025-07-10T00:24:52.088747957Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:24:52.094586 systemd-networkd[1457]: lxc_health: Link DOWN Jul 10 00:24:52.094597 systemd-networkd[1457]: lxc_health: Lost carrier Jul 10 00:24:52.097362 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162-rootfs.mount: Deactivated successfully. Jul 10 00:24:52.115547 systemd[1]: cri-containerd-bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051.scope: Deactivated successfully. Jul 10 00:24:52.116134 systemd[1]: cri-containerd-bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051.scope: Consumed 8.189s CPU time, 125M memory peak, 228K read from disk, 13.3M written to disk. Jul 10 00:24:52.117426 containerd[1560]: time="2025-07-10T00:24:52.117379861Z" level=info msg="StopContainer for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" returns successfully" Jul 10 00:24:52.118429 containerd[1560]: time="2025-07-10T00:24:52.118395764Z" level=info msg="received exit event container_id:\"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" id:\"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" pid:3379 exited_at:{seconds:1752107092 nanos:117080988}" Jul 10 00:24:52.118525 containerd[1560]: time="2025-07-10T00:24:52.118481987Z" level=info msg="StopPodSandbox for \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\"" Jul 10 00:24:52.118564 containerd[1560]: time="2025-07-10T00:24:52.118492637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" id:\"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" pid:3379 exited_at:{seconds:1752107092 nanos:117080988}" Jul 10 00:24:52.118664 containerd[1560]: time="2025-07-10T00:24:52.118570023Z" level=info msg="Container to stop \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:24:52.126173 systemd[1]: cri-containerd-3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4.scope: Deactivated successfully. 
Jul 10 00:24:52.127570 containerd[1560]: time="2025-07-10T00:24:52.127324093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" id:\"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" pid:2955 exit_status:137 exited_at:{seconds:1752107092 nanos:126745563}" Jul 10 00:24:52.142829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051-rootfs.mount: Deactivated successfully. Jul 10 00:24:52.153891 containerd[1560]: time="2025-07-10T00:24:52.153721407Z" level=info msg="StopContainer for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" returns successfully" Jul 10 00:24:52.154565 containerd[1560]: time="2025-07-10T00:24:52.154519871Z" level=info msg="StopPodSandbox for \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\"" Jul 10 00:24:52.154620 containerd[1560]: time="2025-07-10T00:24:52.154602267Z" level=info msg="Container to stop \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:24:52.154648 containerd[1560]: time="2025-07-10T00:24:52.154620711Z" level=info msg="Container to stop \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:24:52.154648 containerd[1560]: time="2025-07-10T00:24:52.154630340Z" level=info msg="Container to stop \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:24:52.154648 containerd[1560]: time="2025-07-10T00:24:52.154638755Z" level=info msg="Container to stop \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:24:52.154648 containerd[1560]: 
time="2025-07-10T00:24:52.154646710Z" level=info msg="Container to stop \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:24:52.158517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4-rootfs.mount: Deactivated successfully. Jul 10 00:24:52.162639 systemd[1]: cri-containerd-f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865.scope: Deactivated successfully. Jul 10 00:24:52.165225 containerd[1560]: time="2025-07-10T00:24:52.165179130Z" level=info msg="shim disconnected" id=3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4 namespace=k8s.io Jul 10 00:24:52.165339 containerd[1560]: time="2025-07-10T00:24:52.165264592Z" level=warning msg="cleaning up after shim disconnected" id=3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4 namespace=k8s.io Jul 10 00:24:52.194160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865-rootfs.mount: Deactivated successfully. 
Jul 10 00:24:52.215336 containerd[1560]: time="2025-07-10T00:24:52.165279109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:24:52.215572 containerd[1560]: time="2025-07-10T00:24:52.198644139Z" level=info msg="shim disconnected" id=f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865 namespace=k8s.io Jul 10 00:24:52.215605 containerd[1560]: time="2025-07-10T00:24:52.215572447Z" level=warning msg="cleaning up after shim disconnected" id=f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865 namespace=k8s.io Jul 10 00:24:52.215630 containerd[1560]: time="2025-07-10T00:24:52.215582556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:24:52.259803 containerd[1560]: time="2025-07-10T00:24:52.259726210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" id:\"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" pid:2958 exit_status:137 exited_at:{seconds:1752107092 nanos:163385402}" Jul 10 00:24:52.262690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4-shm.mount: Deactivated successfully. Jul 10 00:24:52.262809 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865-shm.mount: Deactivated successfully. 
Jul 10 00:24:52.266758 containerd[1560]: time="2025-07-10T00:24:52.266693864Z" level=info msg="received exit event sandbox_id:\"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" exit_status:137 exited_at:{seconds:1752107092 nanos:163385402}" Jul 10 00:24:52.267086 containerd[1560]: time="2025-07-10T00:24:52.267049794Z" level=info msg="received exit event sandbox_id:\"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" exit_status:137 exited_at:{seconds:1752107092 nanos:126745563}" Jul 10 00:24:52.282488 containerd[1560]: time="2025-07-10T00:24:52.281541072Z" level=info msg="TearDown network for sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" successfully" Jul 10 00:24:52.282488 containerd[1560]: time="2025-07-10T00:24:52.281569044Z" level=info msg="StopPodSandbox for \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" returns successfully" Jul 10 00:24:52.283337 containerd[1560]: time="2025-07-10T00:24:52.283072406Z" level=info msg="TearDown network for sandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" successfully" Jul 10 00:24:52.283337 containerd[1560]: time="2025-07-10T00:24:52.283154441Z" level=info msg="StopPodSandbox for \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" returns successfully" Jul 10 00:24:52.355666 kubelet[2756]: I0710 00:24:52.355616 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc7jx\" (UniqueName: \"kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-kube-api-access-dc7jx\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.355666 kubelet[2756]: I0710 00:24:52.355669 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-hostproc\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" 
(UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356468 kubelet[2756]: I0710 00:24:52.355698 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-hubble-tls\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356468 kubelet[2756]: I0710 00:24:52.355714 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-cgroup\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356468 kubelet[2756]: I0710 00:24:52.355783 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.356468 kubelet[2756]: I0710 00:24:52.355786 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-hostproc" (OuterVolumeSpecName: "hostproc") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.356468 kubelet[2756]: I0710 00:24:52.355827 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d4d2990-6062-4444-80d8-5af38105da5f-clustermesh-secrets\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356468 kubelet[2756]: I0710 00:24:52.355845 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-net\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356610 kubelet[2756]: I0710 00:24:52.355865 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.356610 kubelet[2756]: I0710 00:24:52.355880 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-etc-cni-netd\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356610 kubelet[2756]: I0710 00:24:52.355896 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-xtables-lock\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356610 kubelet[2756]: I0710 00:24:52.355912 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-config-path\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356610 kubelet[2756]: I0710 00:24:52.355931 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cni-path\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356610 kubelet[2756]: I0710 00:24:52.356004 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-kernel\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356028 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-lib-modules\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356047 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-bpf-maps\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356071 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lj59\" (UniqueName: \"kubernetes.io/projected/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-kube-api-access-5lj59\") pod \"cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d\" (UID: \"cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d\") " Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356088 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-cilium-config-path\") pod \"cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d\" (UID: \"cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d\") " Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356102 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-run\") pod \"9d4d2990-6062-4444-80d8-5af38105da5f\" (UID: \"9d4d2990-6062-4444-80d8-5af38105da5f\") " Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356139 2756 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.356744 kubelet[2756]: I0710 00:24:52.356149 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.356904 kubelet[2756]: I0710 00:24:52.356158 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.356904 kubelet[2756]: I0710 00:24:52.356179 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.356904 kubelet[2756]: I0710 00:24:52.356195 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.356904 kubelet[2756]: I0710 00:24:52.356210 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.358916 kubelet[2756]: I0710 00:24:52.358130 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.358916 kubelet[2756]: I0710 00:24:52.358182 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cni-path" (OuterVolumeSpecName: "cni-path") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.358916 kubelet[2756]: I0710 00:24:52.358202 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.358916 kubelet[2756]: I0710 00:24:52.358241 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:24:52.360709 kubelet[2756]: I0710 00:24:52.360672 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:24:52.361708 kubelet[2756]: I0710 00:24:52.361683 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:24:52.361803 kubelet[2756]: I0710 00:24:52.361685 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-kube-api-access-dc7jx" (OuterVolumeSpecName: "kube-api-access-dc7jx") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "kube-api-access-dc7jx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:24:52.362518 kubelet[2756]: I0710 00:24:52.362487 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-kube-api-access-5lj59" (OuterVolumeSpecName: "kube-api-access-5lj59") pod "cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d" (UID: "cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d"). InnerVolumeSpecName "kube-api-access-5lj59". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:24:52.362626 kubelet[2756]: I0710 00:24:52.362601 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4d2990-6062-4444-80d8-5af38105da5f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9d4d2990-6062-4444-80d8-5af38105da5f" (UID: "9d4d2990-6062-4444-80d8-5af38105da5f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:24:52.363753 kubelet[2756]: I0710 00:24:52.363720 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d" (UID: "cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:24:52.425373 kubelet[2756]: E0710 00:24:52.425290 2756 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456604 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456638 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dc7jx\" (UniqueName: \"kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-kube-api-access-dc7jx\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456650 2756 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d4d2990-6062-4444-80d8-5af38105da5f-hubble-tls\") on node \"localhost\" 
DevicePath \"\"" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456659 2756 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d4d2990-6062-4444-80d8-5af38105da5f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456666 2756 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456674 2756 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.456661 kubelet[2756]: I0710 00:24:52.456683 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d4d2990-6062-4444-80d8-5af38105da5f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.457030 kubelet[2756]: I0710 00:24:52.456690 2756 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.457030 kubelet[2756]: I0710 00:24:52.456698 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.457030 kubelet[2756]: I0710 00:24:52.456705 2756 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.457030 kubelet[2756]: I0710 00:24:52.456712 2756 
reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d4d2990-6062-4444-80d8-5af38105da5f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.457030 kubelet[2756]: I0710 00:24:52.456720 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lj59\" (UniqueName: \"kubernetes.io/projected/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-kube-api-access-5lj59\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:52.457030 kubelet[2756]: I0710 00:24:52.456729 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:24:53.044262 kubelet[2756]: I0710 00:24:53.044204 2756 scope.go:117] "RemoveContainer" containerID="5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162" Jul 10 00:24:53.049342 containerd[1560]: time="2025-07-10T00:24:53.048773957Z" level=info msg="RemoveContainer for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\"" Jul 10 00:24:53.053869 systemd[1]: Removed slice kubepods-besteffort-podcb9d067f_5aec_443c_8bc7_ddee6cd6eb8d.slice - libcontainer container kubepods-besteffort-podcb9d067f_5aec_443c_8bc7_ddee6cd6eb8d.slice. Jul 10 00:24:53.057622 containerd[1560]: time="2025-07-10T00:24:53.057590003Z" level=info msg="RemoveContainer for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" returns successfully" Jul 10 00:24:53.058367 kubelet[2756]: I0710 00:24:53.058318 2756 scope.go:117] "RemoveContainer" containerID="5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162" Jul 10 00:24:53.063139 systemd[1]: Removed slice kubepods-burstable-pod9d4d2990_6062_4444_80d8_5af38105da5f.slice - libcontainer container kubepods-burstable-pod9d4d2990_6062_4444_80d8_5af38105da5f.slice. 
Jul 10 00:24:53.063816 systemd[1]: kubepods-burstable-pod9d4d2990_6062_4444_80d8_5af38105da5f.slice: Consumed 8.388s CPU time, 125.3M memory peak, 240K read from disk, 16.6M written to disk. Jul 10 00:24:53.065566 containerd[1560]: time="2025-07-10T00:24:53.060229625Z" level=error msg="ContainerStatus for \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\": not found" Jul 10 00:24:53.065643 kubelet[2756]: E0710 00:24:53.065392 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\": not found" containerID="5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162" Jul 10 00:24:53.065643 kubelet[2756]: I0710 00:24:53.065429 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162"} err="failed to get container status \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\": rpc error: code = NotFound desc = an error occurred when try to find container \"5aef5604d6b9c15ed1d4fc2b3957e4b2f9d8fa7ee0a77cbced45c7604b450162\": not found" Jul 10 00:24:53.065643 kubelet[2756]: I0710 00:24:53.065471 2756 scope.go:117] "RemoveContainer" containerID="bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051" Jul 10 00:24:53.067272 containerd[1560]: time="2025-07-10T00:24:53.067238908Z" level=info msg="RemoveContainer for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\"" Jul 10 00:24:53.073477 containerd[1560]: time="2025-07-10T00:24:53.073433718Z" level=info msg="RemoveContainer for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" returns successfully" Jul 10 
00:24:53.073720 kubelet[2756]: I0710 00:24:53.073674 2756 scope.go:117] "RemoveContainer" containerID="2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b" Jul 10 00:24:53.075547 containerd[1560]: time="2025-07-10T00:24:53.075508806Z" level=info msg="RemoveContainer for \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\"" Jul 10 00:24:53.080708 containerd[1560]: time="2025-07-10T00:24:53.080676110Z" level=info msg="RemoveContainer for \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" returns successfully" Jul 10 00:24:53.080994 kubelet[2756]: I0710 00:24:53.080926 2756 scope.go:117] "RemoveContainer" containerID="0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42" Jul 10 00:24:53.092251 containerd[1560]: time="2025-07-10T00:24:53.092188875Z" level=info msg="RemoveContainer for \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\"" Jul 10 00:24:53.096814 containerd[1560]: time="2025-07-10T00:24:53.096775565Z" level=info msg="RemoveContainer for \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" returns successfully" Jul 10 00:24:53.097085 kubelet[2756]: I0710 00:24:53.097050 2756 scope.go:117] "RemoveContainer" containerID="b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93" Jul 10 00:24:53.098192 containerd[1560]: time="2025-07-10T00:24:53.098169431Z" level=info msg="RemoveContainer for \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\"" Jul 10 00:24:53.098212 systemd[1]: var-lib-kubelet-pods-9d4d2990\x2d6062\x2d4444\x2d80d8\x2d5af38105da5f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:24:53.098382 systemd[1]: var-lib-kubelet-pods-cb9d067f\x2d5aec\x2d443c\x2d8bc7\x2dddee6cd6eb8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5lj59.mount: Deactivated successfully. 
Jul 10 00:24:53.098502 systemd[1]: var-lib-kubelet-pods-9d4d2990\x2d6062\x2d4444\x2d80d8\x2d5af38105da5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddc7jx.mount: Deactivated successfully. Jul 10 00:24:53.098619 systemd[1]: var-lib-kubelet-pods-9d4d2990\x2d6062\x2d4444\x2d80d8\x2d5af38105da5f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:24:53.102663 containerd[1560]: time="2025-07-10T00:24:53.102598634Z" level=info msg="RemoveContainer for \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" returns successfully" Jul 10 00:24:53.102814 kubelet[2756]: I0710 00:24:53.102784 2756 scope.go:117] "RemoveContainer" containerID="8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57" Jul 10 00:24:53.104127 containerd[1560]: time="2025-07-10T00:24:53.104107677Z" level=info msg="RemoveContainer for \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\"" Jul 10 00:24:53.107566 containerd[1560]: time="2025-07-10T00:24:53.107535573Z" level=info msg="RemoveContainer for \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" returns successfully" Jul 10 00:24:53.107721 kubelet[2756]: I0710 00:24:53.107689 2756 scope.go:117] "RemoveContainer" containerID="bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051" Jul 10 00:24:53.107921 containerd[1560]: time="2025-07-10T00:24:53.107881064Z" level=error msg="ContainerStatus for \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\": not found" Jul 10 00:24:53.108040 kubelet[2756]: E0710 00:24:53.108012 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\": not found" 
containerID="bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051" Jul 10 00:24:53.108097 kubelet[2756]: I0710 00:24:53.108045 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051"} err="failed to get container status \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdf21e7ff1c22ffd408cf6878583127f7caaee9dd359b35b5ca6eff87b58d051\": not found" Jul 10 00:24:53.108097 kubelet[2756]: I0710 00:24:53.108075 2756 scope.go:117] "RemoveContainer" containerID="2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b" Jul 10 00:24:53.108221 containerd[1560]: time="2025-07-10T00:24:53.108195566Z" level=error msg="ContainerStatus for \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\": not found" Jul 10 00:24:53.108313 kubelet[2756]: E0710 00:24:53.108288 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\": not found" containerID="2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b" Jul 10 00:24:53.108368 kubelet[2756]: I0710 00:24:53.108310 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b"} err="failed to get container status \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e560823f5d9859c28760dab8a0a39845e6dc31bb2267e5ca85f78354b56612b\": not found" Jul 10 
00:24:53.108368 kubelet[2756]: I0710 00:24:53.108329 2756 scope.go:117] "RemoveContainer" containerID="0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42" Jul 10 00:24:53.108509 containerd[1560]: time="2025-07-10T00:24:53.108467168Z" level=error msg="ContainerStatus for \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\": not found" Jul 10 00:24:53.108667 kubelet[2756]: E0710 00:24:53.108627 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\": not found" containerID="0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42" Jul 10 00:24:53.108738 kubelet[2756]: I0710 00:24:53.108674 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42"} err="failed to get container status \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\": rpc error: code = NotFound desc = an error occurred when try to find container \"0508689b373099bfa1ee00f2a2c3d955e6b0dd2ecf651df43030830ba3a5ed42\": not found" Jul 10 00:24:53.108738 kubelet[2756]: I0710 00:24:53.108706 2756 scope.go:117] "RemoveContainer" containerID="b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93" Jul 10 00:24:53.108935 containerd[1560]: time="2025-07-10T00:24:53.108901927Z" level=error msg="ContainerStatus for \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\": not found" Jul 10 00:24:53.109048 kubelet[2756]: E0710 00:24:53.109025 2756 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\": not found" containerID="b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93" Jul 10 00:24:53.109205 kubelet[2756]: I0710 00:24:53.109046 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93"} err="failed to get container status \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\": rpc error: code = NotFound desc = an error occurred when try to find container \"b363a7b26fafa03cc969dbccbb5acbdc901c61d34c757bdacdd4b37ce1231a93\": not found" Jul 10 00:24:53.109205 kubelet[2756]: I0710 00:24:53.109060 2756 scope.go:117] "RemoveContainer" containerID="8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57" Jul 10 00:24:53.109285 containerd[1560]: time="2025-07-10T00:24:53.109203185Z" level=error msg="ContainerStatus for \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\": not found" Jul 10 00:24:53.109319 kubelet[2756]: E0710 00:24:53.109286 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\": not found" containerID="8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57" Jul 10 00:24:53.109319 kubelet[2756]: I0710 00:24:53.109302 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57"} err="failed to get container status 
\"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d88c9f539d89609522efa32d9e0c40bfaeb914a57ca44c6a52fc9fd00d06c57\": not found" Jul 10 00:24:53.283746 kubelet[2756]: E0710 00:24:53.283688 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:24:53.995311 sshd[4540]: Connection closed by 10.0.0.1 port 37624 Jul 10 00:24:53.995783 sshd-session[4538]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:54.008907 systemd[1]: sshd@36-10.0.0.69:22-10.0.0.1:37624.service: Deactivated successfully. Jul 10 00:24:54.010932 systemd[1]: session-37.scope: Deactivated successfully. Jul 10 00:24:54.012001 systemd-logind[1542]: Session 37 logged out. Waiting for processes to exit. Jul 10 00:24:54.015397 systemd[1]: Started sshd@37-10.0.0.69:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628). Jul 10 00:24:54.016252 systemd-logind[1542]: Removed session 37. Jul 10 00:24:54.078382 sshd[4690]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:24:54.080179 sshd-session[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:54.084907 systemd-logind[1542]: New session 38 of user core. Jul 10 00:24:54.099243 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jul 10 00:24:54.286355 kubelet[2756]: I0710 00:24:54.286201 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4d2990-6062-4444-80d8-5af38105da5f" path="/var/lib/kubelet/pods/9d4d2990-6062-4444-80d8-5af38105da5f/volumes" Jul 10 00:24:54.287069 kubelet[2756]: I0710 00:24:54.287045 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d" path="/var/lib/kubelet/pods/cb9d067f-5aec-443c-8bc7-ddee6cd6eb8d/volumes" Jul 10 00:24:54.773668 sshd[4694]: Connection closed by 10.0.0.1 port 37628 Jul 10 00:24:54.775223 sshd-session[4690]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:54.786455 systemd[1]: sshd@37-10.0.0.69:22-10.0.0.1:37628.service: Deactivated successfully. Jul 10 00:24:54.790765 systemd[1]: session-38.scope: Deactivated successfully. Jul 10 00:24:54.793462 systemd-logind[1542]: Session 38 logged out. Waiting for processes to exit. Jul 10 00:24:54.800099 systemd[1]: Started sshd@38-10.0.0.69:22-10.0.0.1:37630.service - OpenSSH per-connection server daemon (10.0.0.1:37630). Jul 10 00:24:54.808044 systemd-logind[1542]: Removed session 38. Jul 10 00:24:54.820711 systemd[1]: Created slice kubepods-burstable-poda9ed1cdd_6127_4b02_a798_f231a9bae190.slice - libcontainer container kubepods-burstable-poda9ed1cdd_6127_4b02_a798_f231a9bae190.slice. Jul 10 00:24:54.857839 sshd[4706]: Accepted publickey for core from 10.0.0.1 port 37630 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:24:54.859263 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:54.863868 systemd-logind[1542]: New session 39 of user core. Jul 10 00:24:54.871115 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jul 10 00:24:54.921918 sshd[4709]: Connection closed by 10.0.0.1 port 37630 Jul 10 00:24:54.922365 sshd-session[4706]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:54.930943 systemd[1]: sshd@38-10.0.0.69:22-10.0.0.1:37630.service: Deactivated successfully. Jul 10 00:24:54.933073 systemd[1]: session-39.scope: Deactivated successfully. Jul 10 00:24:54.933808 systemd-logind[1542]: Session 39 logged out. Waiting for processes to exit. Jul 10 00:24:54.937375 systemd[1]: Started sshd@39-10.0.0.69:22-10.0.0.1:37638.service - OpenSSH per-connection server daemon (10.0.0.1:37638). Jul 10 00:24:54.938137 systemd-logind[1542]: Removed session 39. Jul 10 00:24:54.970441 kubelet[2756]: I0710 00:24:54.970390 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9ed1cdd-6127-4b02-a798-f231a9bae190-clustermesh-secrets\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970441 kubelet[2756]: I0710 00:24:54.970437 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-xtables-lock\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970563 kubelet[2756]: I0710 00:24:54.970460 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-host-proc-sys-kernel\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970563 kubelet[2756]: I0710 00:24:54.970537 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-cni-path\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970667 kubelet[2756]: I0710 00:24:54.970588 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-lib-modules\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970667 kubelet[2756]: I0710 00:24:54.970616 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-cilium-cgroup\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970667 kubelet[2756]: I0710 00:24:54.970635 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-host-proc-sys-net\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970667 kubelet[2756]: I0710 00:24:54.970663 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-cilium-run\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970807 kubelet[2756]: I0710 00:24:54.970684 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9ed1cdd-6127-4b02-a798-f231a9bae190-cilium-config-path\") pod \"cilium-6g5p5\" (UID: 
\"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970807 kubelet[2756]: I0710 00:24:54.970733 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9ed1cdd-6127-4b02-a798-f231a9bae190-cilium-ipsec-secrets\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970807 kubelet[2756]: I0710 00:24:54.970769 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-hostproc\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970807 kubelet[2756]: I0710 00:24:54.970795 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-etc-cni-netd\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970949 kubelet[2756]: I0710 00:24:54.970816 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9ed1cdd-6127-4b02-a798-f231a9bae190-hubble-tls\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970949 kubelet[2756]: I0710 00:24:54.970838 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sf6l\" (UniqueName: \"kubernetes.io/projected/a9ed1cdd-6127-4b02-a798-f231a9bae190-kube-api-access-4sf6l\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.970949 kubelet[2756]: I0710 
00:24:54.970858 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9ed1cdd-6127-4b02-a798-f231a9bae190-bpf-maps\") pod \"cilium-6g5p5\" (UID: \"a9ed1cdd-6127-4b02-a798-f231a9bae190\") " pod="kube-system/cilium-6g5p5" Jul 10 00:24:54.988359 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 37638 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:24:54.989785 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:24:54.994895 systemd-logind[1542]: New session 40 of user core. Jul 10 00:24:55.001134 systemd[1]: Started session-40.scope - Session 40 of User core. Jul 10 00:24:55.125824 kubelet[2756]: E0710 00:24:55.125238 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:24:55.126226 containerd[1560]: time="2025-07-10T00:24:55.126194088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g5p5,Uid:a9ed1cdd-6127-4b02-a798-f231a9bae190,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:55.166028 containerd[1560]: time="2025-07-10T00:24:55.165607876Z" level=info msg="connecting to shim 3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299" address="unix:///run/containerd/s/f630f75ae997c548ab59c6945a61ffc8eb4f5371e32ebb2b126660b5128ae6ba" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:55.196336 systemd[1]: Started cri-containerd-3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299.scope - libcontainer container 3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299. 
Jul 10 00:24:55.226529 containerd[1560]: time="2025-07-10T00:24:55.226483427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g5p5,Uid:a9ed1cdd-6127-4b02-a798-f231a9bae190,Namespace:kube-system,Attempt:0,} returns sandbox id \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\"" Jul 10 00:24:55.227344 kubelet[2756]: E0710 00:24:55.227314 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:24:55.236917 containerd[1560]: time="2025-07-10T00:24:55.236492260Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:24:55.248305 containerd[1560]: time="2025-07-10T00:24:55.248247702Z" level=info msg="Container e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:55.254944 containerd[1560]: time="2025-07-10T00:24:55.254902897Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\"" Jul 10 00:24:55.255436 containerd[1560]: time="2025-07-10T00:24:55.255400915Z" level=info msg="StartContainer for \"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\"" Jul 10 00:24:55.256184 containerd[1560]: time="2025-07-10T00:24:55.256159223Z" level=info msg="connecting to shim e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c" address="unix:///run/containerd/s/f630f75ae997c548ab59c6945a61ffc8eb4f5371e32ebb2b126660b5128ae6ba" protocol=ttrpc version=3 Jul 10 00:24:55.281122 systemd[1]: Started cri-containerd-e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c.scope - libcontainer 
container e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c. Jul 10 00:24:55.324656 containerd[1560]: time="2025-07-10T00:24:55.324605586Z" level=info msg="StartContainer for \"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\" returns successfully" Jul 10 00:24:55.335075 systemd[1]: cri-containerd-e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c.scope: Deactivated successfully. Jul 10 00:24:55.336335 containerd[1560]: time="2025-07-10T00:24:55.335839425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\" id:\"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\" pid:4787 exited_at:{seconds:1752107095 nanos:335328092}" Jul 10 00:24:55.336335 containerd[1560]: time="2025-07-10T00:24:55.336137808Z" level=info msg="received exit event container_id:\"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\" id:\"e2928118cfca6be9b85e1e3f0b1926a314837701e9f7d96548e246a4e523aa5c\" pid:4787 exited_at:{seconds:1752107095 nanos:335328092}" Jul 10 00:24:56.062302 kubelet[2756]: E0710 00:24:56.062246 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:24:56.301675 containerd[1560]: time="2025-07-10T00:24:56.301613546Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:24:56.623958 kubelet[2756]: I0710 00:24:56.623859 2756 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:24:56Z","lastTransitionTime":"2025-07-10T00:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized"} Jul 10 00:24:57.009937 containerd[1560]: time="2025-07-10T00:24:57.009816368Z" level=info msg="Container 4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:57.407073 containerd[1560]: time="2025-07-10T00:24:57.407014004Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\"" Jul 10 00:24:57.407674 containerd[1560]: time="2025-07-10T00:24:57.407638840Z" level=info msg="StartContainer for \"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\"" Jul 10 00:24:57.408756 containerd[1560]: time="2025-07-10T00:24:57.408713915Z" level=info msg="connecting to shim 4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c" address="unix:///run/containerd/s/f630f75ae997c548ab59c6945a61ffc8eb4f5371e32ebb2b126660b5128ae6ba" protocol=ttrpc version=3 Jul 10 00:24:57.426553 kubelet[2756]: E0710 00:24:57.426512 2756 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:24:57.430111 systemd[1]: Started cri-containerd-4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c.scope - libcontainer container 4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c. Jul 10 00:24:57.466110 systemd[1]: cri-containerd-4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c.scope: Deactivated successfully. 
Jul 10 00:24:57.467438 containerd[1560]: time="2025-07-10T00:24:57.467408644Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\" id:\"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\" pid:4835 exited_at:{seconds:1752107097 nanos:467171617}" Jul 10 00:24:57.602608 containerd[1560]: time="2025-07-10T00:24:57.602554466Z" level=info msg="received exit event container_id:\"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\" id:\"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\" pid:4835 exited_at:{seconds:1752107097 nanos:467171617}" Jul 10 00:24:57.603582 containerd[1560]: time="2025-07-10T00:24:57.603562504Z" level=info msg="StartContainer for \"4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c\" returns successfully" Jul 10 00:24:57.622297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f5820743fdf28d5c672ba4affdb63860d89bc7c962147a2b3ab55f292262e6c-rootfs.mount: Deactivated successfully. Jul 10 00:24:58.265469 kubelet[2756]: E0710 00:24:58.265430 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:24:58.471403 containerd[1560]: time="2025-07-10T00:24:58.471353500Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:24:58.610329 containerd[1560]: time="2025-07-10T00:24:58.610122951Z" level=info msg="Container f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:58.614590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3643655884.mount: Deactivated successfully. 
Jul 10 00:24:58.698063 containerd[1560]: time="2025-07-10T00:24:58.698012844Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\"" Jul 10 00:24:58.698556 containerd[1560]: time="2025-07-10T00:24:58.698512194Z" level=info msg="StartContainer for \"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\"" Jul 10 00:24:58.699762 containerd[1560]: time="2025-07-10T00:24:58.699738974Z" level=info msg="connecting to shim f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef" address="unix:///run/containerd/s/f630f75ae997c548ab59c6945a61ffc8eb4f5371e32ebb2b126660b5128ae6ba" protocol=ttrpc version=3 Jul 10 00:24:58.721119 systemd[1]: Started cri-containerd-f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef.scope - libcontainer container f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef. Jul 10 00:24:58.761080 systemd[1]: cri-containerd-f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef.scope: Deactivated successfully. 
Jul 10 00:24:58.762046 containerd[1560]: time="2025-07-10T00:24:58.762012651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\" id:\"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\" pid:4878 exited_at:{seconds:1752107098 nanos:761710192}"
Jul 10 00:24:58.883232 containerd[1560]: time="2025-07-10T00:24:58.883142118Z" level=info msg="received exit event container_id:\"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\" id:\"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\" pid:4878 exited_at:{seconds:1752107098 nanos:761710192}"
Jul 10 00:24:58.905178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef-rootfs.mount: Deactivated successfully.
Jul 10 00:24:58.914589 containerd[1560]: time="2025-07-10T00:24:58.914544612Z" level=info msg="StartContainer for \"f6878c09deab12613cf1fbdad97f1ac1c9912ae8bf27678990dfd145e00f35ef\" returns successfully"
Jul 10 00:24:59.270716 kubelet[2756]: E0710 00:24:59.270575 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:24:59.277830 containerd[1560]: time="2025-07-10T00:24:59.277749115Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:24:59.289900 containerd[1560]: time="2025-07-10T00:24:59.289836610Z" level=info msg="Container 522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:24:59.297612 containerd[1560]: time="2025-07-10T00:24:59.297569203Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\""
Jul 10 00:24:59.299163 containerd[1560]: time="2025-07-10T00:24:59.298157171Z" level=info msg="StartContainer for \"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\""
Jul 10 00:24:59.299163 containerd[1560]: time="2025-07-10T00:24:59.298916512Z" level=info msg="connecting to shim 522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab" address="unix:///run/containerd/s/f630f75ae997c548ab59c6945a61ffc8eb4f5371e32ebb2b126660b5128ae6ba" protocol=ttrpc version=3
Jul 10 00:24:59.324235 systemd[1]: Started cri-containerd-522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab.scope - libcontainer container 522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab.
Jul 10 00:24:59.358606 systemd[1]: cri-containerd-522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab.scope: Deactivated successfully.
Jul 10 00:24:59.358954 containerd[1560]: time="2025-07-10T00:24:59.358914800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\" id:\"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\" pid:4916 exited_at:{seconds:1752107099 nanos:358673426}"
Jul 10 00:24:59.361924 containerd[1560]: time="2025-07-10T00:24:59.361878282Z" level=info msg="received exit event container_id:\"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\" id:\"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\" pid:4916 exited_at:{seconds:1752107099 nanos:358673426}"
Jul 10 00:24:59.371763 containerd[1560]: time="2025-07-10T00:24:59.371692586Z" level=info msg="StartContainer for \"522ea4c8f06eb31ee34d6a8320e831770eb6f522a32d65bbe80110448a2effab\" returns successfully"
Jul 10 00:24:59.612865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736441134.mount: Deactivated successfully.
Jul 10 00:25:00.275489 kubelet[2756]: E0710 00:25:00.275435 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:25:00.280118 containerd[1560]: time="2025-07-10T00:25:00.280069986Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:25:00.294133 containerd[1560]: time="2025-07-10T00:25:00.294081925Z" level=info msg="Container 7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:25:00.301861 containerd[1560]: time="2025-07-10T00:25:00.301822943Z" level=info msg="CreateContainer within sandbox \"3093a594c855a9997a87a1e5bc1d416864ca89bbc896166ca88854e4d7cfa299\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\""
Jul 10 00:25:00.303000 containerd[1560]: time="2025-07-10T00:25:00.302336661Z" level=info msg="StartContainer for \"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\""
Jul 10 00:25:00.303334 containerd[1560]: time="2025-07-10T00:25:00.303311487Z" level=info msg="connecting to shim 7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac" address="unix:///run/containerd/s/f630f75ae997c548ab59c6945a61ffc8eb4f5371e32ebb2b126660b5128ae6ba" protocol=ttrpc version=3
Jul 10 00:25:00.326152 systemd[1]: Started cri-containerd-7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac.scope - libcontainer container 7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac.
Jul 10 00:25:00.371894 containerd[1560]: time="2025-07-10T00:25:00.371836813Z" level=info msg="StartContainer for \"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" returns successfully"
Jul 10 00:25:00.455920 containerd[1560]: time="2025-07-10T00:25:00.455854836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"52a33ffb6c81c6db53c65ad38ed958b4cb66090470cf90d8663432e905d40091\" pid:4987 exited_at:{seconds:1752107100 nanos:455406201}"
Jul 10 00:25:00.898080 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 10 00:25:01.282663 kubelet[2756]: E0710 00:25:01.282201 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:25:01.297573 kubelet[2756]: I0710 00:25:01.297512 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6g5p5" podStartSLOduration=7.297493236 podStartE2EDuration="7.297493236s" podCreationTimestamp="2025-07-10 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:01.296760777 +0000 UTC m=+169.152934100" watchObservedRunningTime="2025-07-10 00:25:01.297493236 +0000 UTC m=+169.153666559"
Jul 10 00:25:01.743803 containerd[1560]: time="2025-07-10T00:25:01.743731782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"4420187975c754c86568b054c250e2377ec3aaadc2c490b3af7875b118b9810a\" pid:5061 exit_status:1 exited_at:{seconds:1752107101 nanos:743333613}"
Jul 10 00:25:03.126481 kubelet[2756]: E0710 00:25:03.126390 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:25:03.852167 containerd[1560]: time="2025-07-10T00:25:03.852098132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"850859d4a0adf370242cb6380516845352bf4ba4c1344f6eda38931efff76bd1\" pid:5360 exit_status:1 exited_at:{seconds:1752107103 nanos:851298375}"
Jul 10 00:25:04.695938 systemd-networkd[1457]: lxc_health: Link UP
Jul 10 00:25:04.698559 systemd-networkd[1457]: lxc_health: Gained carrier
Jul 10 00:25:05.129014 kubelet[2756]: E0710 00:25:05.127818 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:25:05.291629 kubelet[2756]: E0710 00:25:05.291585 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:25:05.969554 containerd[1560]: time="2025-07-10T00:25:05.969491817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"3eae8585a75b6f69e2fc55d31da850765a713b8823347e0b6754c634da801641\" pid:5548 exited_at:{seconds:1752107105 nanos:968987638}"
Jul 10 00:25:06.293618 kubelet[2756]: E0710 00:25:06.293581 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:25:06.372241 systemd-networkd[1457]: lxc_health: Gained IPv6LL
Jul 10 00:25:08.085514 containerd[1560]: time="2025-07-10T00:25:08.085250938Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"a3389e25275db2b4c966a2fce7eccb53528701cdc9c47a048bdf528d7109bb5d\" pid:5575 exited_at:{seconds:1752107108 nanos:84377263}"
Jul 10 00:25:10.184948 containerd[1560]: time="2025-07-10T00:25:10.184900032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"132f092d0aabc4863b9f5e90ad43177d976f45986661d92a5aa3c2f6e091d872\" pid:5603 exited_at:{seconds:1752107110 nanos:184555857}"
Jul 10 00:25:12.268547 containerd[1560]: time="2025-07-10T00:25:12.268499517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b209999bf4943aa6cbbcb4bfec818fc8139783d649b26f685650a1ecf207bac\" id:\"281065e0ab2ae3512ec29a986d042b1a3685eb0f0485447645e2c8069a2a3d69\" pid:5627 exited_at:{seconds:1752107112 nanos:267905811}"
Jul 10 00:25:12.287531 sshd[4718]: Connection closed by 10.0.0.1 port 37638
Jul 10 00:25:12.287934 sshd-session[4716]: pam_unix(sshd:session): session closed for user core
Jul 10 00:25:12.291860 systemd[1]: sshd@39-10.0.0.69:22-10.0.0.1:37638.service: Deactivated successfully.
Jul 10 00:25:12.293930 systemd[1]: session-40.scope: Deactivated successfully.
Jul 10 00:25:12.294811 systemd-logind[1542]: Session 40 logged out. Waiting for processes to exit.
Jul 10 00:25:12.296168 systemd-logind[1542]: Removed session 40.
Jul 10 00:25:12.300163 containerd[1560]: time="2025-07-10T00:25:12.300121576Z" level=info msg="StopPodSandbox for \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\""
Jul 10 00:25:12.300280 containerd[1560]: time="2025-07-10T00:25:12.300262026Z" level=info msg="TearDown network for sandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" successfully"
Jul 10 00:25:12.300280 containerd[1560]: time="2025-07-10T00:25:12.300275152Z" level=info msg="StopPodSandbox for \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" returns successfully"
Jul 10 00:25:12.300647 containerd[1560]: time="2025-07-10T00:25:12.300613704Z" level=info msg="RemovePodSandbox for \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\""
Jul 10 00:25:12.300703 containerd[1560]: time="2025-07-10T00:25:12.300647971Z" level=info msg="Forcibly stopping sandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\""
Jul 10 00:25:12.300729 containerd[1560]: time="2025-07-10T00:25:12.300718196Z" level=info msg="TearDown network for sandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" successfully"
Jul 10 00:25:12.302357 containerd[1560]: time="2025-07-10T00:25:12.302328022Z" level=info msg="Ensure that sandbox 3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4 in task-service has been cleanup successfully"
Jul 10 00:25:12.305870 containerd[1560]: time="2025-07-10T00:25:12.305825678Z" level=info msg="RemovePodSandbox \"3d817f10f5f6f1b1a3424ee90873d45d6eda1d3f3f9afdc5051fb64a851537c4\" returns successfully"
Jul 10 00:25:12.306143 containerd[1560]: time="2025-07-10T00:25:12.306104716Z" level=info msg="StopPodSandbox for \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\""
Jul 10 00:25:12.306208 containerd[1560]: time="2025-07-10T00:25:12.306187907Z" level=info msg="TearDown network for sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" successfully"
Jul 10 00:25:12.306208 containerd[1560]: time="2025-07-10T00:25:12.306200863Z" level=info msg="StopPodSandbox for \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" returns successfully"
Jul 10 00:25:12.306416 containerd[1560]: time="2025-07-10T00:25:12.306391360Z" level=info msg="RemovePodSandbox for \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\""
Jul 10 00:25:12.306416 containerd[1560]: time="2025-07-10T00:25:12.306416268Z" level=info msg="Forcibly stopping sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\""
Jul 10 00:25:12.306506 containerd[1560]: time="2025-07-10T00:25:12.306469530Z" level=info msg="TearDown network for sandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" successfully"
Jul 10 00:25:12.308825 containerd[1560]: time="2025-07-10T00:25:12.308137388Z" level=info msg="Ensure that sandbox f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865 in task-service has been cleanup successfully"
Jul 10 00:25:12.311622 containerd[1560]: time="2025-07-10T00:25:12.311582684Z" level=info msg="RemovePodSandbox \"f95734a83a7d91293d28f7e46aff81273d715684b4e96ba98ea5d3e2d27a8865\" returns successfully"