Jul 10 05:46:30.830838 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 03:48:39 -00 2025 Jul 10 05:46:30.830860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 05:46:30.830879 kernel: BIOS-provided physical RAM map: Jul 10 05:46:30.830886 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 05:46:30.830892 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 10 05:46:30.830899 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 10 05:46:30.830910 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 10 05:46:30.830918 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 10 05:46:30.830930 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 10 05:46:30.830936 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 10 05:46:30.830943 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 10 05:46:30.830949 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 10 05:46:30.830956 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 10 05:46:30.830962 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 10 05:46:30.830985 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 10 05:46:30.831005 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 10 05:46:30.831017 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 10 05:46:30.831024 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 10 05:46:30.831036 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 10 05:46:30.831043 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 10 05:46:30.831049 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 10 05:46:30.831056 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 10 05:46:30.831063 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 05:46:30.831070 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 05:46:30.831077 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 10 05:46:30.831087 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 05:46:30.831093 kernel: NX (Execute Disable) protection: active Jul 10 05:46:30.831100 kernel: APIC: Static calls initialized Jul 10 05:46:30.831107 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 10 05:46:30.831114 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 10 05:46:30.831121 kernel: extended physical RAM map: Jul 10 05:46:30.831128 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 05:46:30.831135 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 10 05:46:30.831142 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 10 05:46:30.831149 kernel: reserve 
setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 10 05:46:30.831156 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 10 05:46:30.831165 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 10 05:46:30.831171 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 10 05:46:30.831178 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jul 10 05:46:30.831188 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 10 05:46:30.831204 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 10 05:46:30.831211 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 10 05:46:30.831226 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 10 05:46:30.831237 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 10 05:46:30.831246 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 10 05:46:30.831253 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 10 05:46:30.831261 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 10 05:46:30.831268 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 10 05:46:30.831275 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 10 05:46:30.831282 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 10 05:46:30.831289 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 10 05:46:30.831306 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 10 05:46:30.831330 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 10 05:46:30.831344 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 10 05:46:30.831351 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 05:46:30.831358 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 05:46:30.831372 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 10 05:46:30.831380 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 05:46:30.831389 kernel: efi: EFI v2.7 by EDK II Jul 10 05:46:30.831397 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 10 05:46:30.831404 kernel: random: crng init done Jul 10 05:46:30.831421 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 10 05:46:30.831438 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 10 05:46:30.831451 kernel: secureboot: Secure boot disabled Jul 10 05:46:30.831462 kernel: SMBIOS 2.8 present. 
Jul 10 05:46:30.831481 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 10 05:46:30.831498 kernel: DMI: Memory slots populated: 1/1 Jul 10 05:46:30.831540 kernel: Hypervisor detected: KVM Jul 10 05:46:30.831550 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 05:46:30.831559 kernel: kvm-clock: using sched offset of 5841061078 cycles Jul 10 05:46:30.831568 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 05:46:30.831577 kernel: tsc: Detected 2794.748 MHz processor Jul 10 05:46:30.831585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 05:46:30.831596 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 05:46:30.831603 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 10 05:46:30.831613 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 10 05:46:30.831621 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 05:46:30.831628 kernel: Using GB pages for direct mapping Jul 10 05:46:30.831636 kernel: ACPI: Early table checksum verification disabled Jul 10 05:46:30.831643 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 10 05:46:30.831651 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 10 05:46:30.831658 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:46:30.831668 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:46:30.831675 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 10 05:46:30.831683 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:46:30.831697 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:46:30.831709 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:46:30.831717 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 05:46:30.831724 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 10 05:46:30.831732 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 10 05:46:30.831739 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 10 05:46:30.831749 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 10 05:46:30.831756 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 10 05:46:30.831772 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 10 05:46:30.831782 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 10 05:46:30.831790 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 10 05:46:30.831797 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 10 05:46:30.831804 kernel: No NUMA configuration found Jul 10 05:46:30.831812 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 10 05:46:30.831824 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 10 05:46:30.831844 kernel: Zone ranges: Jul 10 05:46:30.831867 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 05:46:30.831875 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 10 05:46:30.831882 kernel: Normal empty Jul 10 05:46:30.831890 kernel: Device empty Jul 10 05:46:30.831897 kernel: Movable zone start for each node Jul 10 05:46:30.831914 
kernel: Early memory node ranges Jul 10 05:46:30.831926 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 10 05:46:30.831934 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 10 05:46:30.831954 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 10 05:46:30.831966 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 10 05:46:30.831973 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 10 05:46:30.831981 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 10 05:46:30.831988 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 10 05:46:30.831996 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 10 05:46:30.832003 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 10 05:46:30.832010 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 05:46:30.832021 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 10 05:46:30.832037 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 10 05:46:30.832045 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 05:46:30.832052 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 10 05:46:30.832060 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 10 05:46:30.832070 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 10 05:46:30.832077 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 10 05:46:30.832085 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 10 05:46:30.832093 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 05:46:30.832101 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 05:46:30.832115 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 05:46:30.832127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 05:46:30.832139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 05:46:30.832154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 05:46:30.832162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 05:46:30.832170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 05:46:30.832177 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 05:46:30.832185 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 05:46:30.832193 kernel: TSC deadline timer available Jul 10 05:46:30.832210 kernel: CPU topo: Max. logical packages: 1 Jul 10 05:46:30.832221 kernel: CPU topo: Max. logical dies: 1 Jul 10 05:46:30.832233 kernel: CPU topo: Max. dies per package: 1 Jul 10 05:46:30.832240 kernel: CPU topo: Max. threads per core: 1 Jul 10 05:46:30.832248 kernel: CPU topo: Num. cores per package: 4 Jul 10 05:46:30.832255 kernel: CPU topo: Num. 
threads per package: 4 Jul 10 05:46:30.832263 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 10 05:46:30.832270 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 10 05:46:30.832278 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 10 05:46:30.832289 kernel: kvm-guest: setup PV sched yield Jul 10 05:46:30.832296 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 10 05:46:30.832311 kernel: Booting paravirtualized kernel on KVM Jul 10 05:46:30.832319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 05:46:30.832327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 10 05:46:30.832335 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 10 05:46:30.832342 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 10 05:46:30.832352 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 10 05:46:30.832359 kernel: kvm-guest: PV spinlocks enabled Jul 10 05:46:30.832370 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 05:46:30.832379 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 05:46:30.832390 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 05:46:30.832398 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 05:46:30.832406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 05:46:30.832413 kernel: Fallback order for Node 0: 0 Jul 10 05:46:30.832421 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 10 05:46:30.832428 kernel: Policy zone: DMA32 Jul 10 05:46:30.832438 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 05:46:30.832446 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 05:46:30.832454 kernel: ftrace: allocating 40097 entries in 157 pages Jul 10 05:46:30.832461 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 05:46:30.832469 kernel: Dynamic Preempt: voluntary Jul 10 05:46:30.832476 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 05:46:30.832485 kernel: rcu: RCU event tracing is enabled. Jul 10 05:46:30.832492 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 05:46:30.832500 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 05:46:30.832524 kernel: Rude variant of Tasks RCU enabled. Jul 10 05:46:30.832532 kernel: Tracing variant of Tasks RCU enabled. Jul 10 05:46:30.832540 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 05:46:30.832550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 05:46:30.832558 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 05:46:30.832566 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 05:46:30.832574 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 10 05:46:30.832582 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 10 05:46:30.832590 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 05:46:30.832600 kernel: Console: colour dummy device 80x25 Jul 10 05:46:30.832608 kernel: printk: legacy console [ttyS0] enabled Jul 10 05:46:30.832616 kernel: ACPI: Core revision 20240827 Jul 10 05:46:30.832623 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 05:46:30.832631 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 05:46:30.832644 kernel: x2apic enabled Jul 10 05:46:30.832652 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 05:46:30.832659 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 10 05:46:30.832667 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 10 05:46:30.832675 kernel: kvm-guest: setup PV IPIs Jul 10 05:46:30.832686 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 05:46:30.832696 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 05:46:30.832704 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 10 05:46:30.832718 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 05:46:30.832733 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 10 05:46:30.832741 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 10 05:46:30.832755 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 05:46:30.832772 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 05:46:30.832783 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 05:46:30.832790 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 10 05:46:30.832798 kernel: RETBleed: Mitigation: untrained return thunk Jul 10 05:46:30.832815 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 05:46:30.832836 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 05:46:30.832846 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 10 05:46:30.832859 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 10 05:46:30.832867 kernel: x86/bugs: return thunk changed Jul 10 05:46:30.832884 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 10 05:46:30.832907 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 05:46:30.832927 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 05:46:30.832935 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 05:46:30.832943 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 05:46:30.832950 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 10 05:46:30.832958 kernel: Freeing SMP alternatives memory: 32K Jul 10 05:46:30.832966 kernel: pid_max: default: 32768 minimum: 301 Jul 10 05:46:30.832973 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 05:46:30.832981 kernel: landlock: Up and running. Jul 10 05:46:30.833006 kernel: SELinux: Initializing. 
Jul 10 05:46:30.833015 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 05:46:30.833029 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 05:46:30.833044 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 10 05:46:30.833052 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 10 05:46:30.833060 kernel: ... version: 0 Jul 10 05:46:30.833067 kernel: ... bit width: 48 Jul 10 05:46:30.833075 kernel: ... generic registers: 6 Jul 10 05:46:30.833083 kernel: ... value mask: 0000ffffffffffff Jul 10 05:46:30.833093 kernel: ... max period: 00007fffffffffff Jul 10 05:46:30.833101 kernel: ... fixed-purpose events: 0 Jul 10 05:46:30.833109 kernel: ... event mask: 000000000000003f Jul 10 05:46:30.833116 kernel: signal: max sigframe size: 1776 Jul 10 05:46:30.833124 kernel: rcu: Hierarchical SRCU implementation. Jul 10 05:46:30.833132 kernel: rcu: Max phase no-delay instances is 400. Jul 10 05:46:30.833142 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 05:46:30.833154 kernel: smp: Bringing up secondary CPUs ... Jul 10 05:46:30.833162 kernel: smpboot: x86: Booting SMP configuration: Jul 10 05:46:30.833180 kernel: .... node #0, CPUs: #1 #2 #3 Jul 10 05:46:30.833194 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 05:46:30.833209 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 10 05:46:30.833224 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54600K init, 2368K bss, 137196K reserved, 0K cma-reserved) Jul 10 05:46:30.833233 kernel: devtmpfs: initialized Jul 10 05:46:30.833250 kernel: x86/mm: Memory block size: 128MB Jul 10 05:46:30.833263 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 10 05:46:30.833271 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 10 05:46:30.833278 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 10 05:46:30.833289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 10 05:46:30.833297 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 10 05:46:30.833304 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 10 05:46:30.833312 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 05:46:30.833320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 05:46:30.833328 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 05:46:30.833345 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 05:46:30.833360 kernel: audit: initializing netlink subsys (disabled) Jul 10 05:46:30.833383 kernel: audit: type=2000 audit(1752126388.342:1): state=initialized audit_enabled=0 res=1 Jul 10 05:46:30.833397 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 05:46:30.833410 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 05:46:30.833417 kernel: cpuidle: using governor menu Jul 10 05:46:30.833425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 05:46:30.833442 kernel: dca service started, version 1.12.1 Jul 10 05:46:30.833454 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 10 05:46:30.833468 kernel: PCI: Using 
configuration type 1 for base access Jul 10 05:46:30.833488 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 10 05:46:30.833505 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 05:46:30.833545 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 05:46:30.833574 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 05:46:30.833585 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 05:46:30.833593 kernel: ACPI: Added _OSI(Module Device) Jul 10 05:46:30.833600 kernel: ACPI: Added _OSI(Processor Device) Jul 10 05:46:30.833608 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 05:46:30.833616 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 05:46:30.833623 kernel: ACPI: Interpreter enabled Jul 10 05:46:30.833652 kernel: ACPI: PM: (supports S0 S3 S5) Jul 10 05:46:30.833669 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 05:46:30.833688 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 05:46:30.833703 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 05:46:30.833711 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 10 05:46:30.833721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 05:46:30.834034 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 05:46:30.834232 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 10 05:46:30.834557 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 10 05:46:30.834582 kernel: PCI host bridge to bus 0000:00 Jul 10 05:46:30.834812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 05:46:30.835036 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 05:46:30.835294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 05:46:30.835407 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 10 05:46:30.835548 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 10 05:46:30.835674 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 10 05:46:30.835978 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 05:46:30.836236 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 10 05:46:30.836504 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 10 05:46:30.836835 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 10 05:46:30.838245 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 10 05:46:30.838563 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 10 05:46:30.838713 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 05:46:30.839017 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 05:46:30.839326 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 10 05:46:30.841649 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 10 05:46:30.841961 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 10 05:46:30.842266 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 05:46:30.842678 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 
10 05:46:30.843032 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 10 05:46:30.843367 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 10 05:46:30.845036 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 05:46:30.845191 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 10 05:46:30.845361 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 10 05:46:30.845572 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 10 05:46:30.845799 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 10 05:46:30.846111 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 10 05:46:30.846442 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 10 05:46:30.846835 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 10 05:46:30.848481 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 10 05:46:30.850191 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 10 05:46:30.850459 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 10 05:46:30.850849 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 10 05:46:30.850862 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 05:46:30.850871 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 05:46:30.850888 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 05:46:30.850898 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 05:46:30.850911 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 10 05:46:30.850931 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 10 05:46:30.850950 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 10 05:46:30.852148 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 10 05:46:30.852157 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 10 05:46:30.852165 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 10 05:46:30.852173 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 10 05:46:30.852181 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 10 05:46:30.852193 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 10 05:46:30.852201 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 10 05:46:30.852209 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 10 05:46:30.852217 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 10 05:46:30.852228 kernel: iommu: Default domain type: Translated Jul 10 05:46:30.852243 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 05:46:30.852252 kernel: efivars: Registered efivars operations Jul 10 05:46:30.852260 kernel: PCI: Using ACPI for IRQ routing Jul 10 05:46:30.852275 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 05:46:30.852290 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 10 05:46:30.852298 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 10 05:46:30.852306 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 10 05:46:30.852314 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 10 05:46:30.852325 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 10 05:46:30.852332 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 10 05:46:30.852340 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 10 05:46:30.852355 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 10 05:46:30.852650 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 10 05:46:30.852924 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 10 05:46:30.853210 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 05:46:30.853237 kernel: vgaarb: loaded Jul 10 05:46:30.853255 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 05:46:30.853267 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 05:46:30.853280 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 05:46:30.853298 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 05:46:30.853311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 05:46:30.853327 kernel: pnp: PnP ACPI init Jul 10 05:46:30.853698 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 10 05:46:30.853822 kernel: pnp: PnP ACPI: found 6 devices Jul 10 05:46:30.853891 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 05:46:30.853909 kernel: NET: Registered PF_INET protocol family Jul 10 05:46:30.853924 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 05:46:30.853942 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 05:46:30.853960 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 05:46:30.853976 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 05:46:30.853997 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 05:46:30.854018 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 05:46:30.854054 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 05:46:30.854078 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 05:46:30.854102 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 05:46:30.854121 kernel: NET: Registered PF_XDP protocol family Jul 10 05:46:30.854466 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 10 05:46:30.854785 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 10 05:46:30.855117 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 05:46:30.855473 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 05:46:30.855793 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 05:46:30.856058 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 10 05:46:30.856366 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 10 05:46:30.856704 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 10 05:46:30.856729 kernel: PCI: CLS 0 bytes, default 64 Jul 10 05:46:30.856748 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 05:46:30.856761 kernel: Initialise system trusted keyrings Jul 10 05:46:30.856788 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 05:46:30.856803 kernel: Key type asymmetric registered Jul 10 05:46:30.856851 kernel: Asymmetric key parser 'x509' registered Jul 10 05:46:30.856871 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 
05:46:30.856887 kernel: io scheduler mq-deadline registered Jul 10 05:46:30.856901 kernel: io scheduler kyber registered Jul 10 05:46:30.856910 kernel: io scheduler bfq registered Jul 10 05:46:30.856928 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 05:46:30.856964 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 10 05:46:30.856983 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 10 05:46:30.857001 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 10 05:46:30.857020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 05:46:30.857040 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 05:46:30.857058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 05:46:30.857076 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 05:46:30.857096 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 05:46:30.857467 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 10 05:46:30.857506 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 05:46:30.857796 kernel: rtc_cmos 00:04: registered as rtc0 Jul 10 05:46:30.858002 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T05:46:30 UTC (1752126390) Jul 10 05:46:30.858299 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 10 05:46:30.858320 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 10 05:46:30.858329 kernel: efifb: probing for efifb Jul 10 05:46:30.858345 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 10 05:46:30.858354 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 10 05:46:30.858379 kernel: efifb: scrolling: redraw Jul 10 05:46:30.858397 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 10 05:46:30.858415 kernel: Console: switching to colour frame buffer device 160x50 Jul 10 05:46:30.858436 kernel: fb0: EFI VGA frame buffer device Jul 10 05:46:30.858455 kernel: pstore: Using crash dump compression: deflate Jul 10 05:46:30.858476 kernel: pstore: Registered efi_pstore as persistent store backend Jul 10 05:46:30.858493 kernel: NET: Registered PF_INET6 protocol family Jul 10 05:46:30.858532 kernel: Segment Routing with IPv6 Jul 10 05:46:30.858553 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 05:46:30.858601 kernel: NET: Registered PF_PACKET protocol family Jul 10 05:46:30.858640 kernel: Key type dns_resolver registered Jul 10 05:46:30.858658 kernel: IPI shorthand broadcast: enabled Jul 10 05:46:30.858678 kernel: sched_clock: Marking stable (2931002614, 205244970)->(3149900247, -13652663) Jul 10 05:46:30.858695 kernel: registered taskstats version 1 Jul 10 05:46:30.858713 kernel: Loading compiled-in X.509 certificates Jul 10 05:46:30.858731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b89e0dc22b3b76335f64d75ef999e68b43a7102' Jul 10 05:46:30.858752 kernel: Demotion targets for Node 0: null Jul 10 05:46:30.858777 kernel: Key type .fscrypt registered Jul 10 05:46:30.858818 kernel: Key type fscrypt-provisioning registered Jul 10 05:46:30.858840 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 05:46:30.858860 kernel: ima: Allocated hash algorithm: sha1 Jul 10 05:46:30.858881 kernel: ima: No architecture policies found Jul 10 05:46:30.858901 kernel: clk: Disabling unused clocks Jul 10 05:46:30.858924 kernel: Warning: unable to open an initial console. 
Jul 10 05:46:30.858944 kernel: Freeing unused kernel image (initmem) memory: 54600K Jul 10 05:46:30.858964 kernel: Write protecting the kernel read-only data: 24576k Jul 10 05:46:30.859003 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 05:46:30.859023 kernel: Run /init as init process Jul 10 05:46:30.859042 kernel: with arguments: Jul 10 05:46:30.859059 kernel: /init Jul 10 05:46:30.859079 kernel: with environment: Jul 10 05:46:30.859100 kernel: HOME=/ Jul 10 05:46:30.859118 kernel: TERM=linux Jul 10 05:46:30.859138 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 05:46:30.859163 systemd[1]: Successfully made /usr/ read-only. Jul 10 05:46:30.859211 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 05:46:30.859235 systemd[1]: Detected virtualization kvm. Jul 10 05:46:30.859255 systemd[1]: Detected architecture x86-64. Jul 10 05:46:30.859276 systemd[1]: Running in initrd. Jul 10 05:46:30.859297 systemd[1]: No hostname configured, using default hostname. Jul 10 05:46:30.859319 systemd[1]: Hostname set to . Jul 10 05:46:30.859335 systemd[1]: Initializing machine ID from VM UUID. Jul 10 05:46:30.859353 systemd[1]: Queued start job for default target initrd.target. Jul 10 05:46:30.859374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 05:46:30.859382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 05:46:30.859404 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 05:46:30.859425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 05:46:30.859446 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 05:46:30.859468 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 05:46:30.859507 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 05:46:30.859547 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 05:46:30.859563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 05:46:30.859583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 05:46:30.859605 systemd[1]: Reached target paths.target - Path Units. Jul 10 05:46:30.859623 systemd[1]: Reached target slices.target - Slice Units. Jul 10 05:46:30.859645 systemd[1]: Reached target swap.target - Swaps. Jul 10 05:46:30.859663 systemd[1]: Reached target timers.target - Timer Units. Jul 10 05:46:30.860614 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 05:46:30.860657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 05:46:30.860675 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 05:46:30.860696 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 05:46:30.860719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 10 05:46:30.860740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 05:46:30.860760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 05:46:30.860790 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 05:46:30.860827 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 05:46:30.860866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 05:46:30.860886 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 05:46:30.860908 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 05:46:30.860926 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 05:46:30.860949 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 05:46:30.860966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 05:46:30.860975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:46:30.860987 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 05:46:30.861018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 05:46:30.861031 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 05:46:30.861040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 05:46:30.861112 systemd-journald[219]: Collecting audit messages is disabled. Jul 10 05:46:30.861148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 05:46:30.861158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 05:46:30.861180 systemd-journald[219]: Journal started Jul 10 05:46:30.861233 systemd-journald[219]: Runtime Journal (/run/log/journal/c8387c05fee54ae9873f7bb487dd2969) is 6M, max 48.5M, 42.4M free. Jul 10 05:46:30.830573 systemd-modules-load[221]: Inserted module 'overlay' Jul 10 05:46:30.865633 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 05:46:31.025549 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 05:46:31.027619 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 10 05:46:31.028658 kernel: Bridge firewalling registered Jul 10 05:46:31.029988 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 05:46:31.031871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 05:46:31.033678 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:46:31.038731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 05:46:31.040216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 05:46:31.041998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 05:46:31.062229 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 05:46:31.065781 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:46:31.068058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 10 05:46:31.069904 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 05:46:31.083327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 05:46:31.084790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 05:46:31.114816 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6f690b83334156407a81e8d4e91333490630194c4657a5a1ae6bc26eb28e6a0b Jul 10 05:46:31.117408 systemd-resolved[254]: Positive Trust Anchors: Jul 10 05:46:31.117420 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 05:46:31.117449 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 05:46:31.119949 systemd-resolved[254]: Defaulting to hostname 'linux'. Jul 10 05:46:31.121269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 05:46:31.122388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 05:46:31.234544 kernel: SCSI subsystem initialized Jul 10 05:46:31.243550 kernel: Loading iSCSI transport class v2.0-870. Jul 10 05:46:31.254553 kernel: iscsi: registered transport (tcp) Jul 10 05:46:31.281847 kernel: iscsi: registered transport (qla4xxx) Jul 10 05:46:31.281883 kernel: QLogic iSCSI HBA Driver Jul 10 05:46:31.304143 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 05:46:31.324188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 05:46:31.327847 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 05:46:31.386326 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 05:46:31.388361 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 05:46:31.452546 kernel: raid6: avx2x4 gen() 30370 MB/s Jul 10 05:46:31.469536 kernel: raid6: avx2x2 gen() 30975 MB/s Jul 10 05:46:31.486576 kernel: raid6: avx2x1 gen() 25943 MB/s Jul 10 05:46:31.486609 kernel: raid6: using algorithm avx2x2 gen() 30975 MB/s Jul 10 05:46:31.504571 kernel: raid6: .... xor() 19971 MB/s, rmw enabled Jul 10 05:46:31.504588 kernel: raid6: using avx2x2 recovery algorithm Jul 10 05:46:31.525536 kernel: xor: automatically using best checksumming function avx Jul 10 05:46:31.719589 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 05:46:31.728476 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 05:46:31.730333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 05:46:31.762600 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Jul 10 05:46:31.768144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 05:46:31.786413 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 05:46:31.817132 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Jul 10 05:46:31.849192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 05:46:31.850702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 05:46:31.945300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 05:46:31.949686 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 05:46:31.987538 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 05:46:31.991354 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 05:46:31.994383 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 05:46:31.994402 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 05:46:31.998131 kernel: GPT:9289727 != 19775487 Jul 10 05:46:31.998157 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 05:46:31.998171 kernel: GPT:9289727 != 19775487 Jul 10 05:46:31.998185 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 05:46:31.998198 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:46:32.011540 kernel: AES CTR mode by8 optimization enabled Jul 10 05:46:32.019546 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 05:46:32.042544 kernel: libata version 3.00 loaded. Jul 10 05:46:32.043894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 05:46:32.044256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:46:32.048177 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:46:32.054827 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 05:46:32.055035 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 05:46:32.055048 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 05:46:32.051625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:46:32.057676 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 05:46:32.057880 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 05:46:32.054194 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 05:46:32.058611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 05:46:32.058770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:46:32.067537 kernel: scsi host0: ahci Jul 10 05:46:32.068547 kernel: scsi host1: ahci Jul 10 05:46:32.069731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 10 05:46:32.072533 kernel: scsi host2: ahci Jul 10 05:46:32.074551 kernel: scsi host3: ahci Jul 10 05:46:32.074747 kernel: scsi host4: ahci Jul 10 05:46:32.074902 kernel: scsi host5: ahci Jul 10 05:46:32.076002 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 10 05:46:32.076023 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 10 05:46:32.077768 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 10 05:46:32.077790 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 10 05:46:32.081285 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 10 05:46:32.081313 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 10 05:46:32.086266 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 05:46:32.104432 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 05:46:32.116362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:46:32.125584 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 05:46:32.132499 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 05:46:32.135677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 05:46:32.138430 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 05:46:32.174853 disk-uuid[636]: Primary Header is updated. Jul 10 05:46:32.174853 disk-uuid[636]: Secondary Entries is updated. Jul 10 05:46:32.174853 disk-uuid[636]: Secondary Header is updated. Jul 10 05:46:32.179538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:46:32.183529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:46:32.386987 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 05:46:32.387078 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 05:46:32.387089 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 05:46:32.388588 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 05:46:32.388619 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 05:46:32.389550 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 05:46:32.390551 kernel: ata3.00: applying bridge limits Jul 10 05:46:32.390579 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 05:46:32.391560 kernel: ata3.00: configured for UDMA/100 Jul 10 05:46:32.392557 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 05:46:32.448562 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 05:46:32.448916 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 05:46:32.474650 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 05:46:32.950710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 05:46:32.953453 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 05:46:32.953748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 05:46:32.954066 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 05:46:32.955447 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jul 10 05:46:32.992430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 05:46:33.201544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 05:46:33.201625 disk-uuid[637]: The operation has completed successfully. Jul 10 05:46:33.230191 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 05:46:33.230313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 05:46:33.271218 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 05:46:33.302954 sh[666]: Success Jul 10 05:46:33.322860 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 05:46:33.322917 kernel: device-mapper: uevent: version 1.0.3 Jul 10 05:46:33.323954 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 05:46:33.333640 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 05:46:33.488420 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 05:46:33.492156 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 05:46:33.509687 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 05:46:33.517936 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 05:46:33.517977 kernel: BTRFS: device fsid 511ba16f-9623-4757-a014-7759f3bcc596 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (678) Jul 10 05:46:33.519362 kernel: BTRFS info (device dm-0): first mount of filesystem 511ba16f-9623-4757-a014-7759f3bcc596 Jul 10 05:46:33.519390 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:46:33.520203 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 05:46:33.525551 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 05:46:33.527763 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 05:46:33.529979 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 05:46:33.532584 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 05:46:33.535462 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 05:46:33.573991 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Jul 10 05:46:33.576015 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:46:33.576042 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:46:33.576057 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 05:46:33.583538 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:46:33.585222 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 05:46:33.587616 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 05:46:33.692821 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 10 05:46:33.702106 ignition[752]: Ignition 2.21.0 Jul 10 05:46:33.756091 unknown[752]: fetched base config from "system" Jul 10 05:46:33.702118 ignition[752]: Stage: fetch-offline Jul 10 05:46:33.756100 unknown[752]: fetched user config from "qemu" Jul 10 05:46:33.702324 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 10 05:46:33.702336 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:46:33.702465 ignition[752]: parsed url from cmdline: "" Jul 10 05:46:33.702469 ignition[752]: no config URL provided Jul 10 05:46:33.702474 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 05:46:33.702487 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jul 10 05:46:33.702527 ignition[752]: op(1): [started] loading QEMU firmware config module Jul 10 05:46:33.702533 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 05:46:33.711360 ignition[752]: op(1): [finished] loading QEMU firmware config module Jul 10 05:46:33.751355 ignition[752]: parsing config with SHA512: 36885af281c36b6959692eb65aab49c4d775b279a82f733faa2ce83c20ff0a98e77ada7163a08c4e4256b4a6bdc418405fa4099283076b0b5350a29dc8536432 Jul 10 05:46:33.756786 ignition[752]: fetch-offline: fetch-offline passed Jul 10 05:46:33.756861 ignition[752]: Ignition finished successfully Jul 10 05:46:33.805775 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 05:46:33.808761 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 05:46:33.860057 systemd-networkd[855]: lo: Link UP Jul 10 05:46:33.860072 systemd-networkd[855]: lo: Gained carrier Jul 10 05:46:33.861745 systemd-networkd[855]: Enumeration completed Jul 10 05:46:33.862208 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:46:33.862214 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 05:46:33.862580 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 05:46:33.863364 systemd-networkd[855]: eth0: Link UP Jul 10 05:46:33.863369 systemd-networkd[855]: eth0: Gained carrier Jul 10 05:46:33.863379 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:46:33.869362 systemd[1]: Reached target network.target - Network. Jul 10 05:46:33.874297 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 05:46:33.877319 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 05:46:33.881596 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 05:46:33.985303 ignition[859]: Ignition 2.21.0 Jul 10 05:46:33.985324 ignition[859]: Stage: kargs Jul 10 05:46:33.985936 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jul 10 05:46:33.985950 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:46:33.990206 ignition[859]: kargs: kargs passed Jul 10 05:46:33.990908 ignition[859]: Ignition finished successfully Jul 10 05:46:33.995482 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 05:46:33.998387 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 10 05:46:34.051545 ignition[868]: Ignition 2.21.0 Jul 10 05:46:34.051559 ignition[868]: Stage: disks Jul 10 05:46:34.051696 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jul 10 05:46:34.051707 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:46:34.055985 ignition[868]: disks: disks passed Jul 10 05:46:34.056045 ignition[868]: Ignition finished successfully Jul 10 05:46:34.059936 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 05:46:34.061972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 05:46:34.062056 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 05:46:34.065247 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 05:46:34.067247 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 05:46:34.069063 systemd[1]: Reached target basic.target - Basic System. Jul 10 05:46:34.070975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 05:46:34.108977 systemd-resolved[254]: Detected conflict on linux IN A 10.0.0.135 Jul 10 05:46:34.108990 systemd-resolved[254]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Jul 10 05:46:34.110799 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 05:46:34.118299 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 05:46:34.121737 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 05:46:34.269555 kernel: EXT4-fs (vda9): mounted filesystem f2872d8e-bdd9-4186-89ae-300fdf795a28 r/w with ordered data mode. Quota mode: none. Jul 10 05:46:34.270262 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 05:46:34.271741 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 05:46:34.274211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 05:46:34.276563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 05:46:34.276868 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 05:46:34.276908 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 05:46:34.276930 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 05:46:34.299125 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 05:46:34.300498 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 05:46:34.306393 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Jul 10 05:46:34.306414 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:46:34.306425 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:46:34.307361 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 05:46:34.312242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 05:46:34.340883 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 05:46:34.344651 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Jul 10 05:46:34.348700 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 05:46:34.352646 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 05:46:34.529377 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 05:46:34.532055 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 05:46:34.533228 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 05:46:34.554995 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 05:46:34.556391 kernel: BTRFS info (device vda6): last unmount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:46:34.573717 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 05:46:34.654867 ignition[1003]: INFO : Ignition 2.21.0 Jul 10 05:46:34.654867 ignition[1003]: INFO : Stage: mount Jul 10 05:46:34.657646 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 05:46:34.657646 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:46:34.659799 ignition[1003]: INFO : mount: mount passed Jul 10 05:46:34.659799 ignition[1003]: INFO : Ignition finished successfully Jul 10 05:46:34.660973 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 05:46:34.663501 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 05:46:34.693848 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 05:46:34.724534 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Jul 10 05:46:34.724570 kernel: BTRFS info (device vda6): first mount of filesystem 6f2f9b2c-a9fa-4b0f-b4c7-59337f1e3021 Jul 10 05:46:34.726330 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 05:46:34.726353 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 05:46:34.730568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 05:46:34.787387 ignition[1032]: INFO : Ignition 2.21.0 Jul 10 05:46:34.787387 ignition[1032]: INFO : Stage: files Jul 10 05:46:34.789784 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 05:46:34.789784 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:46:34.793399 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Jul 10 05:46:34.794715 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 05:46:34.794715 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 05:46:34.800222 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 05:46:34.801769 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 05:46:34.801769 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 05:46:34.800882 unknown[1032]: wrote ssh authorized keys file for user: core Jul 10 05:46:34.806039 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 05:46:34.806039 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 10 05:46:34.866125 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 05:46:35.026919 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 05:46:35.034239 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 05:46:35.034239 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 10 05:46:35.339657 systemd-networkd[855]: eth0: Gained IPv6LL Jul 10 05:46:35.546868 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 05:46:35.630737 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 05:46:35.630737 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 05:46:35.634638 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 05:46:35.636460 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 05:46:35.638414 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 05:46:35.640079 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 05:46:35.641967 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 05:46:35.643814 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 05:46:35.645674 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 05:46:35.651067 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 05:46:35.653135 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 05:46:35.655056 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:46:35.660124 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:46:35.660124 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:46:35.665232 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 10 05:46:36.207989 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 05:46:36.683695 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 05:46:36.683695 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 05:46:36.687377 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 05:46:36.694233 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 05:46:36.694233 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 05:46:36.694233 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 10 05:46:36.698417 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 05:46:36.698417 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 05:46:36.698417 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 10 05:46:36.698417 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 05:46:36.721801 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 05:46:36.728641 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 05:46:36.730324 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 05:46:36.730324 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 10 05:46:36.733118 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 05:46:36.733118 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jul 10 05:46:36.733118 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 05:46:36.733118 ignition[1032]: INFO : files: files passed Jul 10 05:46:36.733118 ignition[1032]: INFO : Ignition finished successfully Jul 10 05:46:36.740820 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 05:46:36.743299 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 05:46:36.745036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 05:46:36.769545 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 05:46:36.769683 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 05:46:36.774020 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 05:46:36.778748 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 05:46:36.780410 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 05:46:36.780410 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 05:46:36.785073 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 05:46:36.786453 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 05:46:36.789576 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 05:46:36.844214 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 05:46:36.844333 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 05:46:36.845497 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 05:46:36.848585 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 05:46:36.848847 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 05:46:36.851369 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 05:46:36.890731 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 05:46:36.892440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 05:46:36.924792 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 05:46:36.924947 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 05:46:36.928151 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 05:46:36.929259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 05:46:36.929373 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 05:46:36.932768 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 05:46:36.934944 systemd[1]: Stopped target basic.target - Basic System. Jul 10 05:46:36.935938 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 05:46:36.936250 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 05:46:36.936598 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 05:46:36.937050 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jul 10 05:46:36.937417 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 05:46:36.937897 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 05:46:36.938255 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 05:46:36.938592 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 05:46:36.939040 systemd[1]: Stopped target swap.target - Swaps. Jul 10 05:46:36.939379 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 05:46:36.939485 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 05:46:36.955575 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 05:46:36.955860 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 05:46:36.956139 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 05:46:36.960464 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 05:46:36.961426 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 05:46:36.961552 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 05:46:36.965547 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 05:46:36.965674 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 05:46:36.966705 systemd[1]: Stopped target paths.target - Path Units. Jul 10 05:46:36.966931 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 05:46:36.973579 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 05:46:36.973745 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 05:46:36.976179 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 05:46:36.976553 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 05:46:36.976653 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 05:46:36.977041 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 05:46:36.977125 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 05:46:36.980998 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 05:46:36.981108 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 05:46:36.982737 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 05:46:36.982839 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 05:46:36.986454 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 05:46:36.988273 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 05:46:36.990319 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 05:46:36.990488 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 05:46:36.992546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 05:46:36.992664 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 05:46:36.998107 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 05:46:36.999677 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 10 05:46:37.019762 ignition[1088]: INFO : Ignition 2.21.0 Jul 10 05:46:37.019762 ignition[1088]: INFO : Stage: umount Jul 10 05:46:37.021710 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 05:46:37.021710 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 05:46:37.024057 ignition[1088]: INFO : umount: umount passed Jul 10 05:46:37.024057 ignition[1088]: INFO : Ignition finished successfully Jul 10 05:46:37.021992 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 05:46:37.029699 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 05:46:37.029865 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 05:46:37.032817 systemd[1]: Stopped target network.target - Network. Jul 10 05:46:37.032893 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 05:46:37.032949 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 05:46:37.035471 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 05:46:37.035535 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 05:46:37.036429 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 05:46:37.036493 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 05:46:37.038245 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 05:46:37.038292 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 05:46:37.040148 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 05:46:37.042103 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 05:46:37.049663 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 05:46:37.049804 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 05:46:37.054370 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 05:46:37.054754 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 05:46:37.054802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 05:46:37.060557 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 05:46:37.060817 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 05:46:37.060960 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 05:46:37.064892 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 05:46:37.065446 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 05:46:37.066272 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 05:46:37.066320 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 05:46:37.067682 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 05:46:37.070127 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 05:46:37.070185 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 05:46:37.070549 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 05:46:37.070599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:46:37.075807 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 05:46:37.075863 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 10 05:46:37.077257 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 05:46:37.078612 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 05:46:37.097371 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 05:46:37.097537 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 05:46:37.102169 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 05:46:37.102351 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 05:46:37.105730 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 05:46:37.105782 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 05:46:37.107749 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 05:46:37.107785 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 05:46:37.108827 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 05:46:37.108879 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 05:46:37.109532 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 05:46:37.109599 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 05:46:37.110290 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 05:46:37.110335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 05:46:37.111912 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 05:46:37.118757 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 05:46:37.118827 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 05:46:37.122994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 05:46:37.123047 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 05:46:37.162990 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 05:46:37.163038 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 05:46:37.167981 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 05:46:37.168034 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 05:46:37.170741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 05:46:37.170793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 05:46:37.174586 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 05:46:37.174724 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 05:46:37.214646 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 05:46:37.214785 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 05:46:37.215860 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 05:46:37.217324 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 05:46:37.217379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 05:46:37.218643 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 05:46:37.245755 systemd[1]: Switching root. 
Jul 10 05:46:37.290253 systemd-journald[219]: Journal stopped Jul 10 05:46:38.578983 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Jul 10 05:46:38.579078 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 05:46:38.579098 kernel: SELinux: policy capability open_perms=1 Jul 10 05:46:38.579114 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 05:46:38.579130 kernel: SELinux: policy capability always_check_network=0 Jul 10 05:46:38.579158 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 05:46:38.579183 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 05:46:38.579199 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 05:46:38.579220 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 05:46:38.579236 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 05:46:38.579258 kernel: audit: type=1403 audit(1752126397.728:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 05:46:38.579277 systemd[1]: Successfully loaded SELinux policy in 65.436ms. Jul 10 05:46:38.579311 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.591ms. Jul 10 05:46:38.579330 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 05:46:38.579356 systemd[1]: Detected virtualization kvm. Jul 10 05:46:38.579373 systemd[1]: Detected architecture x86-64. Jul 10 05:46:38.579390 systemd[1]: Detected first boot. Jul 10 05:46:38.579406 systemd[1]: Initializing machine ID from VM UUID. Jul 10 05:46:38.579423 zram_generator::config[1134]: No configuration found. Jul 10 05:46:38.579440 kernel: Guest personality initialized and is inactive Jul 10 05:46:38.579456 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 05:46:38.579472 kernel: Initialized host personality Jul 10 05:46:38.579488 kernel: NET: Registered PF_VSOCK protocol family Jul 10 05:46:38.579534 systemd[1]: Populated /etc with preset unit settings. Jul 10 05:46:38.579556 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 05:46:38.579574 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 05:46:38.579601 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 05:46:38.579619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 05:46:38.579637 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 05:46:38.579655 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 05:46:38.579672 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 05:46:38.579698 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 05:46:38.579715 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 05:46:38.579733 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 05:46:38.579756 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 05:46:38.579773 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 10 05:46:38.579790 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 05:46:38.579808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 05:46:38.579825 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 05:46:38.579842 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 05:46:38.579867 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 05:46:38.579886 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 05:46:38.579902 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 05:46:38.579919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 05:46:38.579936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 05:46:38.579953 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 05:46:38.579970 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 05:46:38.579987 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 05:46:38.580012 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 05:46:38.580030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 05:46:38.580047 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 05:46:38.580064 systemd[1]: Reached target slices.target - Slice Units. Jul 10 05:46:38.580081 systemd[1]: Reached target swap.target - Swaps. Jul 10 05:46:38.580098 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 05:46:38.580116 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 05:46:38.580133 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 05:46:38.580151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 05:46:38.580179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 05:46:38.580197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 05:46:38.580213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 05:46:38.580231 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 05:46:38.580247 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 05:46:38.580265 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 05:46:38.580282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:38.580299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 05:46:38.580316 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 05:46:38.580340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 05:46:38.580358 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 05:46:38.580375 systemd[1]: Reached target machines.target - Containers. 
Jul 10 05:46:38.580392 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 05:46:38.580410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:46:38.580427 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 05:46:38.580444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 05:46:38.580461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:46:38.580486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 05:46:38.580503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:46:38.580539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 05:46:38.580556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:46:38.580573 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 05:46:38.580600 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 05:46:38.580617 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 05:46:38.580635 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 05:46:38.580653 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 05:46:38.580680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:46:38.580698 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 05:46:38.580715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 05:46:38.580732 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 05:46:38.580749 kernel: loop: module loaded Jul 10 05:46:38.580765 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 05:46:38.580782 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 05:46:38.580807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 05:46:38.580824 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 05:46:38.580841 systemd[1]: Stopped verity-setup.service. Jul 10 05:46:38.580871 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:38.580895 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 05:46:38.580932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 05:46:38.580960 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 05:46:38.580985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 05:46:38.581005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 05:46:38.581022 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 05:46:38.581038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 05:46:38.581055 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 10 05:46:38.581080 kernel: fuse: init (API version 7.41) Jul 10 05:46:38.581096 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 05:46:38.581113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:46:38.581128 kernel: ACPI: bus type drm_connector registered Jul 10 05:46:38.581176 systemd-journald[1205]: Collecting audit messages is disabled. Jul 10 05:46:38.581208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 05:46:38.581226 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 05:46:38.581252 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 05:46:38.581268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 05:46:38.581285 systemd-journald[1205]: Journal started Jul 10 05:46:38.581314 systemd-journald[1205]: Runtime Journal (/run/log/journal/c8387c05fee54ae9873f7bb487dd2969) is 6M, max 48.5M, 42.4M free. Jul 10 05:46:38.303030 systemd[1]: Queued start job for default target multi-user.target. Jul 10 05:46:38.324610 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 05:46:38.325100 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 05:46:38.583592 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 05:46:38.585417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:46:38.585752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:46:38.587325 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 05:46:38.587648 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 05:46:38.589148 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:46:38.589447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 05:46:38.590987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 05:46:38.592501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 05:46:38.594183 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 05:46:38.595892 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 05:46:38.615578 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 05:46:38.618502 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 05:46:38.622637 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 05:46:38.623874 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 05:46:38.623966 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 05:46:38.626434 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 05:46:38.633648 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 05:46:38.635096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 05:46:38.636914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 05:46:38.640548 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 10 05:46:38.642680 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 05:46:38.645663 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 05:46:38.646776 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 05:46:38.648046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 05:46:38.652697 systemd-journald[1205]: Time spent on flushing to /var/log/journal/c8387c05fee54ae9873f7bb487dd2969 is 15.582ms for 1068 entries. Jul 10 05:46:38.652697 systemd-journald[1205]: System Journal (/var/log/journal/c8387c05fee54ae9873f7bb487dd2969) is 8M, max 195.6M, 187.6M free. Jul 10 05:46:38.920750 systemd-journald[1205]: Received client request to flush runtime journal. Jul 10 05:46:38.920800 kernel: loop0: detected capacity change from 0 to 114000 Jul 10 05:46:38.920826 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 05:46:38.920842 kernel: loop1: detected capacity change from 0 to 224512 Jul 10 05:46:38.920861 kernel: loop2: detected capacity change from 0 to 146488 Jul 10 05:46:38.920879 kernel: loop3: detected capacity change from 0 to 114000 Jul 10 05:46:38.920898 kernel: loop4: detected capacity change from 0 to 224512 Jul 10 05:46:38.651599 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 05:46:38.658052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 05:46:38.660909 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 05:46:38.662352 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 05:46:38.668023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 05:46:38.723275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:46:38.725092 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 10 05:46:38.725104 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 10 05:46:38.729993 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 05:46:38.733307 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 05:46:38.897760 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 05:46:38.901995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 05:46:38.905791 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 05:46:38.907419 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 05:46:38.912891 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 05:46:38.929295 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 05:46:38.929558 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jul 10 05:46:38.929572 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jul 10 05:46:38.934449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 10 05:46:38.940653 kernel: loop5: detected capacity change from 0 to 146488 Jul 10 05:46:38.957796 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 05:46:38.964401 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 05:46:38.965042 (sd-merge)[1267]: Merged extensions into '/usr'. Jul 10 05:46:38.969662 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 05:46:38.969830 systemd[1]: Reloading... Jul 10 05:46:39.071556 zram_generator::config[1302]: No configuration found. Jul 10 05:46:39.250773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:46:39.323156 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 05:46:39.353925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 05:46:39.354338 systemd[1]: Reloading finished in 384 ms. Jul 10 05:46:39.378829 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 05:46:39.380705 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 05:46:39.410006 systemd[1]: Starting ensure-sysext.service... Jul 10 05:46:39.412183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 05:46:39.422302 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Jul 10 05:46:39.422319 systemd[1]: Reloading... Jul 10 05:46:39.438992 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 05:46:39.439041 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 05:46:39.439418 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 05:46:39.439796 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 05:46:39.440964 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 05:46:39.441329 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 10 05:46:39.441425 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 10 05:46:39.449782 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 05:46:39.449801 systemd-tmpfiles[1342]: Skipping /boot Jul 10 05:46:39.474889 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 05:46:39.474905 systemd-tmpfiles[1342]: Skipping /boot Jul 10 05:46:39.503535 zram_generator::config[1369]: No configuration found. Jul 10 05:46:39.617159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:46:39.733140 systemd[1]: Reloading finished in 310 ms. Jul 10 05:46:39.761759 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 05:46:39.790535 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 10 05:46:39.802480 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 05:46:39.805386 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 05:46:39.808142 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 05:46:39.819831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 05:46:39.822665 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 05:46:39.826779 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 05:46:39.830928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:39.831104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:46:39.833422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:46:39.839817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:46:39.848731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:46:39.866378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 05:46:39.866668 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:46:39.866806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:39.870222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:46:39.870803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 05:46:39.873320 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 05:46:39.876422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:46:39.881863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:46:39.886014 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:46:39.886324 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 05:46:39.896704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:39.896958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:46:39.898914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:46:39.902252 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Jul 10 05:46:39.902857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:46:39.918023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:46:39.928199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 05:46:39.928312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:46:39.929700 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 05:46:39.933349 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 05:46:39.934532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:39.940970 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 05:46:39.943532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:46:39.943795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 05:46:39.945793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:46:39.946018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:46:39.947998 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:46:39.948292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 05:46:39.959878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:39.960236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 05:46:39.961878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 05:46:39.964790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 05:46:39.968815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 05:46:39.972531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 05:46:39.973851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 05:46:39.974004 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 05:46:39.974181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 05:46:39.975847 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 05:46:39.978490 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 05:46:39.990754 systemd[1]: Finished ensure-sysext.service. Jul 10 05:46:39.995072 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 05:46:39.995284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 05:46:39.996872 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 05:46:39.997776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 05:46:39.999261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 05:46:40.000699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 05:46:40.002736 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jul 10 05:46:40.017828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 05:46:40.018054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 05:46:40.027834 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 05:46:40.037387 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 05:46:40.086060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 05:46:40.086161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 05:46:40.090746 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 05:46:40.094153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 05:46:40.107506 augenrules[1496]: No rules Jul 10 05:46:40.111387 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 05:46:40.113609 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 05:46:40.113905 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 05:46:40.172560 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 05:46:40.177710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 05:46:40.180980 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 05:46:40.189538 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 05:46:40.196543 kernel: ACPI: button: Power Button [PWRF] Jul 10 05:46:40.208457 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 05:46:40.231747 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 05:46:40.232072 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 05:46:40.232236 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 05:46:40.273976 systemd-resolved[1411]: Positive Trust Anchors: Jul 10 05:46:40.274000 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 05:46:40.274041 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 05:46:40.286734 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 05:46:40.286970 systemd-resolved[1411]: Defaulting to hostname 'linux'. Jul 10 05:46:40.289109 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 05:46:40.290970 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 05:46:40.292177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 10 05:46:40.292361 systemd-networkd[1491]: lo: Link UP Jul 10 05:46:40.293132 systemd-networkd[1491]: lo: Gained carrier Jul 10 05:46:40.294596 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 05:46:40.295839 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 05:46:40.297088 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 05:46:40.297923 systemd-networkd[1491]: Enumeration completed Jul 10 05:46:40.298324 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 05:46:40.299370 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:46:40.299640 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 05:46:40.300799 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 05:46:40.300914 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 05:46:40.302225 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 05:46:40.303174 systemd-networkd[1491]: eth0: Link UP Jul 10 05:46:40.303446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 05:46:40.303570 systemd[1]: Reached target paths.target - Path Units. Jul 10 05:46:40.304265 systemd-networkd[1491]: eth0: Gained carrier Jul 10 05:46:40.304610 systemd[1]: Reached target timers.target - Timer Units. Jul 10 05:46:40.305745 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 05:46:40.306453 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 05:46:40.310909 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 05:46:40.314632 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 05:46:40.316148 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 05:46:40.317458 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 05:46:40.317599 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 05:46:40.318353 systemd-timesyncd[1493]: Network configuration changed, trying to establish connection. Jul 10 05:46:41.096287 systemd-resolved[1411]: Clock change detected. Flushing caches. Jul 10 05:46:41.097440 systemd-timesyncd[1493]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 05:46:41.097486 systemd-timesyncd[1493]: Initial clock synchronization to Thu 2025-07-10 05:46:41.096238 UTC. Jul 10 05:46:41.100483 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 05:46:41.102519 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 05:46:41.104694 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 05:46:41.106026 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 05:46:41.138024 systemd[1]: Reached target network.target - Network. Jul 10 05:46:41.140555 systemd[1]: Reached target sockets.target - Socket Units. 
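Note on the timestamp jump above: consecutive journal entries skip from 05:46:40.318 to 05:46:41.096 because systemd-timesyncd stepped the system clock after its first NTP exchange with 10.0.0.1:123, which is also why systemd-resolved reports "Clock change detected. Flushing caches." A minimal Go sketch, using only the two timestamps quoted from the journal, that computes the apparent step:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the journal above: the last entry written
    	// before systemd-timesyncd stepped the clock, and the first entry after.
    	// The inputs are literals, so the parse errors are ignored here.
    	const layout = "2006-01-02 15:04:05.000000"
    	before, _ := time.Parse(layout, "2025-07-10 05:46:40.318353")
    	after, _ := time.Parse(layout, "2025-07-10 05:46:41.096287")

    	// The difference approximates how far the local clock was stepped
    	// forward at the initial synchronization.
    	fmt.Printf("apparent clock step: %v\n", after.Sub(before)) // ~777.934ms
    }

Later entries in this log are therefore shifted by roughly three quarters of a second relative to the pre-synchronization ones.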
Jul 10 05:46:41.141688 systemd[1]: Reached target basic.target - Basic System. Jul 10 05:46:41.142776 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 05:46:41.142914 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 05:46:41.144917 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 05:46:41.147109 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 05:46:41.150686 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 05:46:41.152940 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 05:46:41.155087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 05:46:41.156074 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 05:46:41.160709 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 05:46:41.166015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 05:46:41.168934 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 05:46:41.170232 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 05:46:41.172541 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 05:46:41.177304 jq[1537]: false Jul 10 05:46:41.189179 extend-filesystems[1538]: Found /dev/vda6 Jul 10 05:46:41.191737 extend-filesystems[1538]: Found /dev/vda9 Jul 10 05:46:41.193667 extend-filesystems[1538]: Checking size of /dev/vda9 Jul 10 05:46:41.202407 extend-filesystems[1538]: Resized partition /dev/vda9 Jul 10 05:46:41.204203 extend-filesystems[1553]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 05:46:41.209396 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 05:46:41.227873 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 05:46:41.235376 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jul 10 05:46:41.234943 oslogin_cache_refresh[1539]: Refreshing passwd entry cache Jul 10 05:46:41.268437 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 05:46:41.298286 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 05:46:41.301758 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 05:46:41.305736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 05:46:41.326794 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting Jul 10 05:46:41.326794 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 05:46:41.326794 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache Jul 10 05:46:41.306508 oslogin_cache_refresh[1539]: Failure getting users, quitting Jul 10 05:46:41.308151 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 05:46:41.306534 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 10 05:46:41.308979 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 05:46:41.326713 oslogin_cache_refresh[1539]: Refreshing group entry cache Jul 10 05:46:41.310100 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 05:46:41.315903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 05:46:41.327993 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 05:46:41.327993 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 05:46:41.327993 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 05:46:41.332245 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Jul 10 05:46:41.333979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 05:46:41.336101 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 05:46:41.336376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 05:46:41.336500 jq[1568]: true Jul 10 05:46:41.336929 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 05:46:41.337194 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 05:46:41.338074 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting Jul 10 05:46:41.338067 oslogin_cache_refresh[1539]: Failure getting groups, quitting Jul 10 05:46:41.338138 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 05:46:41.338082 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 05:46:41.354812 kernel: kvm_amd: TSC scaling supported Jul 10 05:46:41.354863 kernel: kvm_amd: Nested Virtualization enabled Jul 10 05:46:41.354879 kernel: kvm_amd: Nested Paging enabled Jul 10 05:46:41.354895 kernel: kvm_amd: LBR virtualization supported Jul 10 05:46:41.356456 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 10 05:46:41.356486 kernel: kvm_amd: Virtual GIF supported Jul 10 05:46:41.358753 update_engine[1567]: I20250710 05:46:41.358615 1567 main.cc:92] Flatcar Update Engine starting Jul 10 05:46:41.395743 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 05:46:41.396054 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 05:46:41.399396 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 05:46:41.400761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 05:46:41.403396 kernel: EDAC MC: Ver: 3.0.0 Jul 10 05:46:41.403814 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 05:46:41.404117 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 05:46:41.405982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
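The extend-filesystems unit above grows the root filesystem online: resize2fs takes /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each. A small sketch of the size arithmetic, using only the block counts reported in the log:

    package main

    import "fmt"

    func main() {
    	// Block counts reported by resize2fs and the ext4 driver above;
    	// the filesystem uses 4 KiB blocks ("(4k) blocks" in the log).
    	const blockSize = 4096
    	const oldBlocks, newBlocks = 553472, 1864699

    	gib := func(blocks int64) float64 { return float64(blocks) * blockSize / (1 << 30) }
    	fmt.Printf("before: %.2f GiB, after: %.2f GiB\n", gib(oldBlocks), gib(newBlocks))
    	// before: 2.11 GiB, after: 7.11 GiB
    }

In other words, the root filesystem grows from roughly 2.1 GiB to roughly 7.1 GiB without being unmounted.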
Jul 10 05:46:41.440074 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 05:46:41.447401 jq[1579]: true Jul 10 05:46:41.451703 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 05:46:41.454751 systemd-logind[1554]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 05:46:41.454782 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 05:46:41.455985 systemd-logind[1554]: New seat seat0. Jul 10 05:46:41.462430 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 05:46:41.474100 tar[1578]: linux-amd64/LICENSE Jul 10 05:46:41.474565 tar[1578]: linux-amd64/helm Jul 10 05:46:41.488491 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 05:46:41.493533 dbus-daemon[1535]: [system] SELinux support is enabled Jul 10 05:46:41.495037 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 05:46:41.498934 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 05:46:41.499897 update_engine[1567]: I20250710 05:46:41.499327 1567 update_check_scheduler.cc:74] Next update check in 6m2s Jul 10 05:46:41.516815 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 05:46:41.517038 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 05:46:41.518260 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 05:46:41.518292 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 05:46:41.519708 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 05:46:41.519727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 05:46:41.521419 systemd[1]: Started update-engine.service - Update Engine. Jul 10 05:46:41.522903 bash[1615]: Updated "/home/core/.ssh/authorized_keys" Jul 10 05:46:41.524740 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 05:46:41.527084 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 05:46:41.530264 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 05:46:41.559736 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 05:46:41.560039 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 05:46:41.573301 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 05:46:41.633773 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 05:46:41.639621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 05:46:41.642807 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 05:46:41.647297 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 05:46:41.649099 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 05:46:41.885014 containerd[1580]: time="2025-07-10T05:46:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 05:46:41.887987 containerd[1580]: time="2025-07-10T05:46:41.887927627Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 10 05:46:41.900654 containerd[1580]: time="2025-07-10T05:46:41.900565487Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.043µs" Jul 10 05:46:41.900654 containerd[1580]: time="2025-07-10T05:46:41.900618767Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 05:46:41.900654 containerd[1580]: time="2025-07-10T05:46:41.900640198Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 05:46:41.900854 containerd[1580]: time="2025-07-10T05:46:41.900827559Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 05:46:41.900854 containerd[1580]: time="2025-07-10T05:46:41.900852225Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 05:46:41.900920 containerd[1580]: time="2025-07-10T05:46:41.900881931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 05:46:41.900998 containerd[1580]: time="2025-07-10T05:46:41.900971539Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 05:46:41.900998 containerd[1580]: time="2025-07-10T05:46:41.900990935Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901300 containerd[1580]: time="2025-07-10T05:46:41.901253758Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901300 containerd[1580]: time="2025-07-10T05:46:41.901272503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901300 containerd[1580]: time="2025-07-10T05:46:41.901282993Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901300 containerd[1580]: time="2025-07-10T05:46:41.901291479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901645 containerd[1580]: time="2025-07-10T05:46:41.901425881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901744 containerd[1580]: time="2025-07-10T05:46:41.901715184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901774 containerd[1580]: time="2025-07-10T05:46:41.901762472Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 05:46:41.901796 containerd[1580]: time="2025-07-10T05:46:41.901772711Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 05:46:41.901834 containerd[1580]: time="2025-07-10T05:46:41.901811264Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 05:46:41.902633 containerd[1580]: time="2025-07-10T05:46:41.902584003Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 05:46:41.902767 containerd[1580]: time="2025-07-10T05:46:41.902746107Z" level=info msg="metadata content store policy set" policy=shared Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910820269Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910909917Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910930796Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910945163Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910959470Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910971242Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 05:46:41.910968 containerd[1580]: time="2025-07-10T05:46:41.910984747Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 05:46:41.911212 containerd[1580]: time="2025-07-10T05:46:41.911000697Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 05:46:41.911212 containerd[1580]: time="2025-07-10T05:46:41.911015074Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 05:46:41.911212 containerd[1580]: time="2025-07-10T05:46:41.911025784Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 05:46:41.911212 containerd[1580]: time="2025-07-10T05:46:41.911036915Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 05:46:41.911212 containerd[1580]: time="2025-07-10T05:46:41.911079375Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 05:46:41.911306 containerd[1580]: time="2025-07-10T05:46:41.911236229Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 05:46:41.911306 containerd[1580]: time="2025-07-10T05:46:41.911264903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 05:46:41.911306 containerd[1580]: time="2025-07-10T05:46:41.911278739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 05:46:41.911306 containerd[1580]: time="2025-07-10T05:46:41.911289449Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 05:46:41.911306 containerd[1580]: time="2025-07-10T05:46:41.911300239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 05:46:41.911444 containerd[1580]: time="2025-07-10T05:46:41.911311109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 05:46:41.911444 containerd[1580]: time="2025-07-10T05:46:41.911323272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 05:46:41.911444 containerd[1580]: time="2025-07-10T05:46:41.911339543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 05:46:41.911444 containerd[1580]: time="2025-07-10T05:46:41.911390518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 05:46:41.911444 containerd[1580]: time="2025-07-10T05:46:41.911405176Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 05:46:41.911444 containerd[1580]: time="2025-07-10T05:46:41.911415395Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 05:46:41.911662 containerd[1580]: time="2025-07-10T05:46:41.911532404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 05:46:41.911662 containerd[1580]: time="2025-07-10T05:46:41.911634636Z" level=info msg="Start snapshots syncer" Jul 10 05:46:41.911711 containerd[1580]: time="2025-07-10T05:46:41.911666546Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 05:46:41.911995 containerd[1580]: time="2025-07-10T05:46:41.911928026Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 05:46:41.912157 containerd[1580]: time="2025-07-10T05:46:41.912001494Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 05:46:41.912297 containerd[1580]: time="2025-07-10T05:46:41.912180560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 05:46:41.912401 containerd[1580]: time="2025-07-10T05:46:41.912351260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 05:46:41.912425 containerd[1580]: time="2025-07-10T05:46:41.912402456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 05:46:41.912425 containerd[1580]: time="2025-07-10T05:46:41.912414509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 05:46:41.912425 containerd[1580]: time="2025-07-10T05:46:41.912424217Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 05:46:41.912491 containerd[1580]: time="2025-07-10T05:46:41.912445266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 05:46:41.912491 containerd[1580]: time="2025-07-10T05:46:41.912456678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 05:46:41.912491 containerd[1580]: time="2025-07-10T05:46:41.912467258Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 05:46:41.912546 containerd[1580]: time="2025-07-10T05:46:41.912497204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 05:46:41.912546 containerd[1580]: 
time="2025-07-10T05:46:41.912508755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 05:46:41.912546 containerd[1580]: time="2025-07-10T05:46:41.912520497Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 05:46:41.912600 containerd[1580]: time="2025-07-10T05:46:41.912583946Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 05:46:41.912626 containerd[1580]: time="2025-07-10T05:46:41.912601800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 05:46:41.912626 containerd[1580]: time="2025-07-10T05:46:41.912611207Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 05:46:41.912626 containerd[1580]: time="2025-07-10T05:46:41.912621136Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 05:46:41.912684 containerd[1580]: time="2025-07-10T05:46:41.912629011Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 05:46:41.912684 containerd[1580]: time="2025-07-10T05:46:41.912640563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 05:46:41.912684 containerd[1580]: time="2025-07-10T05:46:41.912651253Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 05:46:41.912684 containerd[1580]: time="2025-07-10T05:46:41.912684996Z" level=info msg="runtime interface created" Jul 10 05:46:41.912760 containerd[1580]: time="2025-07-10T05:46:41.912691117Z" level=info msg="created NRI interface" Jul 10 05:46:41.912760 containerd[1580]: time="2025-07-10T05:46:41.912699894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 05:46:41.912760 containerd[1580]: time="2025-07-10T05:46:41.912711405Z" level=info msg="Connect containerd service" Jul 10 05:46:41.912821 containerd[1580]: time="2025-07-10T05:46:41.912767370Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 05:46:41.913913 containerd[1580]: time="2025-07-10T05:46:41.913875989Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 05:46:41.994922 tar[1578]: linux-amd64/README.md Jul 10 05:46:42.025557 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 05:46:42.179881 containerd[1580]: time="2025-07-10T05:46:42.179746962Z" level=info msg="Start subscribing containerd event" Jul 10 05:46:42.179975 containerd[1580]: time="2025-07-10T05:46:42.179880132Z" level=info msg="Start recovering state" Jul 10 05:46:42.180140 containerd[1580]: time="2025-07-10T05:46:42.180098472Z" level=info msg="Start event monitor" Jul 10 05:46:42.180140 containerd[1580]: time="2025-07-10T05:46:42.180106056Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180142935Z" level=info msg="Start cni network conf syncer for default" Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180187809Z" level=info msg="Start streaming server" Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180204811Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180207476Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180228075Z" level=info msg="runtime interface starting up..." Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180237572Z" level=info msg="starting plugins..." Jul 10 05:46:42.180315 containerd[1580]: time="2025-07-10T05:46:42.180266787Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 05:46:42.180560 containerd[1580]: time="2025-07-10T05:46:42.180522978Z" level=info msg="containerd successfully booted in 0.296200s" Jul 10 05:46:42.180701 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 05:46:42.836584 systemd-networkd[1491]: eth0: Gained IPv6LL Jul 10 05:46:42.840376 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 05:46:42.842573 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 05:46:42.846057 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 05:46:42.848641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:46:42.850862 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 05:46:42.885068 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 05:46:42.885414 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 05:46:42.887185 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 05:46:42.892794 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 05:46:44.344082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:46:44.345996 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 05:46:44.347478 systemd[1]: Startup finished in 2.992s (kernel) + 7.085s (initrd) + 5.905s (userspace) = 15.983s. Jul 10 05:46:44.357710 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 05:46:44.982848 kubelet[1679]: E0710 05:46:44.982755 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 05:46:44.986733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 05:46:44.986948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 05:46:44.987402 systemd[1]: kubelet.service: Consumed 1.934s CPU time, 265.1M memory peak. Jul 10 05:46:45.586080 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
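The kubelet failure logged above is expected on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is typically written later (for example by kubeadm), so the unit exits with status 1 and systemd schedules a restart, whose counter appears further down. A minimal Go sketch of the existence check the error message describes; the path is quoted from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path quoted in the kubelet error above; it is normally created by
    	// later provisioning (e.g. kubeadm), so on a freshly booted node the
    	// check fails exactly as the journal shows.
    	const path = "/var/lib/kubelet/config.yaml"
    	if _, err := os.Stat(path); err != nil {
    		fmt.Printf("kubelet config not ready: %v\n", err)
    		return
    	}
    	fmt.Println("kubelet config present:", path)
    }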
Jul 10 05:46:45.587333 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:53422.service - OpenSSH per-connection server daemon (10.0.0.1:53422). Jul 10 05:46:45.693056 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 53422 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:45.694947 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:45.702088 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 05:46:45.703257 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 05:46:45.710772 systemd-logind[1554]: New session 1 of user core. Jul 10 05:46:45.729009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 05:46:45.732403 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 05:46:45.750933 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 05:46:45.753477 systemd-logind[1554]: New session c1 of user core. Jul 10 05:46:45.903600 systemd[1697]: Queued start job for default target default.target. Jul 10 05:46:45.918664 systemd[1697]: Created slice app.slice - User Application Slice. Jul 10 05:46:45.918689 systemd[1697]: Reached target paths.target - Paths. Jul 10 05:46:45.918730 systemd[1697]: Reached target timers.target - Timers. Jul 10 05:46:45.920336 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 05:46:45.934621 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 05:46:45.934744 systemd[1697]: Reached target sockets.target - Sockets. Jul 10 05:46:45.934785 systemd[1697]: Reached target basic.target - Basic System. Jul 10 05:46:45.934826 systemd[1697]: Reached target default.target - Main User Target. Jul 10 05:46:45.934856 systemd[1697]: Startup finished in 174ms. Jul 10 05:46:45.935130 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 05:46:45.936746 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 05:46:46.004806 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:53436.service - OpenSSH per-connection server daemon (10.0.0.1:53436). Jul 10 05:46:46.069155 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53436 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:46.071231 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:46.076532 systemd-logind[1554]: New session 2 of user core. Jul 10 05:46:46.086487 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 05:46:46.141476 sshd[1711]: Connection closed by 10.0.0.1 port 53436 Jul 10 05:46:46.141790 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 10 05:46:46.151669 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:53436.service: Deactivated successfully. Jul 10 05:46:46.154049 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 05:46:46.155069 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. Jul 10 05:46:46.158706 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452). Jul 10 05:46:46.159535 systemd-logind[1554]: Removed session 2. 
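Each "Accepted publickey ... SHA256:eUYN..." entry above identifies the client key by its OpenSSH-style fingerprint: the unpadded base64 encoding of a SHA-256 digest over the binary public-key blob. A sketch of that computation using a freshly generated throwaway Ed25519 key (the RSA key behind the fingerprint in this log is, of course, not reproduced here):

    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"crypto/sha256"
    	"encoding/base64"
    	"encoding/binary"
    	"fmt"
    )

    // sshString encodes a byte string with the uint32 length prefix used by
    // the SSH wire format.
    func sshString(b []byte) []byte {
    	out := make([]byte, 4+len(b))
    	binary.BigEndian.PutUint32(out, uint32(len(b)))
    	copy(out[4:], b)
    	return out
    }

    func main() {
    	// Throwaway key purely for illustration.
    	pub, _, _ := ed25519.GenerateKey(rand.Reader)

    	// An OpenSSH public-key blob is the algorithm name followed by the
    	// key material, each as a length-prefixed string.
    	blob := append(sshString([]byte("ssh-ed25519")), sshString(pub)...)

    	// sshd's "SHA256:..." fingerprint is the unpadded base64 of SHA-256
    	// over that blob.
    	sum := sha256.Sum256(blob)
    	fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
    }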
Jul 10 05:46:46.230954 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:46.232694 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:46.237214 systemd-logind[1554]: New session 3 of user core. Jul 10 05:46:46.246495 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 05:46:46.297386 sshd[1720]: Connection closed by 10.0.0.1 port 53452 Jul 10 05:46:46.297862 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Jul 10 05:46:46.319731 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:53452.service: Deactivated successfully. Jul 10 05:46:46.321820 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 05:46:46.322821 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Jul 10 05:46:46.325763 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:53466.service - OpenSSH per-connection server daemon (10.0.0.1:53466). Jul 10 05:46:46.326531 systemd-logind[1554]: Removed session 3. Jul 10 05:46:46.399143 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 53466 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:46.401256 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:46.406470 systemd-logind[1554]: New session 4 of user core. Jul 10 05:46:46.416604 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 05:46:46.469974 sshd[1729]: Connection closed by 10.0.0.1 port 53466 Jul 10 05:46:46.470413 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jul 10 05:46:46.478970 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:53466.service: Deactivated successfully. Jul 10 05:46:46.480790 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 05:46:46.481492 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Jul 10 05:46:46.484246 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:53478.service - OpenSSH per-connection server daemon (10.0.0.1:53478). Jul 10 05:46:46.484827 systemd-logind[1554]: Removed session 4. Jul 10 05:46:46.544146 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 53478 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:46.545658 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:46.550277 systemd-logind[1554]: New session 5 of user core. Jul 10 05:46:46.560490 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 05:46:46.620970 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 05:46:46.621327 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:46:46.647778 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 10 05:46:46.649993 sshd[1738]: Connection closed by 10.0.0.1 port 53478 Jul 10 05:46:46.650499 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jul 10 05:46:46.664158 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:53478.service: Deactivated successfully. Jul 10 05:46:46.666224 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 05:46:46.666971 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Jul 10 05:46:46.669966 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:53482.service - OpenSSH per-connection server daemon (10.0.0.1:53482). 
Jul 10 05:46:46.670554 systemd-logind[1554]: Removed session 5. Jul 10 05:46:46.732158 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 53482 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:46.733493 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:46.738055 systemd-logind[1554]: New session 6 of user core. Jul 10 05:46:46.753493 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 05:46:46.807809 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 05:46:46.808157 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:46:46.815813 sudo[1750]: pam_unix(sudo:session): session closed for user root Jul 10 05:46:46.822736 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 05:46:46.823057 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:46:46.833637 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 05:46:46.880243 augenrules[1772]: No rules Jul 10 05:46:46.882503 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 05:46:46.882860 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 05:46:46.884482 sudo[1749]: pam_unix(sudo:session): session closed for user root Jul 10 05:46:46.886555 sshd[1748]: Connection closed by 10.0.0.1 port 53482 Jul 10 05:46:46.887137 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jul 10 05:46:46.899629 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:53482.service: Deactivated successfully. Jul 10 05:46:46.902038 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 05:46:46.902954 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. Jul 10 05:46:46.906576 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:53484.service - OpenSSH per-connection server daemon (10.0.0.1:53484). Jul 10 05:46:46.907260 systemd-logind[1554]: Removed session 6. Jul 10 05:46:46.966723 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 53484 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:46:46.968088 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:46:46.972660 systemd-logind[1554]: New session 7 of user core. Jul 10 05:46:46.983595 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 05:46:47.036721 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 05:46:47.037039 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 05:46:47.788782 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 10 05:46:47.803766 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 05:46:48.298538 dockerd[1807]: time="2025-07-10T05:46:48.298458791Z" level=info msg="Starting up" Jul 10 05:46:48.299501 dockerd[1807]: time="2025-07-10T05:46:48.299453426Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 05:46:48.318187 dockerd[1807]: time="2025-07-10T05:46:48.318141103Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 10 05:46:48.871249 dockerd[1807]: time="2025-07-10T05:46:48.871155953Z" level=info msg="Loading containers: start." Jul 10 05:46:48.880393 kernel: Initializing XFRM netlink socket Jul 10 05:46:49.193447 systemd-networkd[1491]: docker0: Link UP Jul 10 05:46:49.200136 dockerd[1807]: time="2025-07-10T05:46:49.200080754Z" level=info msg="Loading containers: done." Jul 10 05:46:49.221261 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1800420204-merged.mount: Deactivated successfully. Jul 10 05:46:49.223460 dockerd[1807]: time="2025-07-10T05:46:49.223410255Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 05:46:49.223554 dockerd[1807]: time="2025-07-10T05:46:49.223533997Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 10 05:46:49.223683 dockerd[1807]: time="2025-07-10T05:46:49.223663220Z" level=info msg="Initializing buildkit" Jul 10 05:46:49.256949 dockerd[1807]: time="2025-07-10T05:46:49.256888228Z" level=info msg="Completed buildkit initialization" Jul 10 05:46:49.263331 dockerd[1807]: time="2025-07-10T05:46:49.263289373Z" level=info msg="Daemon has completed initialization" Jul 10 05:46:49.263499 dockerd[1807]: time="2025-07-10T05:46:49.263417754Z" level=info msg="API listen on /run/docker.sock" Jul 10 05:46:49.263591 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 05:46:50.460528 containerd[1580]: time="2025-07-10T05:46:50.460479652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 05:46:51.068416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735482517.mount: Deactivated successfully. 
Jul 10 05:46:52.260714 containerd[1580]: time="2025-07-10T05:46:52.260627081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:52.261201 containerd[1580]: time="2025-07-10T05:46:52.261095920Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 10 05:46:52.262684 containerd[1580]: time="2025-07-10T05:46:52.262615921Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:52.265513 containerd[1580]: time="2025-07-10T05:46:52.265484009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:52.266565 containerd[1580]: time="2025-07-10T05:46:52.266518179Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.805987081s" Jul 10 05:46:52.266632 containerd[1580]: time="2025-07-10T05:46:52.266567291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 10 05:46:52.267473 containerd[1580]: time="2025-07-10T05:46:52.267445418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 05:46:53.558652 containerd[1580]: time="2025-07-10T05:46:53.558577521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:53.559345 containerd[1580]: time="2025-07-10T05:46:53.559289527Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 10 05:46:53.560577 containerd[1580]: time="2025-07-10T05:46:53.560529672Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:53.568307 containerd[1580]: time="2025-07-10T05:46:53.568271691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:53.569512 containerd[1580]: time="2025-07-10T05:46:53.569474587Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.301991489s" Jul 10 05:46:53.569562 containerd[1580]: time="2025-07-10T05:46:53.569516295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 10 05:46:53.570074 
containerd[1580]: time="2025-07-10T05:46:53.570020411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 05:46:55.048122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 05:46:55.049729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:46:55.469695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:46:55.484660 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 05:46:55.543890 containerd[1580]: time="2025-07-10T05:46:55.543824208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:55.544582 containerd[1580]: time="2025-07-10T05:46:55.544542555Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 10 05:46:55.545766 containerd[1580]: time="2025-07-10T05:46:55.545738097Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:55.548644 containerd[1580]: time="2025-07-10T05:46:55.548614752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:55.549462 containerd[1580]: time="2025-07-10T05:46:55.549437595Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.979365867s" Jul 10 05:46:55.549462 containerd[1580]: time="2025-07-10T05:46:55.549468422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 10 05:46:55.550159 containerd[1580]: time="2025-07-10T05:46:55.550130654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 05:46:55.632050 kubelet[2098]: E0710 05:46:55.631975 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 05:46:55.638890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 05:46:55.639093 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 05:46:55.639486 systemd[1]: kubelet.service: Consumed 409ms CPU time, 110.3M memory peak. Jul 10 05:46:56.671421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804542340.mount: Deactivated successfully. 
Jul 10 05:46:57.319063 containerd[1580]: time="2025-07-10T05:46:57.318966613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:57.320516 containerd[1580]: time="2025-07-10T05:46:57.319983220Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 10 05:46:57.322572 containerd[1580]: time="2025-07-10T05:46:57.322516731Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:57.324379 containerd[1580]: time="2025-07-10T05:46:57.324307189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:57.324734 containerd[1580]: time="2025-07-10T05:46:57.324692611Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.774533684s" Jul 10 05:46:57.324734 containerd[1580]: time="2025-07-10T05:46:57.324723560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 10 05:46:57.325340 containerd[1580]: time="2025-07-10T05:46:57.325296133Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 05:46:57.868539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797029099.mount: Deactivated successfully. 
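The pull statistics above allow a rough throughput estimate: containerd reports the kube-proxy image as 30894382 bytes fetched in 1.774533684s. A small Go sketch of that arithmetic; this is an end-to-end figure that includes registry latency and unpacking, not raw network bandwidth:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures copied from the kube-proxy pull above: reported image size
    	// in bytes and the wall-clock duration containerd logged for the pull.
    	const sizeBytes = 30894382.0
    	dur, _ := time.ParseDuration("1.774533684s") // literal input, error ignored

    	mibPerSec := sizeBytes / dur.Seconds() / (1 << 20)
    	fmt.Printf("effective pull rate: %.1f MiB/s\n", mibPerSec) // ~16.6 MiB/s
    }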
Jul 10 05:46:58.670112 containerd[1580]: time="2025-07-10T05:46:58.670051930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:58.670740 containerd[1580]: time="2025-07-10T05:46:58.670712670Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 10 05:46:58.671936 containerd[1580]: time="2025-07-10T05:46:58.671904665Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:58.674566 containerd[1580]: time="2025-07-10T05:46:58.674537432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:46:58.675381 containerd[1580]: time="2025-07-10T05:46:58.675343063Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.35000977s" Jul 10 05:46:58.675434 containerd[1580]: time="2025-07-10T05:46:58.675389570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 05:46:58.675915 containerd[1580]: time="2025-07-10T05:46:58.675872967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 05:46:59.238221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095623976.mount: Deactivated successfully. 
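Editor's note: each "Pulled image" entry reports the image size and the wall-clock pull time (coredns above: 18562039 bytes in 1.35000977s), which gives an effective pull rate. A small sketch of that arithmetic, with the two values copied from the log:

```go
// pullrate.go - compute effective pull throughput from the size and duration
// containerd reports in its "Pulled image ... in <duration>" lines.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the coredns pull in the log above.
	const sizeBytes = 18562039
	d, err := time.ParseDuration("1.35000977s")
	if err != nil {
		panic(err)
	}
	mib := float64(sizeBytes) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %s ≈ %.1f MiB/s\n", mib, d, mib/d.Seconds())
}
```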
Jul 10 05:46:59.244516 containerd[1580]: time="2025-07-10T05:46:59.244457696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 05:46:59.245141 containerd[1580]: time="2025-07-10T05:46:59.245080464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 05:46:59.246199 containerd[1580]: time="2025-07-10T05:46:59.246154929Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 05:46:59.248339 containerd[1580]: time="2025-07-10T05:46:59.248294973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 05:46:59.248927 containerd[1580]: time="2025-07-10T05:46:59.248878206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 572.982456ms" Jul 10 05:46:59.248927 containerd[1580]: time="2025-07-10T05:46:59.248916007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 05:46:59.249558 containerd[1580]: time="2025-07-10T05:46:59.249532503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 05:46:59.915499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353572898.mount: Deactivated successfully. 
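Editor's note: unlike the other images, the pause:3.10 events above carry an extra io.cri-containerd.pinned: pinned label alongside io.cri-containerd.image: managed; pinned images are exempt from image garbage collection. A hedged sketch of that check over a label map (label keys copied from the log; the actual GC policy lives in containerd/kubelet, not here):

```go
// pinned.go - decide whether an image is protected from image GC based on
// the label containerd attaches, as seen in the pause:3.10 events above.
package main

import "fmt"

// isPinned reports whether an image's labels mark it as pinned.
func isPinned(labels map[string]string) bool {
	return labels["io.cri-containerd.pinned"] == "pinned"
}

func main() {
	pause := map[string]string{
		"io.cri-containerd.image":  "managed",
		"io.cri-containerd.pinned": "pinned",
	}
	etcd := map[string]string{"io.cri-containerd.image": "managed"}

	fmt.Println("pause pinned:", isPinned(pause)) // true  -> skipped by GC
	fmt.Println("etcd pinned:", isPinned(etcd))   // false -> eligible for GC
}
```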
Jul 10 05:47:01.976077 containerd[1580]: time="2025-07-10T05:47:01.975977731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:01.976776 containerd[1580]: time="2025-07-10T05:47:01.976714112Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 10 05:47:01.977874 containerd[1580]: time="2025-07-10T05:47:01.977819686Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:01.980508 containerd[1580]: time="2025-07-10T05:47:01.980479614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:01.981554 containerd[1580]: time="2025-07-10T05:47:01.981527750Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.731969328s" Jul 10 05:47:01.981606 containerd[1580]: time="2025-07-10T05:47:01.981556353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 10 05:47:04.611412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:47:04.611637 systemd[1]: kubelet.service: Consumed 409ms CPU time, 110.3M memory peak. Jul 10 05:47:04.614291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:47:04.637426 systemd[1]: Reload requested from client PID 2254 ('systemctl') (unit session-7.scope)... Jul 10 05:47:04.637440 systemd[1]: Reloading... Jul 10 05:47:04.725390 zram_generator::config[2296]: No configuration found. Jul 10 05:47:04.892035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:47:05.010634 systemd[1]: Reloading finished in 372 ms. Jul 10 05:47:05.077085 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 05:47:05.077186 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 05:47:05.077497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:47:05.077549 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.4M memory peak. Jul 10 05:47:05.079077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:47:05.251729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:47:05.265635 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 05:47:05.433702 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:47:05.433702 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 10 05:47:05.433702 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:47:05.434122 kubelet[2344]: I0710 05:47:05.433768 2344 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 05:47:05.629516 kubelet[2344]: I0710 05:47:05.629460 2344 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 05:47:05.629516 kubelet[2344]: I0710 05:47:05.629497 2344 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 05:47:05.631373 kubelet[2344]: I0710 05:47:05.630220 2344 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 05:47:05.650674 kubelet[2344]: E0710 05:47:05.650616 2344 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:05.651439 kubelet[2344]: I0710 05:47:05.651391 2344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 05:47:05.658854 kubelet[2344]: I0710 05:47:05.658823 2344 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 05:47:05.664273 kubelet[2344]: I0710 05:47:05.664239 2344 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 05:47:05.665428 kubelet[2344]: I0710 05:47:05.665386 2344 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 05:47:05.665610 kubelet[2344]: I0710 05:47:05.665422 2344 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 05:47:05.665720 kubelet[2344]: I0710 05:47:05.665627 2344 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 05:47:05.665720 kubelet[2344]: I0710 05:47:05.665636 2344 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 05:47:05.665822 kubelet[2344]: I0710 05:47:05.665796 2344 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:47:05.668409 kubelet[2344]: I0710 05:47:05.668393 2344 kubelet.go:446] "Attempting to sync node with API server" Jul 10 05:47:05.669820 kubelet[2344]: I0710 05:47:05.669790 2344 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 05:47:05.669865 kubelet[2344]: I0710 05:47:05.669840 2344 kubelet.go:352] "Adding apiserver pod source" Jul 10 05:47:05.669865 kubelet[2344]: I0710 05:47:05.669863 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 05:47:05.672454 kubelet[2344]: W0710 05:47:05.672330 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:05.672454 kubelet[2344]: W0710 05:47:05.672387 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:05.672454 kubelet[2344]: E0710 05:47:05.672420 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:05.672454 kubelet[2344]: E0710 05:47:05.672447 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:05.673833 kubelet[2344]: I0710 05:47:05.673218 2344 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 10 05:47:05.673833 kubelet[2344]: I0710 05:47:05.673671 2344 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 05:47:05.674304 kubelet[2344]: W0710 05:47:05.674289 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 05:47:05.676787 kubelet[2344]: I0710 05:47:05.676746 2344 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 05:47:05.676787 kubelet[2344]: I0710 05:47:05.676792 2344 server.go:1287] "Started kubelet" Jul 10 05:47:05.678285 kubelet[2344]: I0710 05:47:05.678256 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 05:47:05.680247 kubelet[2344]: I0710 05:47:05.680157 2344 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 05:47:05.681375 kubelet[2344]: I0710 05:47:05.681330 2344 server.go:479] "Adding debug handlers to kubelet server" Jul 10 05:47:05.682331 kubelet[2344]: I0710 05:47:05.682256 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 05:47:05.683964 kubelet[2344]: I0710 05:47:05.682506 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 05:47:05.683964 kubelet[2344]: I0710 05:47:05.682609 2344 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 05:47:05.683964 kubelet[2344]: E0710 05:47:05.682160 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850cdb37409df92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 05:47:05.676767122 +0000 UTC m=+0.407477978,LastTimestamp:2025-07-10 05:47:05.676767122 +0000 UTC m=+0.407477978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 05:47:05.683964 kubelet[2344]: E0710 05:47:05.683303 2344 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 05:47:05.683964 kubelet[2344]: E0710 05:47:05.683537 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:47:05.683964 kubelet[2344]: I0710 05:47:05.683569 2344 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 05:47:05.683964 kubelet[2344]: I0710 05:47:05.683719 2344 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 05:47:05.683964 kubelet[2344]: I0710 05:47:05.683761 2344 reconciler.go:26] "Reconciler: start to sync state" Jul 10 05:47:05.684433 kubelet[2344]: E0710 05:47:05.683851 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms" Jul 10 05:47:05.684645 kubelet[2344]: W0710 05:47:05.684593 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:05.684645 kubelet[2344]: E0710 05:47:05.684638 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:05.686332 kubelet[2344]: I0710 05:47:05.685932 2344 factory.go:221] Registration of the containerd container factory successfully Jul 10 05:47:05.686332 kubelet[2344]: I0710 05:47:05.685955 2344 factory.go:221] Registration of the systemd container factory successfully Jul 10 05:47:05.686332 kubelet[2344]: I0710 05:47:05.686115 2344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 05:47:05.696328 kubelet[2344]: I0710 05:47:05.696245 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 05:47:05.697958 kubelet[2344]: I0710 05:47:05.697918 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 05:47:05.697958 kubelet[2344]: I0710 05:47:05.697958 2344 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 05:47:05.698031 kubelet[2344]: I0710 05:47:05.697988 2344 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
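Editor's note: the container-manager nodeConfig dumped a few entries above lists the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A rough sketch of how such signals compare against those thresholds; the thresholds are from the log, the sample node stats are invented, and the real eviction manager is considerably more involved:

```go
// eviction.go - toy evaluation of the hard-eviction thresholds printed in the
// kubelet's nodeConfig above. The node stats below are assumptions.
package main

import "fmt"

func main() {
	// Thresholds from the log: an absolute quantity or a fraction of capacity.
	const memAvailMin = 100 * 1024 * 1024 // memory.available < 100Mi
	const nodefsAvailMin = 0.10           // nodefs.available < 10%
	const imagefsAvailMin = 0.15          // imagefs.available < 15%

	// Hypothetical current stats (not from the log).
	memAvail := int64(80 * 1024 * 1024)
	nodefsFree, nodefsCap := int64(12_000_000_000), int64(100_000_000_000)
	imagefsFree, imagefsCap := int64(20_000_000_000), int64(100_000_000_000)

	if memAvail < memAvailMin {
		fmt.Println("memory.available below 100Mi -> eviction signal")
	}
	if float64(nodefsFree)/float64(nodefsCap) < nodefsAvailMin {
		fmt.Println("nodefs.available below 10% -> eviction signal")
	}
	if float64(imagefsFree)/float64(imagefsCap) < imagefsAvailMin {
		fmt.Println("imagefs.available below 15% -> eviction signal")
	}
}
```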
Jul 10 05:47:05.698031 kubelet[2344]: I0710 05:47:05.697998 2344 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 05:47:05.698085 kubelet[2344]: E0710 05:47:05.698061 2344 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 05:47:05.700242 kubelet[2344]: W0710 05:47:05.700216 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:05.700294 kubelet[2344]: E0710 05:47:05.700252 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:05.704018 kubelet[2344]: I0710 05:47:05.703997 2344 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 05:47:05.704018 kubelet[2344]: I0710 05:47:05.704014 2344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 05:47:05.704111 kubelet[2344]: I0710 05:47:05.704037 2344 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:47:05.784141 kubelet[2344]: E0710 05:47:05.784082 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:47:05.798461 kubelet[2344]: E0710 05:47:05.798348 2344 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 05:47:05.884837 kubelet[2344]: E0710 05:47:05.884636 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:47:05.885172 kubelet[2344]: E0710 05:47:05.885136 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" Jul 10 05:47:05.985984 kubelet[2344]: E0710 05:47:05.985894 2344 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:47:05.999108 kubelet[2344]: E0710 05:47:05.999037 2344 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 05:47:06.003499 kubelet[2344]: I0710 05:47:06.003472 2344 policy_none.go:49] "None policy: Start" Jul 10 05:47:06.003545 kubelet[2344]: I0710 05:47:06.003518 2344 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 05:47:06.003545 kubelet[2344]: I0710 05:47:06.003539 2344 state_mem.go:35] "Initializing new in-memory state store" Jul 10 05:47:06.017057 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 05:47:06.029220 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 05:47:06.032290 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 05:47:06.050653 kubelet[2344]: I0710 05:47:06.050596 2344 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 05:47:06.051349 kubelet[2344]: I0710 05:47:06.050882 2344 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 05:47:06.051349 kubelet[2344]: I0710 05:47:06.050896 2344 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 05:47:06.051349 kubelet[2344]: I0710 05:47:06.051161 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 05:47:06.052413 kubelet[2344]: E0710 05:47:06.052391 2344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 05:47:06.052550 kubelet[2344]: E0710 05:47:06.052510 2344 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 05:47:06.153410 kubelet[2344]: I0710 05:47:06.153196 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:47:06.153878 kubelet[2344]: E0710 05:47:06.153848 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jul 10 05:47:06.286316 kubelet[2344]: E0710 05:47:06.286250 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" Jul 10 05:47:06.355952 kubelet[2344]: I0710 05:47:06.355903 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:47:06.356484 kubelet[2344]: E0710 05:47:06.356444 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jul 10 05:47:06.409604 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 10 05:47:06.435171 kubelet[2344]: E0710 05:47:06.435116 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:06.438121 systemd[1]: Created slice kubepods-burstable-podcd66b28176cfa81dc44ff140f276c451.slice - libcontainer container kubepods-burstable-podcd66b28176cfa81dc44ff140f276c451.slice. Jul 10 05:47:06.448628 kubelet[2344]: E0710 05:47:06.448592 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:06.451353 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
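Editor's note: the kubepods-burstable-pod&lt;uid&gt;.slice units created above reflect the systemd cgroup driver's naming, where the cgroup path components (/kubepods/burstable/pod&lt;uid&gt;) are joined with dashes and suffixed with .slice, and dashes inside a component are escaped to underscores. A small sketch of that convention using the kube-scheduler static-pod UID from the log; treat it as illustrative, not the kubelet's exact escaping code:

```go
// slicename.go - approximate the systemd cgroup-driver naming seen above:
// /kubepods/burstable/pod<uid> -> kubepods-burstable-pod<uid>.slice.
package main

import (
	"fmt"
	"strings"
)

// toSystemdSlice joins cgroup path components with '-' and appends ".slice".
// Dashes inside a component are escaped to '_' so they do not read as nesting.
func toSystemdSlice(components ...string) string {
	escaped := make([]string, len(components))
	for i, c := range components {
		escaped[i] = strings.ReplaceAll(c, "-", "_")
	}
	return strings.Join(escaped, "-") + ".slice"
}

func main() {
	// UID of the kube-scheduler static pod, copied from the log.
	fmt.Println(toSystemdSlice("kubepods", "burstable", "pod8a75e163f27396b2168da0f88f85f8a5"))
	// -> kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice
}
```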
Jul 10 05:47:06.453419 kubelet[2344]: E0710 05:47:06.453350 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:06.488983 kubelet[2344]: I0710 05:47:06.488912 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 05:47:06.488983 kubelet[2344]: I0710 05:47:06.488972 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd66b28176cfa81dc44ff140f276c451-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd66b28176cfa81dc44ff140f276c451\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:06.488983 kubelet[2344]: I0710 05:47:06.489000 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd66b28176cfa81dc44ff140f276c451-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd66b28176cfa81dc44ff140f276c451\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:06.489190 kubelet[2344]: I0710 05:47:06.489018 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:06.489190 kubelet[2344]: I0710 05:47:06.489051 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:06.489190 kubelet[2344]: I0710 05:47:06.489068 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd66b28176cfa81dc44ff140f276c451-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd66b28176cfa81dc44ff140f276c451\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:06.489190 kubelet[2344]: I0710 05:47:06.489082 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:06.489190 kubelet[2344]: I0710 05:47:06.489161 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:06.489312 kubelet[2344]: I0710 05:47:06.489223 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:06.622262 kubelet[2344]: W0710 05:47:06.622163 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:06.622262 kubelet[2344]: E0710 05:47:06.622264 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:06.693045 kubelet[2344]: W0710 05:47:06.692843 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:06.693045 kubelet[2344]: E0710 05:47:06.692939 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:06.736015 kubelet[2344]: E0710 05:47:06.735938 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:06.736814 containerd[1580]: time="2025-07-10T05:47:06.736741733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:06.749126 kubelet[2344]: E0710 05:47:06.749057 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:06.749727 containerd[1580]: time="2025-07-10T05:47:06.749675629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd66b28176cfa81dc44ff140f276c451,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:06.754100 kubelet[2344]: E0710 05:47:06.754022 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:06.754679 containerd[1580]: time="2025-07-10T05:47:06.754589133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:06.757894 kubelet[2344]: I0710 05:47:06.757871 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:47:06.758477 kubelet[2344]: E0710 05:47:06.758407 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jul 10 05:47:06.827263 containerd[1580]: 
time="2025-07-10T05:47:06.827195056Z" level=info msg="connecting to shim b3a13aa3c012177fac8cb67747d940ea1ef1fa66c881e05387de6c9d81d12f83" address="unix:///run/containerd/s/010c787f0ac3b3ac098f6910ac61a0c179f39221ebc8069f34f5dde6f4abe309" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:06.838392 containerd[1580]: time="2025-07-10T05:47:06.837528424Z" level=info msg="connecting to shim 4afa13553ccebde0856860e9a4ae0c5959558503e7f62f6b764f23f71a955bde" address="unix:///run/containerd/s/9b425540dcba061138699a2752207b200b25cfc7704e582fc96d9fe650bd637b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:06.923982 kubelet[2344]: W0710 05:47:06.909603 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:06.923982 kubelet[2344]: E0710 05:47:06.909683 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:06.930514 systemd[1]: Started cri-containerd-b3a13aa3c012177fac8cb67747d940ea1ef1fa66c881e05387de6c9d81d12f83.scope - libcontainer container b3a13aa3c012177fac8cb67747d940ea1ef1fa66c881e05387de6c9d81d12f83. Jul 10 05:47:06.948521 systemd[1]: Started cri-containerd-4afa13553ccebde0856860e9a4ae0c5959558503e7f62f6b764f23f71a955bde.scope - libcontainer container 4afa13553ccebde0856860e9a4ae0c5959558503e7f62f6b764f23f71a955bde. Jul 10 05:47:07.086882 kubelet[2344]: E0710 05:47:07.086836 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" Jul 10 05:47:07.204780 containerd[1580]: time="2025-07-10T05:47:07.204639298Z" level=info msg="connecting to shim 8e45ce81769b36e4181038c5de094d76d5f72086bd537fe3063d23e3c325b86d" address="unix:///run/containerd/s/d6457badc93118bdfae3c6fb34cc5b13d839954606d18433d3729806a88de260" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:07.213470 kubelet[2344]: W0710 05:47:07.213353 2344 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jul 10 05:47:07.213470 kubelet[2344]: E0710 05:47:07.213471 2344 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Jul 10 05:47:07.237504 systemd[1]: Started cri-containerd-8e45ce81769b36e4181038c5de094d76d5f72086bd537fe3063d23e3c325b86d.scope - libcontainer container 8e45ce81769b36e4181038c5de094d76d5f72086bd537fe3063d23e3c325b86d. 
Jul 10 05:47:07.280682 containerd[1580]: time="2025-07-10T05:47:07.280615210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd66b28176cfa81dc44ff140f276c451,Namespace:kube-system,Attempt:0,} returns sandbox id \"4afa13553ccebde0856860e9a4ae0c5959558503e7f62f6b764f23f71a955bde\"" Jul 10 05:47:07.282182 kubelet[2344]: E0710 05:47:07.282135 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:07.283825 containerd[1580]: time="2025-07-10T05:47:07.283794793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a13aa3c012177fac8cb67747d940ea1ef1fa66c881e05387de6c9d81d12f83\"" Jul 10 05:47:07.284255 kubelet[2344]: E0710 05:47:07.284227 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:07.284515 containerd[1580]: time="2025-07-10T05:47:07.284439692Z" level=info msg="CreateContainer within sandbox \"4afa13553ccebde0856860e9a4ae0c5959558503e7f62f6b764f23f71a955bde\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 05:47:07.285249 containerd[1580]: time="2025-07-10T05:47:07.285224314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e45ce81769b36e4181038c5de094d76d5f72086bd537fe3063d23e3c325b86d\"" Jul 10 05:47:07.285920 kubelet[2344]: E0710 05:47:07.285892 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:07.286232 containerd[1580]: time="2025-07-10T05:47:07.286199783Z" level=info msg="CreateContainer within sandbox \"b3a13aa3c012177fac8cb67747d940ea1ef1fa66c881e05387de6c9d81d12f83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 05:47:07.295473 containerd[1580]: time="2025-07-10T05:47:07.295427048Z" level=info msg="Container 13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:07.299489 containerd[1580]: time="2025-07-10T05:47:07.299452998Z" level=info msg="CreateContainer within sandbox \"8e45ce81769b36e4181038c5de094d76d5f72086bd537fe3063d23e3c325b86d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 05:47:07.314824 containerd[1580]: time="2025-07-10T05:47:07.314743493Z" level=info msg="Container 05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:07.315342 containerd[1580]: time="2025-07-10T05:47:07.315292252Z" level=info msg="CreateContainer within sandbox \"b3a13aa3c012177fac8cb67747d940ea1ef1fa66c881e05387de6c9d81d12f83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51\"" Jul 10 05:47:07.316383 containerd[1580]: time="2025-07-10T05:47:07.316149149Z" level=info msg="StartContainer for \"13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51\"" Jul 10 05:47:07.317749 containerd[1580]: time="2025-07-10T05:47:07.317723792Z" level=info msg="connecting to 
shim 13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51" address="unix:///run/containerd/s/010c787f0ac3b3ac098f6910ac61a0c179f39221ebc8069f34f5dde6f4abe309" protocol=ttrpc version=3 Jul 10 05:47:07.318167 containerd[1580]: time="2025-07-10T05:47:07.318121848Z" level=info msg="Container 1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:07.324735 containerd[1580]: time="2025-07-10T05:47:07.324697570Z" level=info msg="CreateContainer within sandbox \"4afa13553ccebde0856860e9a4ae0c5959558503e7f62f6b764f23f71a955bde\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6\"" Jul 10 05:47:07.325444 containerd[1580]: time="2025-07-10T05:47:07.325333212Z" level=info msg="StartContainer for \"05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6\"" Jul 10 05:47:07.326729 containerd[1580]: time="2025-07-10T05:47:07.326699314Z" level=info msg="connecting to shim 05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6" address="unix:///run/containerd/s/9b425540dcba061138699a2752207b200b25cfc7704e582fc96d9fe650bd637b" protocol=ttrpc version=3 Jul 10 05:47:07.328185 containerd[1580]: time="2025-07-10T05:47:07.328126731Z" level=info msg="CreateContainer within sandbox \"8e45ce81769b36e4181038c5de094d76d5f72086bd537fe3063d23e3c325b86d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2\"" Jul 10 05:47:07.328825 containerd[1580]: time="2025-07-10T05:47:07.328791107Z" level=info msg="StartContainer for \"1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2\"" Jul 10 05:47:07.329754 containerd[1580]: time="2025-07-10T05:47:07.329723947Z" level=info msg="connecting to shim 1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2" address="unix:///run/containerd/s/d6457badc93118bdfae3c6fb34cc5b13d839954606d18433d3729806a88de260" protocol=ttrpc version=3 Jul 10 05:47:07.344762 systemd[1]: Started cri-containerd-13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51.scope - libcontainer container 13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51. Jul 10 05:47:07.383648 systemd[1]: Started cri-containerd-05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6.scope - libcontainer container 05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6. Jul 10 05:47:07.388902 systemd[1]: Started cri-containerd-1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2.scope - libcontainer container 1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2. 
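Editor's note: the RunPodSandbox, CreateContainer, and StartContainer messages above are the kubelet driving containerd through the CRI gRPC API, with the &PodSandboxMetadata{...} and &ContainerMetadata{...} values printed verbatim. A hedged sketch of the same three calls using the published CRI client; the socket path is an assumption, and a real caller would populate far more of the sandbox and container config than shown here:

```go
// criflow.go - minimal sketch (not the kubelet's code) of the CRI call
// sequence visible in the log: RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint; the log only shows the per-shim ttrpc sockets.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata mirrors the PodSandboxMetadata printed in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "8a75e163f27396b2168da0f88f85f8a5",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Create and start a container inside the returned sandbox id.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.6"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", c.ContainerId, sb.PodSandboxId)
}
```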
Jul 10 05:47:07.458418 containerd[1580]: time="2025-07-10T05:47:07.457778394Z" level=info msg="StartContainer for \"05a08e536697e21650a1862e2a1fb31cbecaa7ecb19fae66234ab770ba87cfc6\" returns successfully" Jul 10 05:47:07.458885 containerd[1580]: time="2025-07-10T05:47:07.458840586Z" level=info msg="StartContainer for \"13c1ceaf48370c807d5e11e9b9e8e07154b38f37a4173032977afd5f57b93e51\" returns successfully" Jul 10 05:47:07.479612 containerd[1580]: time="2025-07-10T05:47:07.479571945Z" level=info msg="StartContainer for \"1e001d56f35a13b2b0caf5c73eb9069ae15c125ec65509904025a97022abb2d2\" returns successfully" Jul 10 05:47:07.561312 kubelet[2344]: I0710 05:47:07.560903 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:47:07.561312 kubelet[2344]: E0710 05:47:07.561210 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Jul 10 05:47:07.708024 kubelet[2344]: E0710 05:47:07.707980 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:07.708174 kubelet[2344]: E0710 05:47:07.708117 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:07.711997 kubelet[2344]: E0710 05:47:07.711916 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:07.712037 kubelet[2344]: E0710 05:47:07.712016 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:07.714572 kubelet[2344]: E0710 05:47:07.714544 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:07.714677 kubelet[2344]: E0710 05:47:07.714654 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:08.719114 kubelet[2344]: E0710 05:47:08.719069 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:08.719626 kubelet[2344]: E0710 05:47:08.719229 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:08.720333 kubelet[2344]: E0710 05:47:08.719747 2344 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 05:47:08.720333 kubelet[2344]: E0710 05:47:08.719875 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:09.162973 kubelet[2344]: I0710 05:47:09.162928 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:47:09.256853 kubelet[2344]: E0710 05:47:09.256802 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 05:47:09.331159 kubelet[2344]: I0710 05:47:09.331081 2344 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 05:47:09.331159 kubelet[2344]: E0710 05:47:09.331121 2344 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 05:47:09.384735 kubelet[2344]: I0710 05:47:09.384646 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:09.393688 kubelet[2344]: E0710 05:47:09.393654 2344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:09.393688 kubelet[2344]: I0710 05:47:09.393685 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:09.395652 kubelet[2344]: E0710 05:47:09.395617 2344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:09.395652 kubelet[2344]: I0710 05:47:09.395634 2344 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 05:47:09.396856 kubelet[2344]: E0710 05:47:09.396827 2344 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 05:47:09.672811 kubelet[2344]: I0710 05:47:09.672768 2344 apiserver.go:52] "Watching apiserver" Jul 10 05:47:09.684343 kubelet[2344]: I0710 05:47:09.684296 2344 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 05:47:11.283928 systemd[1]: Reload requested from client PID 2620 ('systemctl') (unit session-7.scope)... Jul 10 05:47:11.283944 systemd[1]: Reloading... Jul 10 05:47:11.403404 zram_generator::config[2666]: No configuration found. Jul 10 05:47:11.504460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 05:47:11.637125 systemd[1]: Reloading finished in 352 ms. Jul 10 05:47:11.667608 kubelet[2344]: I0710 05:47:11.667559 2344 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 05:47:11.667758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:47:11.693659 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 05:47:11.694021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 05:47:11.694077 systemd[1]: kubelet.service: Consumed 1.053s CPU time, 133M memory peak. Jul 10 05:47:11.696044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 05:47:11.894217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 05:47:11.899491 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 05:47:11.975256 kubelet[2708]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:47:11.975256 kubelet[2708]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 05:47:11.975256 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 05:47:11.975881 kubelet[2708]: I0710 05:47:11.975818 2708 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 05:47:11.984045 kubelet[2708]: I0710 05:47:11.984007 2708 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 05:47:11.984045 kubelet[2708]: I0710 05:47:11.984032 2708 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 05:47:11.984335 kubelet[2708]: I0710 05:47:11.984311 2708 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 05:47:11.985437 kubelet[2708]: I0710 05:47:11.985413 2708 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 05:47:11.987463 kubelet[2708]: I0710 05:47:11.987411 2708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 05:47:11.994255 kubelet[2708]: I0710 05:47:11.994229 2708 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 05:47:12.000155 kubelet[2708]: I0710 05:47:12.000120 2708 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 05:47:12.000410 kubelet[2708]: I0710 05:47:12.000349 2708 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 05:47:12.000588 kubelet[2708]: I0710 05:47:12.000402 2708 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 05:47:12.000698 kubelet[2708]: I0710 05:47:12.000591 2708 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 05:47:12.000698 kubelet[2708]: I0710 05:47:12.000601 2708 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 05:47:12.000698 kubelet[2708]: I0710 05:47:12.000652 2708 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:47:12.000840 kubelet[2708]: I0710 05:47:12.000825 2708 kubelet.go:446] "Attempting to sync node with API server" Jul 10 05:47:12.000890 kubelet[2708]: I0710 05:47:12.000851 2708 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 05:47:12.000890 kubelet[2708]: I0710 05:47:12.000874 2708 kubelet.go:352] "Adding apiserver pod source" Jul 10 05:47:12.000890 kubelet[2708]: I0710 05:47:12.000884 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 05:47:12.002038 kubelet[2708]: I0710 05:47:12.002005 2708 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 10 05:47:12.002464 kubelet[2708]: I0710 05:47:12.002436 2708 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 05:47:12.003201 kubelet[2708]: I0710 05:47:12.003172 2708 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 05:47:12.003253 kubelet[2708]: I0710 05:47:12.003230 2708 server.go:1287] "Started kubelet" Jul 10 05:47:12.003684 kubelet[2708]: I0710 05:47:12.003599 2708 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.004500 2708 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.005161 2708 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.005234 2708 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.005244 2708 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.005430 2708 reconciler.go:26] "Reconciler: start to sync state" Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.005649 2708 server.go:479] "Adding debug handlers to kubelet server" Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.005926 2708 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 05:47:12.006376 kubelet[2708]: I0710 05:47:12.006242 2708 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 05:47:12.007680 kubelet[2708]: I0710 05:47:12.007655 2708 factory.go:221] Registration of the systemd container factory successfully Jul 10 05:47:12.007816 kubelet[2708]: I0710 05:47:12.007798 2708 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 05:47:12.016800 kubelet[2708]: I0710 05:47:12.016768 2708 factory.go:221] Registration of the containerd container factory successfully Jul 10 05:47:12.022642 kubelet[2708]: E0710 05:47:12.012066 2708 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 05:47:12.023761 kubelet[2708]: I0710 05:47:12.023694 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 05:47:12.029286 kubelet[2708]: I0710 05:47:12.029248 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 05:47:12.029349 kubelet[2708]: I0710 05:47:12.029307 2708 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 05:47:12.029349 kubelet[2708]: I0710 05:47:12.029337 2708 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 05:47:12.029349 kubelet[2708]: I0710 05:47:12.029349 2708 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 05:47:12.029457 kubelet[2708]: E0710 05:47:12.029439 2708 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 05:47:12.054937 kubelet[2708]: I0710 05:47:12.054901 2708 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 05:47:12.054937 kubelet[2708]: I0710 05:47:12.054922 2708 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 05:47:12.054937 kubelet[2708]: I0710 05:47:12.054940 2708 state_mem.go:36] "Initialized new in-memory state store" Jul 10 05:47:12.055123 kubelet[2708]: I0710 05:47:12.055089 2708 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 05:47:12.055123 kubelet[2708]: I0710 05:47:12.055100 2708 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 05:47:12.055123 kubelet[2708]: I0710 05:47:12.055119 2708 policy_none.go:49] "None policy: Start" Jul 10 05:47:12.055197 kubelet[2708]: I0710 05:47:12.055134 2708 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 05:47:12.055197 kubelet[2708]: I0710 05:47:12.055144 2708 state_mem.go:35] "Initializing new in-memory state store" Jul 10 05:47:12.055308 kubelet[2708]: I0710 05:47:12.055288 2708 state_mem.go:75] "Updated machine memory state" Jul 10 05:47:12.062967 kubelet[2708]: I0710 05:47:12.062843 2708 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 05:47:12.063106 kubelet[2708]: I0710 05:47:12.063064 2708 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 05:47:12.063106 kubelet[2708]: I0710 05:47:12.063076 2708 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 05:47:12.063301 kubelet[2708]: I0710 05:47:12.063276 2708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 05:47:12.063947 kubelet[2708]: E0710 05:47:12.063907 2708 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 05:47:12.130649 kubelet[2708]: I0710 05:47:12.130573 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:12.130787 kubelet[2708]: I0710 05:47:12.130754 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 05:47:12.132022 kubelet[2708]: I0710 05:47:12.131913 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:12.169053 kubelet[2708]: I0710 05:47:12.168673 2708 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 05:47:12.175833 kubelet[2708]: I0710 05:47:12.175803 2708 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 05:47:12.175990 kubelet[2708]: I0710 05:47:12.175869 2708 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 05:47:12.197028 sudo[2743]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 05:47:12.197389 sudo[2743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 05:47:12.306527 kubelet[2708]: I0710 05:47:12.306438 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd66b28176cfa81dc44ff140f276c451-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd66b28176cfa81dc44ff140f276c451\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:12.306527 kubelet[2708]: I0710 05:47:12.306515 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:12.306797 kubelet[2708]: I0710 05:47:12.306624 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:12.306797 kubelet[2708]: I0710 05:47:12.306679 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 05:47:12.306797 kubelet[2708]: I0710 05:47:12.306716 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd66b28176cfa81dc44ff140f276c451-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd66b28176cfa81dc44ff140f276c451\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:12.306991 kubelet[2708]: I0710 05:47:12.306939 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd66b28176cfa81dc44ff140f276c451-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"cd66b28176cfa81dc44ff140f276c451\") " pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:12.307085 kubelet[2708]: I0710 05:47:12.307019 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:12.307169 kubelet[2708]: I0710 05:47:12.307088 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:12.307230 kubelet[2708]: I0710 05:47:12.307176 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 05:47:12.436010 kubelet[2708]: E0710 05:47:12.435921 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:12.437132 kubelet[2708]: E0710 05:47:12.437095 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:12.437388 kubelet[2708]: E0710 05:47:12.437333 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:12.614294 sudo[2743]: pam_unix(sudo:session): session closed for user root Jul 10 05:47:13.001155 kubelet[2708]: I0710 05:47:13.001098 2708 apiserver.go:52] "Watching apiserver" Jul 10 05:47:13.005833 kubelet[2708]: I0710 05:47:13.005794 2708 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 05:47:13.040519 kubelet[2708]: E0710 05:47:13.040480 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:13.040693 kubelet[2708]: I0710 05:47:13.040532 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:13.042534 kubelet[2708]: E0710 05:47:13.041331 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:13.046656 kubelet[2708]: E0710 05:47:13.046548 2708 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 05:47:13.046744 kubelet[2708]: E0710 05:47:13.046717 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:13.068523 kubelet[2708]: I0710 05:47:13.068404 2708 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.068385816 podStartE2EDuration="1.068385816s" podCreationTimestamp="2025-07-10 05:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:47:13.059698965 +0000 UTC m=+1.156411155" watchObservedRunningTime="2025-07-10 05:47:13.068385816 +0000 UTC m=+1.165097986" Jul 10 05:47:13.077481 kubelet[2708]: I0710 05:47:13.077405 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.077378631 podStartE2EDuration="1.077378631s" podCreationTimestamp="2025-07-10 05:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:47:13.077345679 +0000 UTC m=+1.174057849" watchObservedRunningTime="2025-07-10 05:47:13.077378631 +0000 UTC m=+1.174090801" Jul 10 05:47:13.077584 kubelet[2708]: I0710 05:47:13.077498 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.077494438 podStartE2EDuration="1.077494438s" podCreationTimestamp="2025-07-10 05:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:47:13.06863307 +0000 UTC m=+1.165345230" watchObservedRunningTime="2025-07-10 05:47:13.077494438 +0000 UTC m=+1.174206608" Jul 10 05:47:13.941416 sudo[1785]: pam_unix(sudo:session): session closed for user root Jul 10 05:47:13.943124 sshd[1784]: Connection closed by 10.0.0.1 port 53484 Jul 10 05:47:13.943566 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jul 10 05:47:13.948123 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:53484.service: Deactivated successfully. Jul 10 05:47:13.950407 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 05:47:13.950643 systemd[1]: session-7.scope: Consumed 5.540s CPU time, 262.5M memory peak. Jul 10 05:47:13.951872 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. Jul 10 05:47:13.952971 systemd-logind[1554]: Removed session 7. 
Jul 10 05:47:14.041196 kubelet[2708]: E0710 05:47:14.041168 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:14.041571 kubelet[2708]: E0710 05:47:14.041204 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:15.645109 kubelet[2708]: E0710 05:47:15.645071 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:15.955448 kubelet[2708]: E0710 05:47:15.955294 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:16.395684 kubelet[2708]: I0710 05:47:16.395622 2708 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 05:47:16.396028 containerd[1580]: time="2025-07-10T05:47:16.395980154Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 05:47:16.396469 kubelet[2708]: I0710 05:47:16.396293 2708 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 05:47:16.892777 kubelet[2708]: E0710 05:47:16.892735 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:17.399958 systemd[1]: Created slice kubepods-besteffort-podb69ebdbd_c09e_45c2_8b57_4bec0b4bf520.slice - libcontainer container kubepods-besteffort-podb69ebdbd_c09e_45c2_8b57_4bec0b4bf520.slice. 
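The recurring dns.go:153 errors in this journal come from the node's resolver configuration listing more than the three nameservers the kubelet (like glibc) will use, so only the first three are applied, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A small sketch for spotting the condition on a node; the /etc/resolv.conf path is the usual default and an assumption:

    #!/usr/bin/env python3
    # Count nameserver entries in resolv.conf; anything beyond three is dropped by the
    # kubelet when it builds pod DNS config, which is what the dns.go:153 warnings report.
    from pathlib import Path

    resolv_conf = Path("/etc/resolv.conf")   # usual default path; an assumption here
    limit = 3

    nameservers = []
    for line in resolv_conf.read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            nameservers.append(parts[1])

    if len(nameservers) > limit:
        print(f"{len(nameservers)} nameservers configured; only the first {limit} are applied: {nameservers[:limit]}")
    else:
        print(f"{len(nameservers)} nameservers configured: {nameservers}")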
Jul 10 05:47:17.440588 kubelet[2708]: I0710 05:47:17.440520 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b69ebdbd-c09e-45c2-8b57-4bec0b4bf520-lib-modules\") pod \"kube-proxy-zzjmb\" (UID: \"b69ebdbd-c09e-45c2-8b57-4bec0b4bf520\") " pod="kube-system/kube-proxy-zzjmb" Jul 10 05:47:17.440588 kubelet[2708]: I0710 05:47:17.440572 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btgxq\" (UniqueName: \"kubernetes.io/projected/b69ebdbd-c09e-45c2-8b57-4bec0b4bf520-kube-api-access-btgxq\") pod \"kube-proxy-zzjmb\" (UID: \"b69ebdbd-c09e-45c2-8b57-4bec0b4bf520\") " pod="kube-system/kube-proxy-zzjmb" Jul 10 05:47:17.440817 kubelet[2708]: I0710 05:47:17.440610 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b69ebdbd-c09e-45c2-8b57-4bec0b4bf520-kube-proxy\") pod \"kube-proxy-zzjmb\" (UID: \"b69ebdbd-c09e-45c2-8b57-4bec0b4bf520\") " pod="kube-system/kube-proxy-zzjmb" Jul 10 05:47:17.440817 kubelet[2708]: I0710 05:47:17.440635 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b69ebdbd-c09e-45c2-8b57-4bec0b4bf520-xtables-lock\") pod \"kube-proxy-zzjmb\" (UID: \"b69ebdbd-c09e-45c2-8b57-4bec0b4bf520\") " pod="kube-system/kube-proxy-zzjmb" Jul 10 05:47:17.480805 systemd[1]: Created slice kubepods-burstable-pod3109a114_bf52_4057_9feb_a423c1a9b834.slice - libcontainer container kubepods-burstable-pod3109a114_bf52_4057_9feb_a423c1a9b834.slice. Jul 10 05:47:17.540524 systemd[1]: Created slice kubepods-besteffort-pod9b893876_18df_4210_ac5a_888dfd8f36fc.slice - libcontainer container kubepods-besteffort-pod9b893876_18df_4210_ac5a_888dfd8f36fc.slice. 
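For correlating the pod UIDs in these volume entries with the kubepods-*.slice names systemd logs, note that the slice name embeds the QoS class and the pod UID with dashes mapped to underscores; the helper below is inferred purely from the "Created slice" lines above and is meant only as a log-reading aid:

    #!/usr/bin/env python3
    # Rebuild the cgroup slice names seen in the journal from QoS class and pod UID,
    # so pods can be matched to systemd "Created slice" entries. Naming inferred from
    # the log lines above.

    def pod_slice(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("besteffort", "b69ebdbd-c09e-45c2-8b57-4bec0b4bf520"))  # kube-proxy-zzjmb
    print(pod_slice("burstable",  "3109a114-bf52-4057-9feb-a423c1a9b834"))  # cilium-4mhtt
    print(pod_slice("besteffort", "9b893876-18df-4210-ac5a-888dfd8f36fc"))  # cilium-operator-6c4d7847fc-vk8f4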
Jul 10 05:47:17.541494 kubelet[2708]: I0710 05:47:17.540889 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-kernel\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541494 kubelet[2708]: I0710 05:47:17.540929 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-bpf-maps\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541494 kubelet[2708]: I0710 05:47:17.540948 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-cgroup\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541494 kubelet[2708]: I0710 05:47:17.540966 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3109a114-bf52-4057-9feb-a423c1a9b834-clustermesh-secrets\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541494 kubelet[2708]: I0710 05:47:17.540989 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-net\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541494 kubelet[2708]: I0710 05:47:17.541010 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-etc-cni-netd\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541706 kubelet[2708]: I0710 05:47:17.541027 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-config-path\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541706 kubelet[2708]: I0710 05:47:17.541050 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-hubble-tls\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541706 kubelet[2708]: I0710 05:47:17.541096 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-hostproc\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541706 kubelet[2708]: I0710 05:47:17.541117 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cni-path\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541706 kubelet[2708]: I0710 05:47:17.541269 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-lib-modules\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541706 kubelet[2708]: I0710 05:47:17.541422 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-xtables-lock\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541875 kubelet[2708]: I0710 05:47:17.541513 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-run\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.541875 kubelet[2708]: I0710 05:47:17.541543 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-kube-api-access-g98rk\") pod \"cilium-4mhtt\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " pod="kube-system/cilium-4mhtt" Jul 10 05:47:17.641990 kubelet[2708]: I0710 05:47:17.641927 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b893876-18df-4210-ac5a-888dfd8f36fc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vk8f4\" (UID: \"9b893876-18df-4210-ac5a-888dfd8f36fc\") " pod="kube-system/cilium-operator-6c4d7847fc-vk8f4" Jul 10 05:47:17.642122 kubelet[2708]: I0710 05:47:17.642041 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mh7\" (UniqueName: \"kubernetes.io/projected/9b893876-18df-4210-ac5a-888dfd8f36fc-kube-api-access-q7mh7\") pod \"cilium-operator-6c4d7847fc-vk8f4\" (UID: \"9b893876-18df-4210-ac5a-888dfd8f36fc\") " pod="kube-system/cilium-operator-6c4d7847fc-vk8f4" Jul 10 05:47:17.784936 kubelet[2708]: E0710 05:47:17.784812 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:17.785658 containerd[1580]: time="2025-07-10T05:47:17.785596351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mhtt,Uid:3109a114-bf52-4057-9feb-a423c1a9b834,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:17.804740 containerd[1580]: time="2025-07-10T05:47:17.804672486Z" level=info msg="connecting to shim 4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270" address="unix:///run/containerd/s/8a2aede03f4609989772eaa720ceded18783ee882cb411d97a34cdde2eb09377" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:17.835527 systemd[1]: Started cri-containerd-4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270.scope - libcontainer container 4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270. 
Jul 10 05:47:17.844809 kubelet[2708]: E0710 05:47:17.844771 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:17.845456 containerd[1580]: time="2025-07-10T05:47:17.845421103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vk8f4,Uid:9b893876-18df-4210-ac5a-888dfd8f36fc,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:17.866064 containerd[1580]: time="2025-07-10T05:47:17.866003619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mhtt,Uid:3109a114-bf52-4057-9feb-a423c1a9b834,Namespace:kube-system,Attempt:0,} returns sandbox id \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\"" Jul 10 05:47:17.866830 kubelet[2708]: E0710 05:47:17.866802 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:17.867987 containerd[1580]: time="2025-07-10T05:47:17.867960380Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 05:47:17.868556 containerd[1580]: time="2025-07-10T05:47:17.868474644Z" level=info msg="connecting to shim a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df" address="unix:///run/containerd/s/549a77ed9f7a3e95a1e3eea329c14c2defbfd5ee49c9c3b8761bc5c30e6e6066" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:17.894517 systemd[1]: Started cri-containerd-a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df.scope - libcontainer container a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df. Jul 10 05:47:17.942623 containerd[1580]: time="2025-07-10T05:47:17.942583881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vk8f4,Uid:9b893876-18df-4210-ac5a-888dfd8f36fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\"" Jul 10 05:47:17.943153 kubelet[2708]: E0710 05:47:17.943132 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:18.012593 kubelet[2708]: E0710 05:47:18.012532 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:18.013279 containerd[1580]: time="2025-07-10T05:47:18.013230537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzjmb,Uid:b69ebdbd-c09e-45c2-8b57-4bec0b4bf520,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:18.055556 containerd[1580]: time="2025-07-10T05:47:18.055492612Z" level=info msg="connecting to shim 7e70e30196563678e109619951ebb7a107a9011c677ec983c58099c7946a4698" address="unix:///run/containerd/s/fcd09fd9d45f6100704befff8d73330588c2f83ad05e0b0e6c5853dd449ae160" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:18.087528 systemd[1]: Started cri-containerd-7e70e30196563678e109619951ebb7a107a9011c677ec983c58099c7946a4698.scope - libcontainer container 7e70e30196563678e109619951ebb7a107a9011c677ec983c58099c7946a4698. 
Jul 10 05:47:18.117008 containerd[1580]: time="2025-07-10T05:47:18.116947299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzjmb,Uid:b69ebdbd-c09e-45c2-8b57-4bec0b4bf520,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e70e30196563678e109619951ebb7a107a9011c677ec983c58099c7946a4698\"" Jul 10 05:47:18.117868 kubelet[2708]: E0710 05:47:18.117837 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:18.120518 containerd[1580]: time="2025-07-10T05:47:18.120468134Z" level=info msg="CreateContainer within sandbox \"7e70e30196563678e109619951ebb7a107a9011c677ec983c58099c7946a4698\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 05:47:18.132937 containerd[1580]: time="2025-07-10T05:47:18.132879618Z" level=info msg="Container 7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:18.143726 containerd[1580]: time="2025-07-10T05:47:18.143672771Z" level=info msg="CreateContainer within sandbox \"7e70e30196563678e109619951ebb7a107a9011c677ec983c58099c7946a4698\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4\"" Jul 10 05:47:18.144425 containerd[1580]: time="2025-07-10T05:47:18.144376825Z" level=info msg="StartContainer for \"7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4\"" Jul 10 05:47:18.145882 containerd[1580]: time="2025-07-10T05:47:18.145853856Z" level=info msg="connecting to shim 7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4" address="unix:///run/containerd/s/fcd09fd9d45f6100704befff8d73330588c2f83ad05e0b0e6c5853dd449ae160" protocol=ttrpc version=3 Jul 10 05:47:18.168508 systemd[1]: Started cri-containerd-7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4.scope - libcontainer container 7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4. Jul 10 05:47:18.213424 containerd[1580]: time="2025-07-10T05:47:18.213372856Z" level=info msg="StartContainer for \"7b4d3f470741037f9cb410abc47b9a395f85cc3bfdb0a533133797859735acf4\" returns successfully" Jul 10 05:47:19.052795 kubelet[2708]: E0710 05:47:19.052352 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:19.061113 kubelet[2708]: I0710 05:47:19.061046 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zzjmb" podStartSLOduration=2.06102418 podStartE2EDuration="2.06102418s" podCreationTimestamp="2025-07-10 05:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:47:19.060896066 +0000 UTC m=+7.157608266" watchObservedRunningTime="2025-07-10 05:47:19.06102418 +0000 UTC m=+7.157736350" Jul 10 05:47:21.803506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666093705.mount: Deactivated successfully. 
Jul 10 05:47:25.346638 containerd[1580]: time="2025-07-10T05:47:25.346571766Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:25.347821 containerd[1580]: time="2025-07-10T05:47:25.347783976Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 05:47:25.349076 containerd[1580]: time="2025-07-10T05:47:25.349045048Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:25.350308 containerd[1580]: time="2025-07-10T05:47:25.350259683Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.482267322s" Jul 10 05:47:25.350308 containerd[1580]: time="2025-07-10T05:47:25.350293387Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 05:47:25.351594 containerd[1580]: time="2025-07-10T05:47:25.351544701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 05:47:25.352447 containerd[1580]: time="2025-07-10T05:47:25.352416054Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 05:47:25.362445 containerd[1580]: time="2025-07-10T05:47:25.362394497Z" level=info msg="Container d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:25.366013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051691329.mount: Deactivated successfully. Jul 10 05:47:25.368848 containerd[1580]: time="2025-07-10T05:47:25.368811171Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\"" Jul 10 05:47:25.369232 containerd[1580]: time="2025-07-10T05:47:25.369202714Z" level=info msg="StartContainer for \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\"" Jul 10 05:47:25.370190 containerd[1580]: time="2025-07-10T05:47:25.370143999Z" level=info msg="connecting to shim d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a" address="unix:///run/containerd/s/8a2aede03f4609989772eaa720ceded18783ee882cb411d97a34cdde2eb09377" protocol=ttrpc version=3 Jul 10 05:47:25.421498 systemd[1]: Started cri-containerd-d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a.scope - libcontainer container d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a. 
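The cilium image pull that just completed reports both the byte count (166730503 bytes read) and the wall time (7.482267322 s), so an effective throughput figure falls straight out of the entry; a one-off sketch using those two values:

    #!/usr/bin/env python3
    # Effective throughput for the cilium image pull, from the byte count and duration
    # reported in the containerd entries above.
    bytes_read   = 166_730_503      # "bytes read" from the stop-pulling entry
    pull_seconds = 7.482_267_322    # duration from the "Pulled image ... in ..." entry

    mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
    print(f"~{mib_per_s:.1f} MiB/s")   # roughly 21 MiB/s for this pull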
Jul 10 05:47:25.454976 containerd[1580]: time="2025-07-10T05:47:25.454925966Z" level=info msg="StartContainer for \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" returns successfully" Jul 10 05:47:25.466625 systemd[1]: cri-containerd-d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a.scope: Deactivated successfully. Jul 10 05:47:25.467203 containerd[1580]: time="2025-07-10T05:47:25.467133538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" id:\"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" pid:3129 exited_at:{seconds:1752126445 nanos:466607951}" Jul 10 05:47:25.467203 containerd[1580]: time="2025-07-10T05:47:25.467162763Z" level=info msg="received exit event container_id:\"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" id:\"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" pid:3129 exited_at:{seconds:1752126445 nanos:466607951}" Jul 10 05:47:25.488638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a-rootfs.mount: Deactivated successfully. Jul 10 05:47:25.663127 kubelet[2708]: E0710 05:47:25.662978 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:25.959872 kubelet[2708]: E0710 05:47:25.959744 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:26.065341 kubelet[2708]: E0710 05:47:26.065305 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:26.065341 kubelet[2708]: E0710 05:47:26.065329 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:26.068265 containerd[1580]: time="2025-07-10T05:47:26.068213237Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 05:47:26.083386 containerd[1580]: time="2025-07-10T05:47:26.081617931Z" level=info msg="Container 642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:26.092653 containerd[1580]: time="2025-07-10T05:47:26.092604631Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\"" Jul 10 05:47:26.096639 containerd[1580]: time="2025-07-10T05:47:26.096603042Z" level=info msg="StartContainer for \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\"" Jul 10 05:47:26.097703 containerd[1580]: time="2025-07-10T05:47:26.097676216Z" level=info msg="connecting to shim 642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa" address="unix:///run/containerd/s/8a2aede03f4609989772eaa720ceded18783ee882cb411d97a34cdde2eb09377" protocol=ttrpc version=3 Jul 10 05:47:26.127498 systemd[1]: Started 
cri-containerd-642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa.scope - libcontainer container 642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa. Jul 10 05:47:26.158927 containerd[1580]: time="2025-07-10T05:47:26.158885185Z" level=info msg="StartContainer for \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" returns successfully" Jul 10 05:47:26.175155 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 05:47:26.175657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:47:26.176002 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 05:47:26.177616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 05:47:26.178788 systemd[1]: cri-containerd-642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa.scope: Deactivated successfully. Jul 10 05:47:26.181264 containerd[1580]: time="2025-07-10T05:47:26.181200583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" id:\"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" pid:3175 exited_at:{seconds:1752126446 nanos:180761601}" Jul 10 05:47:26.181483 containerd[1580]: time="2025-07-10T05:47:26.181245118Z" level=info msg="received exit event container_id:\"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" id:\"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" pid:3175 exited_at:{seconds:1752126446 nanos:180761601}" Jul 10 05:47:26.209841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 05:47:26.364531 update_engine[1567]: I20250710 05:47:26.364444 1567 update_attempter.cc:509] Updating boot flags... Jul 10 05:47:26.826868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008382102.mount: Deactivated successfully. Jul 10 05:47:26.896951 kubelet[2708]: E0710 05:47:26.896912 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:27.069886 kubelet[2708]: E0710 05:47:27.069840 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:27.072518 containerd[1580]: time="2025-07-10T05:47:27.072469176Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 05:47:27.087769 containerd[1580]: time="2025-07-10T05:47:27.086374532Z" level=info msg="Container b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:27.091629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741734535.mount: Deactivated successfully. 
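containerd reports container exits with raw epoch seconds plus nanoseconds; converting the exited_at pair from the apply-sysctl-overwrites TaskExit event above back to wall-clock UTC lines it up with the surrounding Jul 10 05:47:26 journal entries:

    #!/usr/bin/env python3
    # Convert a containerd TaskExit "exited_at" {seconds, nanos} pair to a UTC timestamp,
    # using the values from the exit event above.
    from datetime import datetime, timezone

    seconds, nanos = 1752126446, 180761601
    exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    print(exited_at.isoformat())   # 2025-07-10T05:47:26.180... UTC, matching the nearby entries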
Jul 10 05:47:27.101081 containerd[1580]: time="2025-07-10T05:47:27.101031431Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\"" Jul 10 05:47:27.103299 containerd[1580]: time="2025-07-10T05:47:27.103257238Z" level=info msg="StartContainer for \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\"" Jul 10 05:47:27.104759 containerd[1580]: time="2025-07-10T05:47:27.104727283Z" level=info msg="connecting to shim b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da" address="unix:///run/containerd/s/8a2aede03f4609989772eaa720ceded18783ee882cb411d97a34cdde2eb09377" protocol=ttrpc version=3 Jul 10 05:47:27.125572 systemd[1]: Started cri-containerd-b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da.scope - libcontainer container b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da. Jul 10 05:47:27.179505 systemd[1]: cri-containerd-b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da.scope: Deactivated successfully. Jul 10 05:47:27.181688 containerd[1580]: time="2025-07-10T05:47:27.181645857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" id:\"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" pid:3252 exited_at:{seconds:1752126447 nanos:181219408}" Jul 10 05:47:27.181849 containerd[1580]: time="2025-07-10T05:47:27.181694659Z" level=info msg="received exit event container_id:\"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" id:\"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" pid:3252 exited_at:{seconds:1752126447 nanos:181219408}" Jul 10 05:47:27.184025 containerd[1580]: time="2025-07-10T05:47:27.183957235Z" level=info msg="StartContainer for \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" returns successfully" Jul 10 05:47:27.425071 containerd[1580]: time="2025-07-10T05:47:27.424908323Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:27.426905 containerd[1580]: time="2025-07-10T05:47:27.426859970Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 05:47:27.428447 containerd[1580]: time="2025-07-10T05:47:27.428411730Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 05:47:27.429641 containerd[1580]: time="2025-07-10T05:47:27.429589061Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.078002751s" Jul 10 05:47:27.429641 containerd[1580]: time="2025-07-10T05:47:27.429621863Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 05:47:27.433099 containerd[1580]: time="2025-07-10T05:47:27.433040881Z" level=info msg="CreateContainer within sandbox \"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 05:47:27.444853 containerd[1580]: time="2025-07-10T05:47:27.444793608Z" level=info msg="Container 8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:27.454104 containerd[1580]: time="2025-07-10T05:47:27.454062389Z" level=info msg="CreateContainer within sandbox \"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\"" Jul 10 05:47:27.454553 containerd[1580]: time="2025-07-10T05:47:27.454512913Z" level=info msg="StartContainer for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\"" Jul 10 05:47:27.455434 containerd[1580]: time="2025-07-10T05:47:27.455403640Z" level=info msg="connecting to shim 8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea" address="unix:///run/containerd/s/549a77ed9f7a3e95a1e3eea329c14c2defbfd5ee49c9c3b8761bc5c30e6e6066" protocol=ttrpc version=3 Jul 10 05:47:27.478577 systemd[1]: Started cri-containerd-8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea.scope - libcontainer container 8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea. Jul 10 05:47:27.514598 containerd[1580]: time="2025-07-10T05:47:27.514538801Z" level=info msg="StartContainer for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" returns successfully" Jul 10 05:47:28.073731 kubelet[2708]: E0710 05:47:28.073617 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:28.078915 kubelet[2708]: E0710 05:47:28.078864 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:28.080853 containerd[1580]: time="2025-07-10T05:47:28.080792955Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 05:47:28.096865 containerd[1580]: time="2025-07-10T05:47:28.096812968Z" level=info msg="Container b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:28.113234 containerd[1580]: time="2025-07-10T05:47:28.112029460Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\"" Jul 10 05:47:28.117057 containerd[1580]: time="2025-07-10T05:47:28.117008557Z" level=info msg="StartContainer for \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\"" Jul 10 05:47:28.117587 kubelet[2708]: I0710 05:47:28.117349 2708 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vk8f4" podStartSLOduration=1.6305036849999999 podStartE2EDuration="11.117317372s" podCreationTimestamp="2025-07-10 05:47:17 +0000 UTC" firstStartedPulling="2025-07-10 05:47:17.94371274 +0000 UTC m=+6.040424910" lastFinishedPulling="2025-07-10 05:47:27.430526427 +0000 UTC m=+15.527238597" observedRunningTime="2025-07-10 05:47:28.090982769 +0000 UTC m=+16.187694939" watchObservedRunningTime="2025-07-10 05:47:28.117317372 +0000 UTC m=+16.214029542" Jul 10 05:47:28.118084 containerd[1580]: time="2025-07-10T05:47:28.118041804Z" level=info msg="connecting to shim b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5" address="unix:///run/containerd/s/8a2aede03f4609989772eaa720ceded18783ee882cb411d97a34cdde2eb09377" protocol=ttrpc version=3 Jul 10 05:47:28.157530 systemd[1]: Started cri-containerd-b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5.scope - libcontainer container b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5. Jul 10 05:47:28.193629 systemd[1]: cri-containerd-b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5.scope: Deactivated successfully. Jul 10 05:47:28.194260 containerd[1580]: time="2025-07-10T05:47:28.194202610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" id:\"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" pid:3330 exited_at:{seconds:1752126448 nanos:193813353}" Jul 10 05:47:28.201629 containerd[1580]: time="2025-07-10T05:47:28.201567344Z" level=info msg="received exit event container_id:\"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" id:\"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" pid:3330 exited_at:{seconds:1752126448 nanos:193813353}" Jul 10 05:47:28.205135 containerd[1580]: time="2025-07-10T05:47:28.204729742Z" level=info msg="StartContainer for \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" returns successfully" Jul 10 05:47:29.084263 kubelet[2708]: E0710 05:47:29.084201 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:29.084263 kubelet[2708]: E0710 05:47:29.084221 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:29.086591 containerd[1580]: time="2025-07-10T05:47:29.086551431Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 05:47:29.101973 containerd[1580]: time="2025-07-10T05:47:29.101835173Z" level=info msg="Container 2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:29.111693 containerd[1580]: time="2025-07-10T05:47:29.111638445Z" level=info msg="CreateContainer within sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\"" Jul 10 05:47:29.112272 containerd[1580]: time="2025-07-10T05:47:29.112235845Z" level=info msg="StartContainer for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\"" Jul 10 05:47:29.113433 
containerd[1580]: time="2025-07-10T05:47:29.113402874Z" level=info msg="connecting to shim 2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d" address="unix:///run/containerd/s/8a2aede03f4609989772eaa720ceded18783ee882cb411d97a34cdde2eb09377" protocol=ttrpc version=3 Jul 10 05:47:29.149537 systemd[1]: Started cri-containerd-2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d.scope - libcontainer container 2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d. Jul 10 05:47:29.188726 containerd[1580]: time="2025-07-10T05:47:29.188650865Z" level=info msg="StartContainer for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" returns successfully" Jul 10 05:47:29.260109 containerd[1580]: time="2025-07-10T05:47:29.259985422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" id:\"c4fe3babf40733676eca565d408870f9c3d2aafaa7ce18ed22fb04c880267c69\" pid:3401 exited_at:{seconds:1752126449 nanos:259655718}" Jul 10 05:47:29.331873 kubelet[2708]: I0710 05:47:29.331756 2708 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 05:47:29.370191 systemd[1]: Created slice kubepods-burstable-poda5bf12a7_937d_48d3_b3ae_c164831c8ca8.slice - libcontainer container kubepods-burstable-poda5bf12a7_937d_48d3_b3ae_c164831c8ca8.slice. Jul 10 05:47:29.383902 systemd[1]: Created slice kubepods-burstable-pod9975fbbb_ea9a_4b31_85ec_1b0a301e3edb.slice - libcontainer container kubepods-burstable-pod9975fbbb_ea9a_4b31_85ec_1b0a301e3edb.slice. Jul 10 05:47:29.417280 kubelet[2708]: I0710 05:47:29.417209 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fnk\" (UniqueName: \"kubernetes.io/projected/a5bf12a7-937d-48d3-b3ae-c164831c8ca8-kube-api-access-x8fnk\") pod \"coredns-668d6bf9bc-vgj8m\" (UID: \"a5bf12a7-937d-48d3-b3ae-c164831c8ca8\") " pod="kube-system/coredns-668d6bf9bc-vgj8m" Jul 10 05:47:29.417280 kubelet[2708]: I0710 05:47:29.417268 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bf12a7-937d-48d3-b3ae-c164831c8ca8-config-volume\") pod \"coredns-668d6bf9bc-vgj8m\" (UID: \"a5bf12a7-937d-48d3-b3ae-c164831c8ca8\") " pod="kube-system/coredns-668d6bf9bc-vgj8m" Jul 10 05:47:29.417280 kubelet[2708]: I0710 05:47:29.417293 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqbxn\" (UniqueName: \"kubernetes.io/projected/9975fbbb-ea9a-4b31-85ec-1b0a301e3edb-kube-api-access-nqbxn\") pod \"coredns-668d6bf9bc-zh2ks\" (UID: \"9975fbbb-ea9a-4b31-85ec-1b0a301e3edb\") " pod="kube-system/coredns-668d6bf9bc-zh2ks" Jul 10 05:47:29.417604 kubelet[2708]: I0710 05:47:29.417318 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9975fbbb-ea9a-4b31-85ec-1b0a301e3edb-config-volume\") pod \"coredns-668d6bf9bc-zh2ks\" (UID: \"9975fbbb-ea9a-4b31-85ec-1b0a301e3edb\") " pod="kube-system/coredns-668d6bf9bc-zh2ks" Jul 10 05:47:29.679444 kubelet[2708]: E0710 05:47:29.679323 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:29.680266 containerd[1580]: time="2025-07-10T05:47:29.680224896Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vgj8m,Uid:a5bf12a7-937d-48d3-b3ae-c164831c8ca8,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:29.690384 kubelet[2708]: E0710 05:47:29.690333 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:29.691667 containerd[1580]: time="2025-07-10T05:47:29.691396215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zh2ks,Uid:9975fbbb-ea9a-4b31-85ec-1b0a301e3edb,Namespace:kube-system,Attempt:0,}" Jul 10 05:47:30.135130 kubelet[2708]: E0710 05:47:30.135095 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:31.136404 kubelet[2708]: E0710 05:47:31.136350 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:31.405431 systemd-networkd[1491]: cilium_host: Link UP Jul 10 05:47:31.405607 systemd-networkd[1491]: cilium_net: Link UP Jul 10 05:47:31.406227 systemd-networkd[1491]: cilium_net: Gained carrier Jul 10 05:47:31.406450 systemd-networkd[1491]: cilium_host: Gained carrier Jul 10 05:47:31.511622 systemd-networkd[1491]: cilium_vxlan: Link UP Jul 10 05:47:31.511636 systemd-networkd[1491]: cilium_vxlan: Gained carrier Jul 10 05:47:31.612637 systemd-networkd[1491]: cilium_net: Gained IPv6LL Jul 10 05:47:31.720393 kernel: NET: Registered PF_ALG protocol family Jul 10 05:47:32.137996 kubelet[2708]: E0710 05:47:32.137962 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:32.367963 systemd-networkd[1491]: lxc_health: Link UP Jul 10 05:47:32.369275 systemd-networkd[1491]: lxc_health: Gained carrier Jul 10 05:47:32.436554 systemd-networkd[1491]: cilium_host: Gained IPv6LL Jul 10 05:47:32.716933 kernel: eth0: renamed from tmp78729 Jul 10 05:47:32.716449 systemd-networkd[1491]: lxc8270eaff32a8: Link UP Jul 10 05:47:32.716761 systemd-networkd[1491]: lxc8270eaff32a8: Gained carrier Jul 10 05:47:32.732122 systemd-networkd[1491]: lxc8d76e4cadf49: Link UP Jul 10 05:47:32.742435 kernel: eth0: renamed from tmp04b16 Jul 10 05:47:32.742403 systemd-networkd[1491]: lxc8d76e4cadf49: Gained carrier Jul 10 05:47:32.948593 systemd-networkd[1491]: cilium_vxlan: Gained IPv6LL Jul 10 05:47:33.786133 kubelet[2708]: E0710 05:47:33.786093 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:33.799939 kubelet[2708]: I0710 05:47:33.799822 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4mhtt" podStartSLOduration=9.316407279 podStartE2EDuration="16.799792939s" podCreationTimestamp="2025-07-10 05:47:17 +0000 UTC" firstStartedPulling="2025-07-10 05:47:17.867577048 +0000 UTC m=+5.964289218" lastFinishedPulling="2025-07-10 05:47:25.350962707 +0000 UTC m=+13.447674878" observedRunningTime="2025-07-10 05:47:30.150130565 +0000 UTC m=+18.246842745" watchObservedRunningTime="2025-07-10 05:47:33.799792939 +0000 UTC m=+21.896505119" Jul 10 05:47:33.844524 systemd-networkd[1491]: lxc_health: Gained IPv6LL Jul 10 05:47:34.100553 
systemd-networkd[1491]: lxc8270eaff32a8: Gained IPv6LL Jul 10 05:47:34.140828 kubelet[2708]: E0710 05:47:34.140797 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:34.804535 systemd-networkd[1491]: lxc8d76e4cadf49: Gained IPv6LL Jul 10 05:47:35.142395 kubelet[2708]: E0710 05:47:35.142248 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:36.046393 containerd[1580]: time="2025-07-10T05:47:36.046146012Z" level=info msg="connecting to shim 78729d458a4b134665876af523852a19c05877880cc6dd1422adaac1b947ef30" address="unix:///run/containerd/s/cc859045ede99f950e33a9445f07802e0a3e4d881d949aed8adee9c4369deb03" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:36.049118 containerd[1580]: time="2025-07-10T05:47:36.049093802Z" level=info msg="connecting to shim 04b16af6357c79f5f6db73da5f881b42734106141acdf64dae7ea57847f72356" address="unix:///run/containerd/s/42b431d67332f163a1f9db72e13d776ae5573aab6a01b9c7b2e1a087908ac75d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:47:36.080493 systemd[1]: Started cri-containerd-04b16af6357c79f5f6db73da5f881b42734106141acdf64dae7ea57847f72356.scope - libcontainer container 04b16af6357c79f5f6db73da5f881b42734106141acdf64dae7ea57847f72356. Jul 10 05:47:36.081993 systemd[1]: Started cri-containerd-78729d458a4b134665876af523852a19c05877880cc6dd1422adaac1b947ef30.scope - libcontainer container 78729d458a4b134665876af523852a19c05877880cc6dd1422adaac1b947ef30. Jul 10 05:47:36.093927 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:47:36.095721 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 05:47:36.129997 containerd[1580]: time="2025-07-10T05:47:36.129951461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zh2ks,Uid:9975fbbb-ea9a-4b31-85ec-1b0a301e3edb,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b16af6357c79f5f6db73da5f881b42734106141acdf64dae7ea57847f72356\"" Jul 10 05:47:36.130818 containerd[1580]: time="2025-07-10T05:47:36.130785274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vgj8m,Uid:a5bf12a7-937d-48d3-b3ae-c164831c8ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"78729d458a4b134665876af523852a19c05877880cc6dd1422adaac1b947ef30\"" Jul 10 05:47:36.133140 kubelet[2708]: E0710 05:47:36.133099 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:36.133593 kubelet[2708]: E0710 05:47:36.133567 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:36.135495 containerd[1580]: time="2025-07-10T05:47:36.135466363Z" level=info msg="CreateContainer within sandbox \"78729d458a4b134665876af523852a19c05877880cc6dd1422adaac1b947ef30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 05:47:36.135960 containerd[1580]: time="2025-07-10T05:47:36.135912645Z" level=info msg="CreateContainer within sandbox 
\"04b16af6357c79f5f6db73da5f881b42734106141acdf64dae7ea57847f72356\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 05:47:36.146309 containerd[1580]: time="2025-07-10T05:47:36.146206049Z" level=info msg="Container b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:36.154404 containerd[1580]: time="2025-07-10T05:47:36.154352525Z" level=info msg="Container d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:47:36.156867 containerd[1580]: time="2025-07-10T05:47:36.156834777Z" level=info msg="CreateContainer within sandbox \"78729d458a4b134665876af523852a19c05877880cc6dd1422adaac1b947ef30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993\"" Jul 10 05:47:36.157245 containerd[1580]: time="2025-07-10T05:47:36.157209874Z" level=info msg="StartContainer for \"b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993\"" Jul 10 05:47:36.157983 containerd[1580]: time="2025-07-10T05:47:36.157961882Z" level=info msg="connecting to shim b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993" address="unix:///run/containerd/s/cc859045ede99f950e33a9445f07802e0a3e4d881d949aed8adee9c4369deb03" protocol=ttrpc version=3 Jul 10 05:47:36.161087 containerd[1580]: time="2025-07-10T05:47:36.161042864Z" level=info msg="CreateContainer within sandbox \"04b16af6357c79f5f6db73da5f881b42734106141acdf64dae7ea57847f72356\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd\"" Jul 10 05:47:36.162138 containerd[1580]: time="2025-07-10T05:47:36.161569678Z" level=info msg="StartContainer for \"d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd\"" Jul 10 05:47:36.173050 containerd[1580]: time="2025-07-10T05:47:36.173013733Z" level=info msg="connecting to shim d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd" address="unix:///run/containerd/s/42b431d67332f163a1f9db72e13d776ae5573aab6a01b9c7b2e1a087908ac75d" protocol=ttrpc version=3 Jul 10 05:47:36.178507 systemd[1]: Started cri-containerd-b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993.scope - libcontainer container b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993. Jul 10 05:47:36.194481 systemd[1]: Started cri-containerd-d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd.scope - libcontainer container d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd. 
Jul 10 05:47:36.220178 containerd[1580]: time="2025-07-10T05:47:36.220083253Z" level=info msg="StartContainer for \"b713b4eac8893a4805fb51f816c03b7178c08d40267bc35b2ac1df57f4c58993\" returns successfully" Jul 10 05:47:36.231049 containerd[1580]: time="2025-07-10T05:47:36.230942344Z" level=info msg="StartContainer for \"d4af87cfa0892f65fc5afbb23ab6f0094cbf2989b904dd44ef501ce7cfac2cbd\" returns successfully" Jul 10 05:47:37.152881 kubelet[2708]: E0710 05:47:37.152776 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:37.155181 kubelet[2708]: E0710 05:47:37.155157 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:37.166064 kubelet[2708]: I0710 05:47:37.165733 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vgj8m" podStartSLOduration=20.165712398 podStartE2EDuration="20.165712398s" podCreationTimestamp="2025-07-10 05:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:47:37.16557029 +0000 UTC m=+25.262282460" watchObservedRunningTime="2025-07-10 05:47:37.165712398 +0000 UTC m=+25.262424568" Jul 10 05:47:37.186600 kubelet[2708]: I0710 05:47:37.186529 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zh2ks" podStartSLOduration=20.186502867 podStartE2EDuration="20.186502867s" podCreationTimestamp="2025-07-10 05:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:47:37.184381278 +0000 UTC m=+25.281093448" watchObservedRunningTime="2025-07-10 05:47:37.186502867 +0000 UTC m=+25.283215027" Jul 10 05:47:38.156404 kubelet[2708]: E0710 05:47:38.156347 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:38.156820 kubelet[2708]: E0710 05:47:38.156630 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:39.157896 kubelet[2708]: E0710 05:47:39.157856 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:39.157896 kubelet[2708]: E0710 05:47:39.157891 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:47:40.659341 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:49362.service - OpenSSH per-connection server daemon (10.0.0.1:49362). Jul 10 05:47:40.719020 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 49362 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:47:40.720553 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:47:40.725263 systemd-logind[1554]: New session 8 of user core. Jul 10 05:47:40.735484 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 05:47:40.865665 sshd[4051]: Connection closed by 10.0.0.1 port 49362 Jul 10 05:47:40.865981 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Jul 10 05:47:40.870746 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:49362.service: Deactivated successfully. Jul 10 05:47:40.872832 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 05:47:40.873821 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit. Jul 10 05:47:40.875153 systemd-logind[1554]: Removed session 8. Jul 10 05:47:45.881920 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:49366.service - OpenSSH per-connection server daemon (10.0.0.1:49366). Jul 10 05:47:45.940089 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 49366 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:47:45.941751 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:47:45.946051 systemd-logind[1554]: New session 9 of user core. Jul 10 05:47:45.963819 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 05:47:46.080741 sshd[4070]: Connection closed by 10.0.0.1 port 49366 Jul 10 05:47:46.081085 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Jul 10 05:47:46.085531 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:49366.service: Deactivated successfully. Jul 10 05:47:46.087594 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 05:47:46.088305 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. Jul 10 05:47:46.089352 systemd-logind[1554]: Removed session 9. Jul 10 05:47:51.104553 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:57936.service - OpenSSH per-connection server daemon (10.0.0.1:57936). Jul 10 05:47:51.162106 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 57936 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:47:51.163985 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:47:51.168864 systemd-logind[1554]: New session 10 of user core. Jul 10 05:47:51.179506 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 05:47:51.290384 sshd[4090]: Connection closed by 10.0.0.1 port 57936 Jul 10 05:47:51.290752 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Jul 10 05:47:51.295482 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:57936.service: Deactivated successfully. Jul 10 05:47:51.297693 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 05:47:51.298640 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit. Jul 10 05:47:51.300177 systemd-logind[1554]: Removed session 10. Jul 10 05:47:56.309250 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:57946.service - OpenSSH per-connection server daemon (10.0.0.1:57946). Jul 10 05:47:56.374210 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 57946 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:47:56.376035 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:47:56.380331 systemd-logind[1554]: New session 11 of user core. Jul 10 05:47:56.394502 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 10 05:47:56.506883 sshd[4109]: Connection closed by 10.0.0.1 port 57946 Jul 10 05:47:56.507275 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Jul 10 05:47:56.512086 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:57946.service: Deactivated successfully. Jul 10 05:47:56.514019 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 05:47:56.514890 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit. Jul 10 05:47:56.516053 systemd-logind[1554]: Removed session 11. Jul 10 05:48:01.526449 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080). Jul 10 05:48:01.588938 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:01.590258 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:01.594601 systemd-logind[1554]: New session 12 of user core. Jul 10 05:48:01.603501 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 05:48:01.712565 sshd[4126]: Connection closed by 10.0.0.1 port 45080 Jul 10 05:48:01.712903 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:01.725027 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:45080.service: Deactivated successfully. Jul 10 05:48:01.727026 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 05:48:01.727977 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit. Jul 10 05:48:01.730683 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:45092.service - OpenSSH per-connection server daemon (10.0.0.1:45092). Jul 10 05:48:01.731560 systemd-logind[1554]: Removed session 12. Jul 10 05:48:01.789392 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 45092 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:01.790623 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:01.795104 systemd-logind[1554]: New session 13 of user core. Jul 10 05:48:01.809495 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 05:48:01.956864 sshd[4144]: Connection closed by 10.0.0.1 port 45092 Jul 10 05:48:01.957708 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:01.971146 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:45092.service: Deactivated successfully. Jul 10 05:48:01.973065 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 05:48:01.980567 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit. Jul 10 05:48:01.992069 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:45098.service - OpenSSH per-connection server daemon (10.0.0.1:45098). Jul 10 05:48:01.994655 systemd-logind[1554]: Removed session 13. Jul 10 05:48:02.047766 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 45098 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:02.049120 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:02.053867 systemd-logind[1554]: New session 14 of user core. Jul 10 05:48:02.063496 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 10 05:48:02.173241 sshd[4158]: Connection closed by 10.0.0.1 port 45098 Jul 10 05:48:02.173627 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:02.177713 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:45098.service: Deactivated successfully. Jul 10 05:48:02.179724 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 05:48:02.181724 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. Jul 10 05:48:02.182751 systemd-logind[1554]: Removed session 14. Jul 10 05:48:07.187210 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:45114.service - OpenSSH per-connection server daemon (10.0.0.1:45114). Jul 10 05:48:07.240892 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 45114 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:07.242487 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:07.246713 systemd-logind[1554]: New session 15 of user core. Jul 10 05:48:07.256505 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 05:48:07.361787 sshd[4175]: Connection closed by 10.0.0.1 port 45114 Jul 10 05:48:07.362157 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:07.366176 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:45114.service: Deactivated successfully. Jul 10 05:48:07.368290 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 05:48:07.369181 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. Jul 10 05:48:07.370483 systemd-logind[1554]: Removed session 15. Jul 10 05:48:12.378240 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:58876.service - OpenSSH per-connection server daemon (10.0.0.1:58876). Jul 10 05:48:12.429966 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 58876 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:12.431614 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:12.436028 systemd-logind[1554]: New session 16 of user core. Jul 10 05:48:12.446503 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 05:48:12.555472 sshd[4193]: Connection closed by 10.0.0.1 port 58876 Jul 10 05:48:12.555847 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:12.567327 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:58876.service: Deactivated successfully. Jul 10 05:48:12.569469 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 05:48:12.570252 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. Jul 10 05:48:12.573174 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:58880.service - OpenSSH per-connection server daemon (10.0.0.1:58880). Jul 10 05:48:12.573803 systemd-logind[1554]: Removed session 16. Jul 10 05:48:12.631790 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 58880 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:12.633197 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:12.637344 systemd-logind[1554]: New session 17 of user core. Jul 10 05:48:12.646491 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 10 05:48:12.893804 sshd[4209]: Connection closed by 10.0.0.1 port 58880 Jul 10 05:48:12.894402 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:12.905471 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:58880.service: Deactivated successfully. Jul 10 05:48:12.907673 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 05:48:12.908477 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit. Jul 10 05:48:12.911210 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:58890.service - OpenSSH per-connection server daemon (10.0.0.1:58890). Jul 10 05:48:12.912211 systemd-logind[1554]: Removed session 17. Jul 10 05:48:12.969669 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 58890 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:12.971172 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:12.975826 systemd-logind[1554]: New session 18 of user core. Jul 10 05:48:12.990481 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 05:48:13.740050 sshd[4224]: Connection closed by 10.0.0.1 port 58890 Jul 10 05:48:13.740409 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:13.755788 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:58890.service: Deactivated successfully. Jul 10 05:48:13.758875 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 05:48:13.760483 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. Jul 10 05:48:13.762744 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:58900.service - OpenSSH per-connection server daemon (10.0.0.1:58900). Jul 10 05:48:13.763750 systemd-logind[1554]: Removed session 18. Jul 10 05:48:13.809503 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 58900 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:13.810889 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:13.815240 systemd-logind[1554]: New session 19 of user core. Jul 10 05:48:13.827509 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 05:48:14.026869 sshd[4246]: Connection closed by 10.0.0.1 port 58900 Jul 10 05:48:14.028605 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:14.040280 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:58900.service: Deactivated successfully. Jul 10 05:48:14.042394 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 05:48:14.043285 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. Jul 10 05:48:14.045674 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:58914.service - OpenSSH per-connection server daemon (10.0.0.1:58914). Jul 10 05:48:14.046293 systemd-logind[1554]: Removed session 19. Jul 10 05:48:14.104104 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 58914 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:14.105548 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:14.111039 systemd-logind[1554]: New session 20 of user core. Jul 10 05:48:14.118539 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 05:48:14.225704 sshd[4260]: Connection closed by 10.0.0.1 port 58914 Jul 10 05:48:14.226069 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:14.229923 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:58914.service: Deactivated successfully. Jul 10 05:48:14.231795 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 05:48:14.232544 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. Jul 10 05:48:14.233670 systemd-logind[1554]: Removed session 20. Jul 10 05:48:19.242545 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:58920.service - OpenSSH per-connection server daemon (10.0.0.1:58920). Jul 10 05:48:19.289033 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 58920 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:19.290765 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:19.295500 systemd-logind[1554]: New session 21 of user core. Jul 10 05:48:19.305485 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 05:48:19.423504 sshd[4280]: Connection closed by 10.0.0.1 port 58920 Jul 10 05:48:19.423903 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:19.429079 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:58920.service: Deactivated successfully. Jul 10 05:48:19.431127 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 05:48:19.432120 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. Jul 10 05:48:19.433491 systemd-logind[1554]: Removed session 21. Jul 10 05:48:24.436271 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:53418.service - OpenSSH per-connection server daemon (10.0.0.1:53418). Jul 10 05:48:24.488837 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 53418 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:24.490536 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:24.494801 systemd-logind[1554]: New session 22 of user core. Jul 10 05:48:24.505486 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 05:48:24.617284 sshd[4296]: Connection closed by 10.0.0.1 port 53418 Jul 10 05:48:24.617668 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:24.622695 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:53418.service: Deactivated successfully. Jul 10 05:48:24.624773 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 05:48:24.625539 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. Jul 10 05:48:24.626708 systemd-logind[1554]: Removed session 22. Jul 10 05:48:29.630312 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:46374.service - OpenSSH per-connection server daemon (10.0.0.1:46374). Jul 10 05:48:29.681572 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 46374 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:29.683208 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:29.687504 systemd-logind[1554]: New session 23 of user core. Jul 10 05:48:29.695487 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 10 05:48:29.807877 sshd[4313]: Connection closed by 10.0.0.1 port 46374 Jul 10 05:48:29.808212 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:29.813151 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:46374.service: Deactivated successfully. Jul 10 05:48:29.815558 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 05:48:29.816493 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit. Jul 10 05:48:29.817854 systemd-logind[1554]: Removed session 23. Jul 10 05:48:32.034620 kubelet[2708]: E0710 05:48:32.034570 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:34.030086 kubelet[2708]: E0710 05:48:34.030041 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:34.030589 kubelet[2708]: E0710 05:48:34.030204 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:34.820790 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:46380.service - OpenSSH per-connection server daemon (10.0.0.1:46380). Jul 10 05:48:34.877527 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 46380 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:34.878813 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:34.883118 systemd-logind[1554]: New session 24 of user core. Jul 10 05:48:34.889487 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 05:48:34.996051 sshd[4329]: Connection closed by 10.0.0.1 port 46380 Jul 10 05:48:34.996443 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:35.007182 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:46380.service: Deactivated successfully. Jul 10 05:48:35.009078 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 05:48:35.009921 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit. Jul 10 05:48:35.012737 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:46392.service - OpenSSH per-connection server daemon (10.0.0.1:46392). Jul 10 05:48:35.013421 systemd-logind[1554]: Removed session 24. Jul 10 05:48:35.069525 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 46392 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:35.071330 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:35.075474 systemd-logind[1554]: New session 25 of user core. Jul 10 05:48:35.085479 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 05:48:36.443551 containerd[1580]: time="2025-07-10T05:48:36.443499341Z" level=info msg="StopContainer for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" with timeout 30 (s)" Jul 10 05:48:36.450696 containerd[1580]: time="2025-07-10T05:48:36.450661751Z" level=info msg="Stop container \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" with signal terminated" Jul 10 05:48:36.462056 systemd[1]: cri-containerd-8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea.scope: Deactivated successfully. 
Jul 10 05:48:36.466583 containerd[1580]: time="2025-07-10T05:48:36.466461629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" id:\"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" pid:3297 exited_at:{seconds:1752126516 nanos:465459225}" Jul 10 05:48:36.467005 containerd[1580]: time="2025-07-10T05:48:36.466288899Z" level=info msg="received exit event container_id:\"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" id:\"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" pid:3297 exited_at:{seconds:1752126516 nanos:465459225}" Jul 10 05:48:36.476518 containerd[1580]: time="2025-07-10T05:48:36.476452118Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 05:48:36.476817 containerd[1580]: time="2025-07-10T05:48:36.476658992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" id:\"bd2b78482bdecdbaceb25c2ee24d2f603090fefd406edb3a124ba2d59fa72374\" pid:4366 exited_at:{seconds:1752126516 nanos:475533404}" Jul 10 05:48:36.478930 containerd[1580]: time="2025-07-10T05:48:36.478906472Z" level=info msg="StopContainer for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" with timeout 2 (s)" Jul 10 05:48:36.479331 containerd[1580]: time="2025-07-10T05:48:36.479288781Z" level=info msg="Stop container \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" with signal terminated" Jul 10 05:48:36.487481 systemd-networkd[1491]: lxc_health: Link DOWN Jul 10 05:48:36.487491 systemd-networkd[1491]: lxc_health: Lost carrier Jul 10 05:48:36.494551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea-rootfs.mount: Deactivated successfully. Jul 10 05:48:36.508982 systemd[1]: cri-containerd-2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d.scope: Deactivated successfully. Jul 10 05:48:36.509395 systemd[1]: cri-containerd-2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d.scope: Consumed 6.464s CPU time, 126.5M memory peak, 236K read from disk, 13.3M written to disk. Jul 10 05:48:36.509728 containerd[1580]: time="2025-07-10T05:48:36.509693304Z" level=info msg="received exit event container_id:\"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" id:\"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" pid:3370 exited_at:{seconds:1752126516 nanos:509394343}" Jul 10 05:48:36.509921 containerd[1580]: time="2025-07-10T05:48:36.509877446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" id:\"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" pid:3370 exited_at:{seconds:1752126516 nanos:509394343}" Jul 10 05:48:36.532944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d-rootfs.mount: Deactivated successfully. 
Jul 10 05:48:36.648170 containerd[1580]: time="2025-07-10T05:48:36.648108856Z" level=info msg="StopContainer for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" returns successfully" Jul 10 05:48:36.649219 containerd[1580]: time="2025-07-10T05:48:36.649167085Z" level=info msg="StopContainer for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" returns successfully" Jul 10 05:48:36.661379 containerd[1580]: time="2025-07-10T05:48:36.661319961Z" level=info msg="StopPodSandbox for \"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\"" Jul 10 05:48:36.661439 containerd[1580]: time="2025-07-10T05:48:36.661415904Z" level=info msg="Container to stop \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 05:48:36.661696 containerd[1580]: time="2025-07-10T05:48:36.661648408Z" level=info msg="StopPodSandbox for \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\"" Jul 10 05:48:36.661809 containerd[1580]: time="2025-07-10T05:48:36.661754161Z" level=info msg="Container to stop \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 05:48:36.661809 containerd[1580]: time="2025-07-10T05:48:36.661774900Z" level=info msg="Container to stop \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 05:48:36.661809 containerd[1580]: time="2025-07-10T05:48:36.661785320Z" level=info msg="Container to stop \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 05:48:36.661809 containerd[1580]: time="2025-07-10T05:48:36.661794918Z" level=info msg="Container to stop \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 05:48:36.661809 containerd[1580]: time="2025-07-10T05:48:36.661804065Z" level=info msg="Container to stop \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 05:48:36.669113 systemd[1]: cri-containerd-a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df.scope: Deactivated successfully. Jul 10 05:48:36.670148 systemd[1]: cri-containerd-4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270.scope: Deactivated successfully. Jul 10 05:48:36.670415 containerd[1580]: time="2025-07-10T05:48:36.670332914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" pid:2867 exit_status:137 exited_at:{seconds:1752126516 nanos:670074160}" Jul 10 05:48:36.691904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270-rootfs.mount: Deactivated successfully. 
Jul 10 05:48:36.695300 containerd[1580]: time="2025-07-10T05:48:36.695209917Z" level=info msg="shim disconnected" id=4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270 namespace=k8s.io Jul 10 05:48:36.695300 containerd[1580]: time="2025-07-10T05:48:36.695240055Z" level=warning msg="cleaning up after shim disconnected" id=4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270 namespace=k8s.io Jul 10 05:48:36.695740 containerd[1580]: time="2025-07-10T05:48:36.695247969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 05:48:36.703158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df-rootfs.mount: Deactivated successfully. Jul 10 05:48:36.708683 containerd[1580]: time="2025-07-10T05:48:36.708629080Z" level=info msg="shim disconnected" id=a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df namespace=k8s.io Jul 10 05:48:36.708683 containerd[1580]: time="2025-07-10T05:48:36.708672042Z" level=warning msg="cleaning up after shim disconnected" id=a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df namespace=k8s.io Jul 10 05:48:36.708815 containerd[1580]: time="2025-07-10T05:48:36.708691689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 05:48:36.728343 containerd[1580]: time="2025-07-10T05:48:36.728291729Z" level=error msg="Failed to handle event container_id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" pid:2867 exit_status:137 exited_at:{seconds:1752126516 nanos:670074160} for a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Jul 10 05:48:36.728524 containerd[1580]: time="2025-07-10T05:48:36.728425896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" id:\"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" pid:2823 exit_status:137 exited_at:{seconds:1752126516 nanos:671512485}" Jul 10 05:48:36.730278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df-shm.mount: Deactivated successfully. Jul 10 05:48:36.730440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270-shm.mount: Deactivated successfully. 
Jul 10 05:48:36.737634 containerd[1580]: time="2025-07-10T05:48:36.737578966Z" level=info msg="received exit event sandbox_id:\"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" exit_status:137 exited_at:{seconds:1752126516 nanos:671512485}" Jul 10 05:48:36.737878 containerd[1580]: time="2025-07-10T05:48:36.737763839Z" level=info msg="received exit event sandbox_id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" exit_status:137 exited_at:{seconds:1752126516 nanos:670074160}" Jul 10 05:48:36.747419 containerd[1580]: time="2025-07-10T05:48:36.747348844Z" level=info msg="TearDown network for sandbox \"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" successfully" Jul 10 05:48:36.747419 containerd[1580]: time="2025-07-10T05:48:36.747403968Z" level=info msg="StopPodSandbox for \"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" returns successfully" Jul 10 05:48:36.751414 containerd[1580]: time="2025-07-10T05:48:36.751284666Z" level=info msg="TearDown network for sandbox \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" successfully" Jul 10 05:48:36.751414 containerd[1580]: time="2025-07-10T05:48:36.751311457Z" level=info msg="StopPodSandbox for \"4225ed1185b0ddefc6096b9e80fbd9059beea9bb8a12187ba5f1432539fe9270\" returns successfully" Jul 10 05:48:36.806983 kubelet[2708]: I0710 05:48:36.806920 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-kernel\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.806983 kubelet[2708]: I0710 05:48:36.806968 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3109a114-bf52-4057-9feb-a423c1a9b834-clustermesh-secrets\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.806983 kubelet[2708]: I0710 05:48:36.806990 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-kube-api-access-g98rk\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807564 kubelet[2708]: I0710 05:48:36.807008 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-etc-cni-netd\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807564 kubelet[2708]: I0710 05:48:36.807026 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-config-path\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807564 kubelet[2708]: I0710 05:48:36.807041 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7mh7\" (UniqueName: \"kubernetes.io/projected/9b893876-18df-4210-ac5a-888dfd8f36fc-kube-api-access-q7mh7\") pod \"9b893876-18df-4210-ac5a-888dfd8f36fc\" (UID: \"9b893876-18df-4210-ac5a-888dfd8f36fc\") " Jul 10 05:48:36.807564 kubelet[2708]: I0710 
05:48:36.807057 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-bpf-maps\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807564 kubelet[2708]: I0710 05:48:36.807060 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.807564 kubelet[2708]: I0710 05:48:36.807071 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-net\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807707 kubelet[2708]: I0710 05:48:36.807103 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.807707 kubelet[2708]: I0710 05:48:36.807129 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-run\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807707 kubelet[2708]: I0710 05:48:36.807154 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b893876-18df-4210-ac5a-888dfd8f36fc-cilium-config-path\") pod \"9b893876-18df-4210-ac5a-888dfd8f36fc\" (UID: \"9b893876-18df-4210-ac5a-888dfd8f36fc\") " Jul 10 05:48:36.807707 kubelet[2708]: I0710 05:48:36.807170 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cni-path\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807707 kubelet[2708]: I0710 05:48:36.807197 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-cgroup\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807707 kubelet[2708]: I0710 05:48:36.807215 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-hubble-tls\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807844 kubelet[2708]: I0710 05:48:36.807229 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-lib-modules\") pod 
\"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807844 kubelet[2708]: I0710 05:48:36.807245 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-xtables-lock\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807844 kubelet[2708]: I0710 05:48:36.807259 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-hostproc\") pod \"3109a114-bf52-4057-9feb-a423c1a9b834\" (UID: \"3109a114-bf52-4057-9feb-a423c1a9b834\") " Jul 10 05:48:36.807844 kubelet[2708]: I0710 05:48:36.807296 2708 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.807844 kubelet[2708]: I0710 05:48:36.807306 2708 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.807844 kubelet[2708]: I0710 05:48:36.807324 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-hostproc" (OuterVolumeSpecName: "hostproc") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.807974 kubelet[2708]: I0710 05:48:36.807341 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.807974 kubelet[2708]: I0710 05:48:36.807380 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.811472 kubelet[2708]: I0710 05:48:36.811440 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b893876-18df-4210-ac5a-888dfd8f36fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b893876-18df-4210-ac5a-888dfd8f36fc" (UID: "9b893876-18df-4210-ac5a-888dfd8f36fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 05:48:36.811713 kubelet[2708]: I0710 05:48:36.811440 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.811713 kubelet[2708]: I0710 05:48:36.811450 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cni-path" (OuterVolumeSpecName: "cni-path") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.811713 kubelet[2708]: I0710 05:48:36.811462 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.811713 kubelet[2708]: I0710 05:48:36.811649 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.811713 kubelet[2708]: I0710 05:48:36.811718 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 05:48:36.811873 kubelet[2708]: I0710 05:48:36.811778 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 05:48:36.812177 kubelet[2708]: I0710 05:48:36.812142 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-kube-api-access-g98rk" (OuterVolumeSpecName: "kube-api-access-g98rk") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "kube-api-access-g98rk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 05:48:36.812383 kubelet[2708]: I0710 05:48:36.812339 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3109a114-bf52-4057-9feb-a423c1a9b834-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 05:48:36.814518 kubelet[2708]: I0710 05:48:36.814491 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b893876-18df-4210-ac5a-888dfd8f36fc-kube-api-access-q7mh7" (OuterVolumeSpecName: "kube-api-access-q7mh7") pod "9b893876-18df-4210-ac5a-888dfd8f36fc" (UID: "9b893876-18df-4210-ac5a-888dfd8f36fc"). InnerVolumeSpecName "kube-api-access-q7mh7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 05:48:36.815115 kubelet[2708]: I0710 05:48:36.815082 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3109a114-bf52-4057-9feb-a423c1a9b834" (UID: "3109a114-bf52-4057-9feb-a423c1a9b834"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 05:48:36.907457 kubelet[2708]: I0710 05:48:36.907427 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907457 kubelet[2708]: I0710 05:48:36.907448 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7mh7\" (UniqueName: \"kubernetes.io/projected/9b893876-18df-4210-ac5a-888dfd8f36fc-kube-api-access-q7mh7\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907457 kubelet[2708]: I0710 05:48:36.907461 2708 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907471 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907482 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b893876-18df-4210-ac5a-888dfd8f36fc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907489 2708 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907497 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907506 2708 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907513 2708 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907523 2708 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907556 kubelet[2708]: I0710 05:48:36.907534 2708 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907737 kubelet[2708]: I0710 05:48:36.907541 2708 reconciler_common.go:299] "Volume 
detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3109a114-bf52-4057-9feb-a423c1a9b834-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907737 kubelet[2708]: I0710 05:48:36.907550 2708 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3109a114-bf52-4057-9feb-a423c1a9b834-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:36.907737 kubelet[2708]: I0710 05:48:36.907557 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g98rk\" (UniqueName: \"kubernetes.io/projected/3109a114-bf52-4057-9feb-a423c1a9b834-kube-api-access-g98rk\") on node \"localhost\" DevicePath \"\"" Jul 10 05:48:37.095604 kubelet[2708]: E0710 05:48:37.095557 2708 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 05:48:37.285634 kubelet[2708]: I0710 05:48:37.285598 2708 scope.go:117] "RemoveContainer" containerID="8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea" Jul 10 05:48:37.288200 containerd[1580]: time="2025-07-10T05:48:37.288109169Z" level=info msg="RemoveContainer for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\"" Jul 10 05:48:37.294185 containerd[1580]: time="2025-07-10T05:48:37.293567934Z" level=info msg="RemoveContainer for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" returns successfully" Jul 10 05:48:37.294841 kubelet[2708]: I0710 05:48:37.294809 2708 scope.go:117] "RemoveContainer" containerID="8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea" Jul 10 05:48:37.295631 systemd[1]: Removed slice kubepods-besteffort-pod9b893876_18df_4210_ac5a_888dfd8f36fc.slice - libcontainer container kubepods-besteffort-pod9b893876_18df_4210_ac5a_888dfd8f36fc.slice. Jul 10 05:48:37.300534 systemd[1]: Removed slice kubepods-burstable-pod3109a114_bf52_4057_9feb_a423c1a9b834.slice - libcontainer container kubepods-burstable-pod3109a114_bf52_4057_9feb_a423c1a9b834.slice. Jul 10 05:48:37.300827 systemd[1]: kubepods-burstable-pod3109a114_bf52_4057_9feb_a423c1a9b834.slice: Consumed 6.583s CPU time, 126.9M memory peak, 240K read from disk, 13.3M written to disk. 
Jul 10 05:48:37.308863 containerd[1580]: time="2025-07-10T05:48:37.295035584Z" level=error msg="ContainerStatus for \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\": not found" Jul 10 05:48:37.310059 kubelet[2708]: E0710 05:48:37.310024 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\": not found" containerID="8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea" Jul 10 05:48:37.310115 kubelet[2708]: I0710 05:48:37.310054 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea"} err="failed to get container status \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ffd4ca4953b9d793a254b2738b3bcbb12c64d14fcfb4913bf1a2941c11535ea\": not found" Jul 10 05:48:37.310149 kubelet[2708]: I0710 05:48:37.310118 2708 scope.go:117] "RemoveContainer" containerID="2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d" Jul 10 05:48:37.312031 containerd[1580]: time="2025-07-10T05:48:37.311986553Z" level=info msg="RemoveContainer for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\"" Jul 10 05:48:37.316765 containerd[1580]: time="2025-07-10T05:48:37.316631906Z" level=info msg="RemoveContainer for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" returns successfully" Jul 10 05:48:37.317196 kubelet[2708]: I0710 05:48:37.317158 2708 scope.go:117] "RemoveContainer" containerID="b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5" Jul 10 05:48:37.321980 containerd[1580]: time="2025-07-10T05:48:37.321944321Z" level=info msg="RemoveContainer for \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\"" Jul 10 05:48:37.326454 containerd[1580]: time="2025-07-10T05:48:37.326411543Z" level=info msg="RemoveContainer for \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" returns successfully" Jul 10 05:48:37.326593 kubelet[2708]: I0710 05:48:37.326557 2708 scope.go:117] "RemoveContainer" containerID="b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da" Jul 10 05:48:37.328587 containerd[1580]: time="2025-07-10T05:48:37.328530997Z" level=info msg="RemoveContainer for \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\"" Jul 10 05:48:37.332549 containerd[1580]: time="2025-07-10T05:48:37.332515920Z" level=info msg="RemoveContainer for \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" returns successfully" Jul 10 05:48:37.332676 kubelet[2708]: I0710 05:48:37.332652 2708 scope.go:117] "RemoveContainer" containerID="642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa" Jul 10 05:48:37.334183 containerd[1580]: time="2025-07-10T05:48:37.334139878Z" level=info msg="RemoveContainer for \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\"" Jul 10 05:48:37.337429 containerd[1580]: time="2025-07-10T05:48:37.337393265Z" level=info msg="RemoveContainer for \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" returns successfully" Jul 10 05:48:37.337548 kubelet[2708]: I0710 05:48:37.337527 
2708 scope.go:117] "RemoveContainer" containerID="d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a" Jul 10 05:48:37.338965 containerd[1580]: time="2025-07-10T05:48:37.338898998Z" level=info msg="RemoveContainer for \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\"" Jul 10 05:48:37.342451 containerd[1580]: time="2025-07-10T05:48:37.342424353Z" level=info msg="RemoveContainer for \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" returns successfully" Jul 10 05:48:37.342614 kubelet[2708]: I0710 05:48:37.342588 2708 scope.go:117] "RemoveContainer" containerID="2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d" Jul 10 05:48:37.342817 containerd[1580]: time="2025-07-10T05:48:37.342785543Z" level=error msg="ContainerStatus for \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\": not found" Jul 10 05:48:37.342968 kubelet[2708]: E0710 05:48:37.342932 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\": not found" containerID="2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d" Jul 10 05:48:37.343015 kubelet[2708]: I0710 05:48:37.342961 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d"} err="failed to get container status \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b46306714b57c8b44c76516d9413262c8ec1ea3442b195b58c0434817a3323d\": not found" Jul 10 05:48:37.343015 kubelet[2708]: I0710 05:48:37.342986 2708 scope.go:117] "RemoveContainer" containerID="b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5" Jul 10 05:48:37.343160 containerd[1580]: time="2025-07-10T05:48:37.343122926Z" level=error msg="ContainerStatus for \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\": not found" Jul 10 05:48:37.343268 kubelet[2708]: E0710 05:48:37.343245 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\": not found" containerID="b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5" Jul 10 05:48:37.343306 kubelet[2708]: I0710 05:48:37.343265 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5"} err="failed to get container status \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b521a580e8694fda838b9b82c02e2d7cd342cae4e8e3be6487ca7ae182991ad5\": not found" Jul 10 05:48:37.343306 kubelet[2708]: I0710 05:48:37.343278 2708 scope.go:117] "RemoveContainer" containerID="b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da" Jul 10 05:48:37.343520 containerd[1580]: 
time="2025-07-10T05:48:37.343473866Z" level=error msg="ContainerStatus for \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\": not found" Jul 10 05:48:37.343663 kubelet[2708]: E0710 05:48:37.343635 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\": not found" containerID="b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da" Jul 10 05:48:37.343701 kubelet[2708]: I0710 05:48:37.343670 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da"} err="failed to get container status \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\": rpc error: code = NotFound desc = an error occurred when try to find container \"b792a2e57d24679890a025145934dd403d4d89a86d0c3306e556f249c79006da\": not found" Jul 10 05:48:37.343701 kubelet[2708]: I0710 05:48:37.343696 2708 scope.go:117] "RemoveContainer" containerID="642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa" Jul 10 05:48:37.343887 containerd[1580]: time="2025-07-10T05:48:37.343856987Z" level=error msg="ContainerStatus for \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\": not found" Jul 10 05:48:37.344046 kubelet[2708]: E0710 05:48:37.344018 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\": not found" containerID="642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa" Jul 10 05:48:37.344083 kubelet[2708]: I0710 05:48:37.344055 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa"} err="failed to get container status \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"642ec53c804157c584213ce96ef64613ba0f319a5e9d020165f877576e25c5aa\": not found" Jul 10 05:48:37.344110 kubelet[2708]: I0710 05:48:37.344083 2708 scope.go:117] "RemoveContainer" containerID="d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a" Jul 10 05:48:37.344272 containerd[1580]: time="2025-07-10T05:48:37.344241952Z" level=error msg="ContainerStatus for \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\": not found" Jul 10 05:48:37.344386 kubelet[2708]: E0710 05:48:37.344343 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\": not found" containerID="d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a" Jul 10 05:48:37.344386 kubelet[2708]: I0710 05:48:37.344378 2708 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a"} err="failed to get container status \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d71772dc3e186d1940d43fed604465ce601d8a08cc003756937a5697f5ac3c3a\": not found" Jul 10 05:48:37.494383 systemd[1]: var-lib-kubelet-pods-9b893876\x2d18df\x2d4210\x2dac5a\x2d888dfd8f36fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7mh7.mount: Deactivated successfully. Jul 10 05:48:37.494503 systemd[1]: var-lib-kubelet-pods-3109a114\x2dbf52\x2d4057\x2d9feb\x2da423c1a9b834-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg98rk.mount: Deactivated successfully. Jul 10 05:48:37.494586 systemd[1]: var-lib-kubelet-pods-3109a114\x2dbf52\x2d4057\x2d9feb\x2da423c1a9b834-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 05:48:37.494660 systemd[1]: var-lib-kubelet-pods-3109a114\x2dbf52\x2d4057\x2d9feb\x2da423c1a9b834-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 05:48:37.912721 containerd[1580]: time="2025-07-10T05:48:37.912648145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" id:\"a99e240bf24ee408c6ccc0d377c258afa9e203436269738ced11bdd7e41f43df\" pid:2867 exit_status:137 exited_at:{seconds:1752126516 nanos:670074160}" Jul 10 05:48:38.032376 kubelet[2708]: I0710 05:48:38.032323 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3109a114-bf52-4057-9feb-a423c1a9b834" path="/var/lib/kubelet/pods/3109a114-bf52-4057-9feb-a423c1a9b834/volumes" Jul 10 05:48:38.033165 kubelet[2708]: I0710 05:48:38.033135 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b893876-18df-4210-ac5a-888dfd8f36fc" path="/var/lib/kubelet/pods/9b893876-18df-4210-ac5a-888dfd8f36fc/volumes" Jul 10 05:48:38.379845 sshd[4346]: Connection closed by 10.0.0.1 port 46392 Jul 10 05:48:38.380418 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:38.394246 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:46392.service: Deactivated successfully. Jul 10 05:48:38.396194 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 05:48:38.397052 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit. Jul 10 05:48:38.399960 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:46404.service - OpenSSH per-connection server daemon (10.0.0.1:46404). Jul 10 05:48:38.400705 systemd-logind[1554]: Removed session 25. Jul 10 05:48:38.460509 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 46404 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:38.461854 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:38.466528 systemd-logind[1554]: New session 26 of user core. Jul 10 05:48:38.472484 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 05:48:38.977723 sshd[4500]: Connection closed by 10.0.0.1 port 46404 Jul 10 05:48:38.979811 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:38.989526 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:46404.service: Deactivated successfully. Jul 10 05:48:38.993039 systemd[1]: session-26.scope: Deactivated successfully. 
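
Editor's note: the mount units deactivated above ("var-lib-kubelet-pods-...\x2d...mount") use systemd's unit-name escaping, where '-' separates path components and literal bytes appear as \xNN (\x2d is '-', \x7e is '~'). A minimal Python sketch of the decoding (unescape_systemd_unit_path is an illustrative helper, not a systemd API) recovers the kubelet volume directory that the "Cleaned up orphaned pod volumes dir" entries then report:

    import re

    def unescape_systemd_unit_path(unit: str) -> str:
        # Illustrative helper following systemd's unit-name escaping rules:
        # '-' separates path components, \xNN encodes a literal byte.
        name = unit.removesuffix(".mount")
        components = [
            re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)
            for part in name.split("-")
        ]
        return "/" + "/".join(components)

    unit = r"var-lib-kubelet-pods-3109a114\x2dbf52\x2d4057\x2d9feb\x2da423c1a9b834-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount"
    print(unescape_systemd_unit_path(unit))
    # /var/lib/kubelet/pods/3109a114-bf52-4057-9feb-a423c1a9b834/volumes/kubernetes.io~secret/clustermesh-secrets
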
Jul 10 05:48:38.994935 systemd-logind[1554]: Session 26 logged out. Waiting for processes to exit. Jul 10 05:48:39.000675 systemd[1]: Started sshd@26-10.0.0.135:22-10.0.0.1:46418.service - OpenSSH per-connection server daemon (10.0.0.1:46418). Jul 10 05:48:39.003163 systemd-logind[1554]: Removed session 26. Jul 10 05:48:39.025910 kubelet[2708]: I0710 05:48:39.025853 2708 memory_manager.go:355] "RemoveStaleState removing state" podUID="3109a114-bf52-4057-9feb-a423c1a9b834" containerName="cilium-agent" Jul 10 05:48:39.025910 kubelet[2708]: I0710 05:48:39.025892 2708 memory_manager.go:355] "RemoveStaleState removing state" podUID="9b893876-18df-4210-ac5a-888dfd8f36fc" containerName="cilium-operator" Jul 10 05:48:39.041846 systemd[1]: Created slice kubepods-burstable-podd91af325_4b08_4ebe_8b41_767cebded684.slice - libcontainer container kubepods-burstable-podd91af325_4b08_4ebe_8b41_767cebded684.slice. Jul 10 05:48:39.065222 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 46418 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:39.068143 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:39.076829 systemd-logind[1554]: New session 27 of user core. Jul 10 05:48:39.087552 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 05:48:39.120093 kubelet[2708]: I0710 05:48:39.120038 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-cilium-run\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120093 kubelet[2708]: I0710 05:48:39.120080 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d91af325-4b08-4ebe-8b41-767cebded684-cilium-config-path\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120093 kubelet[2708]: I0710 05:48:39.120099 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d91af325-4b08-4ebe-8b41-767cebded684-cilium-ipsec-secrets\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120568 kubelet[2708]: I0710 05:48:39.120116 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d91af325-4b08-4ebe-8b41-767cebded684-hubble-tls\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120568 kubelet[2708]: I0710 05:48:39.120131 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8qs\" (UniqueName: \"kubernetes.io/projected/d91af325-4b08-4ebe-8b41-767cebded684-kube-api-access-rp8qs\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120568 kubelet[2708]: I0710 05:48:39.120147 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-cni-path\") pod \"cilium-cbkrk\" (UID: 
\"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120568 kubelet[2708]: I0710 05:48:39.120161 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-etc-cni-netd\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120568 kubelet[2708]: I0710 05:48:39.120178 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-host-proc-sys-net\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120568 kubelet[2708]: I0710 05:48:39.120254 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-xtables-lock\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120700 kubelet[2708]: I0710 05:48:39.120270 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d91af325-4b08-4ebe-8b41-767cebded684-clustermesh-secrets\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120700 kubelet[2708]: I0710 05:48:39.120289 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-host-proc-sys-kernel\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120700 kubelet[2708]: I0710 05:48:39.120304 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-bpf-maps\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120700 kubelet[2708]: I0710 05:48:39.120318 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-hostproc\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120700 kubelet[2708]: I0710 05:48:39.120333 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-cilium-cgroup\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.120700 kubelet[2708]: I0710 05:48:39.120350 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d91af325-4b08-4ebe-8b41-767cebded684-lib-modules\") pod \"cilium-cbkrk\" (UID: \"d91af325-4b08-4ebe-8b41-767cebded684\") " pod="kube-system/cilium-cbkrk" Jul 10 05:48:39.142441 sshd[4514]: Connection closed by 10.0.0.1 port 46418 Jul 10 05:48:39.142873 sshd-session[4511]: 
pam_unix(sshd:session): session closed for user core Jul 10 05:48:39.158143 systemd[1]: sshd@26-10.0.0.135:22-10.0.0.1:46418.service: Deactivated successfully. Jul 10 05:48:39.160077 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 05:48:39.160892 systemd-logind[1554]: Session 27 logged out. Waiting for processes to exit. Jul 10 05:48:39.164184 systemd[1]: Started sshd@27-10.0.0.135:22-10.0.0.1:46420.service - OpenSSH per-connection server daemon (10.0.0.1:46420). Jul 10 05:48:39.164917 systemd-logind[1554]: Removed session 27. Jul 10 05:48:39.224905 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 46420 ssh2: RSA SHA256:eUYNNY6hpy0te1hkYaNcUaQ+Yf3rBt3mlqkZwaM1gM0 Jul 10 05:48:39.226472 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 05:48:39.242244 systemd-logind[1554]: New session 28 of user core. Jul 10 05:48:39.256596 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 10 05:48:39.348104 kubelet[2708]: E0710 05:48:39.347940 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:39.348933 containerd[1580]: time="2025-07-10T05:48:39.348877146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbkrk,Uid:d91af325-4b08-4ebe-8b41-767cebded684,Namespace:kube-system,Attempt:0,}" Jul 10 05:48:39.378100 containerd[1580]: time="2025-07-10T05:48:39.378030648Z" level=info msg="connecting to shim 2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785" address="unix:///run/containerd/s/ac6fcfe75b48b6a96d97baca833b94539b0d7a8d40947414f4555dcccd3b05db" namespace=k8s.io protocol=ttrpc version=3 Jul 10 05:48:39.409526 systemd[1]: Started cri-containerd-2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785.scope - libcontainer container 2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785. 
Jul 10 05:48:39.436670 containerd[1580]: time="2025-07-10T05:48:39.436629199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbkrk,Uid:d91af325-4b08-4ebe-8b41-767cebded684,Namespace:kube-system,Attempt:0,} returns sandbox id \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\"" Jul 10 05:48:39.437512 kubelet[2708]: E0710 05:48:39.437479 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:39.443240 containerd[1580]: time="2025-07-10T05:48:39.443200235Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 05:48:39.449399 containerd[1580]: time="2025-07-10T05:48:39.449330890Z" level=info msg="Container 89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:48:39.456890 containerd[1580]: time="2025-07-10T05:48:39.456837119Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\"" Jul 10 05:48:39.457436 containerd[1580]: time="2025-07-10T05:48:39.457309971Z" level=info msg="StartContainer for \"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\"" Jul 10 05:48:39.458382 containerd[1580]: time="2025-07-10T05:48:39.458073736Z" level=info msg="connecting to shim 89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2" address="unix:///run/containerd/s/ac6fcfe75b48b6a96d97baca833b94539b0d7a8d40947414f4555dcccd3b05db" protocol=ttrpc version=3 Jul 10 05:48:39.485522 systemd[1]: Started cri-containerd-89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2.scope - libcontainer container 89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2. Jul 10 05:48:39.519221 containerd[1580]: time="2025-07-10T05:48:39.519093773Z" level=info msg="StartContainer for \"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\" returns successfully" Jul 10 05:48:39.527313 systemd[1]: cri-containerd-89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2.scope: Deactivated successfully. 
Jul 10 05:48:39.528541 containerd[1580]: time="2025-07-10T05:48:39.528502628Z" level=info msg="received exit event container_id:\"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\" id:\"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\" pid:4595 exited_at:{seconds:1752126519 nanos:528255216}" Jul 10 05:48:39.528738 containerd[1580]: time="2025-07-10T05:48:39.528704533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\" id:\"89a73d7a365a8d0a71f6f74fd8b491256c1915fe5dd9203089d37023c462f5d2\" pid:4595 exited_at:{seconds:1752126519 nanos:528255216}" Jul 10 05:48:40.301120 kubelet[2708]: E0710 05:48:40.301082 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:40.304068 containerd[1580]: time="2025-07-10T05:48:40.304006143Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 05:48:40.312381 containerd[1580]: time="2025-07-10T05:48:40.312316196Z" level=info msg="Container 776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:48:40.319581 containerd[1580]: time="2025-07-10T05:48:40.319532837Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\"" Jul 10 05:48:40.320069 containerd[1580]: time="2025-07-10T05:48:40.320045534Z" level=info msg="StartContainer for \"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\"" Jul 10 05:48:40.320972 containerd[1580]: time="2025-07-10T05:48:40.320888009Z" level=info msg="connecting to shim 776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7" address="unix:///run/containerd/s/ac6fcfe75b48b6a96d97baca833b94539b0d7a8d40947414f4555dcccd3b05db" protocol=ttrpc version=3 Jul 10 05:48:40.348496 systemd[1]: Started cri-containerd-776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7.scope - libcontainer container 776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7. Jul 10 05:48:40.377829 containerd[1580]: time="2025-07-10T05:48:40.377783960Z" level=info msg="StartContainer for \"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\" returns successfully" Jul 10 05:48:40.384173 systemd[1]: cri-containerd-776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7.scope: Deactivated successfully. 
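
Editor's note: the TaskExit/exit events above carry exited_at as epoch seconds plus nanoseconds. A quick standard-library conversion of the first one (seconds:1752126519 nanos:528255216) shows it lining up with the journal timestamp Jul 10 05:48:39.528:

    from datetime import datetime, timezone

    # exited_at values taken from the exit event above
    seconds, nanos = 1752126519, 528255216
    exited = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(exited.isoformat())  # 2025-07-10T05:48:39.528255+00:00
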
Jul 10 05:48:40.385006 containerd[1580]: time="2025-07-10T05:48:40.384906752Z" level=info msg="received exit event container_id:\"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\" id:\"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\" pid:4642 exited_at:{seconds:1752126520 nanos:384614926}" Jul 10 05:48:40.385308 containerd[1580]: time="2025-07-10T05:48:40.384996945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\" id:\"776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7\" pid:4642 exited_at:{seconds:1752126520 nanos:384614926}" Jul 10 05:48:40.404911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-776120a072b16ddca720da29b4bb121f5bca250b78780a28fce6e61fd3ccd0b7-rootfs.mount: Deactivated successfully. Jul 10 05:48:41.305545 kubelet[2708]: E0710 05:48:41.305507 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:41.308089 containerd[1580]: time="2025-07-10T05:48:41.308035318Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 05:48:41.340480 containerd[1580]: time="2025-07-10T05:48:41.340422706Z" level=info msg="Container 7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:48:41.344262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031521377.mount: Deactivated successfully. Jul 10 05:48:41.348708 containerd[1580]: time="2025-07-10T05:48:41.348670506Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\"" Jul 10 05:48:41.349154 containerd[1580]: time="2025-07-10T05:48:41.349121655Z" level=info msg="StartContainer for \"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\"" Jul 10 05:48:41.350509 containerd[1580]: time="2025-07-10T05:48:41.350482757Z" level=info msg="connecting to shim 7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac" address="unix:///run/containerd/s/ac6fcfe75b48b6a96d97baca833b94539b0d7a8d40947414f4555dcccd3b05db" protocol=ttrpc version=3 Jul 10 05:48:41.372602 systemd[1]: Started cri-containerd-7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac.scope - libcontainer container 7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac. Jul 10 05:48:41.446383 systemd[1]: cri-containerd-7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac.scope: Deactivated successfully. 
Jul 10 05:48:41.447824 containerd[1580]: time="2025-07-10T05:48:41.447780878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\" id:\"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\" pid:4687 exited_at:{seconds:1752126521 nanos:446836100}" Jul 10 05:48:41.452349 containerd[1580]: time="2025-07-10T05:48:41.452314003Z" level=info msg="received exit event container_id:\"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\" id:\"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\" pid:4687 exited_at:{seconds:1752126521 nanos:446836100}" Jul 10 05:48:41.461140 containerd[1580]: time="2025-07-10T05:48:41.461102152Z" level=info msg="StartContainer for \"7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac\" returns successfully" Jul 10 05:48:41.474062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7169db7bad3c2b29928ec661975fba4b70d3dedeff9527efababf8b0ac121fac-rootfs.mount: Deactivated successfully. Jul 10 05:48:42.096384 kubelet[2708]: E0710 05:48:42.096313 2708 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 05:48:42.309779 kubelet[2708]: E0710 05:48:42.309732 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:42.311554 containerd[1580]: time="2025-07-10T05:48:42.311505271Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 05:48:42.451394 containerd[1580]: time="2025-07-10T05:48:42.451253746Z" level=info msg="Container 46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:48:42.454148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567776766.mount: Deactivated successfully. Jul 10 05:48:42.461588 containerd[1580]: time="2025-07-10T05:48:42.461546773Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\"" Jul 10 05:48:42.462374 containerd[1580]: time="2025-07-10T05:48:42.462303765Z" level=info msg="StartContainer for \"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\"" Jul 10 05:48:42.463657 containerd[1580]: time="2025-07-10T05:48:42.463625691Z" level=info msg="connecting to shim 46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52" address="unix:///run/containerd/s/ac6fcfe75b48b6a96d97baca833b94539b0d7a8d40947414f4555dcccd3b05db" protocol=ttrpc version=3 Jul 10 05:48:42.486518 systemd[1]: Started cri-containerd-46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52.scope - libcontainer container 46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52. Jul 10 05:48:42.516255 systemd[1]: cri-containerd-46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52.scope: Deactivated successfully. 
Jul 10 05:48:42.516397 containerd[1580]: time="2025-07-10T05:48:42.516335868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\" id:\"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\" pid:4725 exited_at:{seconds:1752126522 nanos:516088477}" Jul 10 05:48:42.517622 containerd[1580]: time="2025-07-10T05:48:42.517585939Z" level=info msg="received exit event container_id:\"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\" id:\"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\" pid:4725 exited_at:{seconds:1752126522 nanos:516088477}" Jul 10 05:48:42.525680 containerd[1580]: time="2025-07-10T05:48:42.525637449Z" level=info msg="StartContainer for \"46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52\" returns successfully" Jul 10 05:48:42.539963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46ab56e9808d6391c3cb3a6f8003705e2bed18403ec172e33e32cb96d6b0df52-rootfs.mount: Deactivated successfully. Jul 10 05:48:43.314506 kubelet[2708]: E0710 05:48:43.314473 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:43.316437 containerd[1580]: time="2025-07-10T05:48:43.316351317Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 05:48:43.330287 containerd[1580]: time="2025-07-10T05:48:43.330239279Z" level=info msg="Container 65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8: CDI devices from CRI Config.CDIDevices: []" Jul 10 05:48:43.337932 containerd[1580]: time="2025-07-10T05:48:43.337882016Z" level=info msg="CreateContainer within sandbox \"2197f402d3118a210e1abf813279bc431fa64f235aafcde6bad34999cec19785\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\"" Jul 10 05:48:43.338482 containerd[1580]: time="2025-07-10T05:48:43.338432022Z" level=info msg="StartContainer for \"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\"" Jul 10 05:48:43.339638 containerd[1580]: time="2025-07-10T05:48:43.339612619Z" level=info msg="connecting to shim 65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8" address="unix:///run/containerd/s/ac6fcfe75b48b6a96d97baca833b94539b0d7a8d40947414f4555dcccd3b05db" protocol=ttrpc version=3 Jul 10 05:48:43.358583 systemd[1]: Started cri-containerd-65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8.scope - libcontainer container 65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8. 
Jul 10 05:48:43.398201 containerd[1580]: time="2025-07-10T05:48:43.398147486Z" level=info msg="StartContainer for \"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" returns successfully" Jul 10 05:48:43.465087 containerd[1580]: time="2025-07-10T05:48:43.465004972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" id:\"bc5317d77ea214a19a1cb3f0859cbdd70d6bf675aa62120de4f1331e6c68dcd2\" pid:4792 exited_at:{seconds:1752126523 nanos:464688540}" Jul 10 05:48:43.827457 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 10 05:48:44.029723 kubelet[2708]: E0710 05:48:44.029655 2708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-vgj8m" podUID="a5bf12a7-937d-48d3-b3ae-c164831c8ca8" Jul 10 05:48:44.044072 kubelet[2708]: I0710 05:48:44.044019 2708 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T05:48:44Z","lastTransitionTime":"2025-07-10T05:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 05:48:44.320333 kubelet[2708]: E0710 05:48:44.320295 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:44.334331 kubelet[2708]: I0710 05:48:44.334217 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cbkrk" podStartSLOduration=5.334202081 podStartE2EDuration="5.334202081s" podCreationTimestamp="2025-07-10 05:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 05:48:44.333773495 +0000 UTC m=+92.430485665" watchObservedRunningTime="2025-07-10 05:48:44.334202081 +0000 UTC m=+92.430914251" Jul 10 05:48:45.348320 kubelet[2708]: E0710 05:48:45.348267 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:45.639738 containerd[1580]: time="2025-07-10T05:48:45.639599179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" id:\"c1480482517a2816317acc808c86eec37f0bda697cafcec007e61b7703406fca\" pid:4951 exit_status:1 exited_at:{seconds:1752126525 nanos:638283126}" Jul 10 05:48:46.030689 kubelet[2708]: E0710 05:48:46.030408 2708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-vgj8m" podUID="a5bf12a7-937d-48d3-b3ae-c164831c8ca8" Jul 10 05:48:46.913570 systemd-networkd[1491]: lxc_health: Link UP Jul 10 05:48:46.913868 systemd-networkd[1491]: lxc_health: Gained carrier Jul 10 05:48:47.030794 kubelet[2708]: E0710 05:48:47.030737 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:47.350927 kubelet[2708]: E0710 05:48:47.350879 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:47.747988 containerd[1580]: time="2025-07-10T05:48:47.747844091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" id:\"5d59720115260ef4b5227f4e93cfd80dc591bb9d7347e5a8e5da7c7444ab08df\" pid:5318 exited_at:{seconds:1752126527 nanos:747522360}" Jul 10 05:48:48.031052 kubelet[2708]: E0710 05:48:48.030922 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:48.328801 kubelet[2708]: E0710 05:48:48.328763 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:48.533751 systemd-networkd[1491]: lxc_health: Gained IPv6LL Jul 10 05:48:49.888821 containerd[1580]: time="2025-07-10T05:48:49.888753077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" id:\"64953f216fd9f9020459843b7c8a6c2e4d14b114e19f714a5e235d96defbcc23\" pid:5352 exited_at:{seconds:1752126529 nanos:888066843}" Jul 10 05:48:51.030794 kubelet[2708]: E0710 05:48:51.030743 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 05:48:51.990882 containerd[1580]: time="2025-07-10T05:48:51.990824636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" id:\"29221457bfbe2975c47becff80a57871b3cc1605d6f4d8402a28647078475f79\" pid:5385 exited_at:{seconds:1752126531 nanos:990342662}" Jul 10 05:48:54.076962 containerd[1580]: time="2025-07-10T05:48:54.076902092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"65078c302d7217da9693973bba6b9d6b9c79773a9ed1115b1d02cc67f4a8cea8\" id:\"667d004bc5941f8788abb52a56ac4b5e46eaa0ac2f5f619de9fbc9409f7e1ab5\" pid:5409 exited_at:{seconds:1752126534 nanos:76512203}" Jul 10 05:48:54.083974 sshd[4528]: Connection closed by 10.0.0.1 port 46420 Jul 10 05:48:54.114650 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Jul 10 05:48:54.118801 systemd[1]: sshd@27-10.0.0.135:22-10.0.0.1:46420.service: Deactivated successfully. Jul 10 05:48:54.120863 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 05:48:54.121781 systemd-logind[1554]: Session 28 logged out. Waiting for processes to exit. Jul 10 05:48:54.122961 systemd-logind[1554]: Removed session 28.
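
Editor's note: the pod_startup_latency_tracker entry a few lines up can be checked by hand; podStartSLOduration is simply the observed running time minus the pod creation timestamp. A small Python check of the figures quoted in that entry (nanoseconds truncated to microseconds):

    from datetime import datetime, timezone

    # Figures quoted in the pod_startup_latency_tracker entry for cilium-cbkrk
    created  = datetime(2025, 7, 10, 5, 48, 39, tzinfo=timezone.utc)
    observed = datetime(2025, 7, 10, 5, 48, 44, 334202, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 5.334202, matching podStartSLOduration=5.334202081
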