Jul 15 00:09:58.400560 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 22:12:05 -00 2025
Jul 15 00:09:58.400596 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0
Jul 15 00:09:58.400612 kernel: BIOS-provided physical RAM map:
Jul 15 00:09:58.400622 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 15 00:09:58.400632 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 15 00:09:58.400640 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 15 00:09:58.400652 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 15 00:09:58.400662 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 15 00:09:58.400671 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 15 00:09:58.400680 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 15 00:09:58.400689 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 15 00:09:58.400701 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 15 00:09:58.400716 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 15 00:09:58.400737 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 15 00:09:58.400750 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 15 00:09:58.400760 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 15 00:09:58.400774 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jul 15 00:09:58.400784 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jul 15 00:09:58.400793 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jul 15 00:09:58.400803 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jul 15 00:09:58.400812 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 15 00:09:58.400821 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 15 00:09:58.400831 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 15 00:09:58.400840 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 00:09:58.400850 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 15 00:09:58.400881 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 00:09:58.400892 kernel: NX (Execute Disable) protection: active
Jul 15 00:09:58.400905 kernel: APIC: Static calls initialized
Jul 15 00:09:58.400914 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jul 15 00:09:58.400924 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jul 15 00:09:58.400935 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jul 15 00:09:58.400945 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jul 15 00:09:58.400954 kernel: extended physical RAM map:
Jul 15 00:09:58.400964 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 15 00:09:58.400974 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 15 00:09:58.400985 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 15 00:09:58.400994 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 15 00:09:58.401004 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 15 00:09:58.401014 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 15 00:09:58.401027 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 15 00:09:58.401041 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jul 15 00:09:58.401051 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jul 15 00:09:58.401061 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jul 15 00:09:58.401071 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jul 15 00:09:58.401080 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jul 15 00:09:58.401098 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 15 00:09:58.401109 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 15 00:09:58.401119 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 15 00:09:58.401130 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 15 00:09:58.401140 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 15 00:09:58.401151 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jul 15 00:09:58.401161 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jul 15 00:09:58.401171 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jul 15 00:09:58.401181 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jul 15 00:09:58.401195 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 15 00:09:58.401206 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 15 00:09:58.401216 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 15 00:09:58.401227 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 00:09:58.401242 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 15 00:09:58.401252 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 00:09:58.401262 kernel: efi: EFI v2.7 by EDK II
Jul 15 00:09:58.401272 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jul 15 00:09:58.401283 kernel: random: crng init done
Jul 15 00:09:58.401293 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 15 00:09:58.401303 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 15 00:09:58.401313 kernel: secureboot: Secure boot disabled
Jul 15 00:09:58.401327 kernel: SMBIOS 2.8 present.
Jul 15 00:09:58.401338 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 15 00:09:58.401348 kernel: Hypervisor detected: KVM
Jul 15 00:09:58.401358 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 00:09:58.401369 kernel: kvm-clock: using sched offset of 6315716554 cycles
Jul 15 00:09:58.401380 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 00:09:58.401390 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 00:09:58.401401 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 00:09:58.401422 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 00:09:58.401436 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 15 00:09:58.401451 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 15 00:09:58.401462 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 00:09:58.401472 kernel: Using GB pages for direct mapping
Jul 15 00:09:58.401484 kernel: ACPI: Early table checksum verification disabled
Jul 15 00:09:58.401495 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 15 00:09:58.401505 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 00:09:58.401520 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 00:09:58.401536 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 00:09:58.401558 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 15 00:09:58.401581 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 00:09:58.401591 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 00:09:58.401602 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 00:09:58.401612 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 00:09:58.401623 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 15 00:09:58.401635 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 15 00:09:58.401645 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 15 00:09:58.401655 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 15 00:09:58.401665 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 15 00:09:58.401678 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 15 00:09:58.401689 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 15 00:09:58.401700 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 15 00:09:58.401710 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 15 00:09:58.401721 kernel: No NUMA configuration found
Jul 15 00:09:58.403813 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 15 00:09:58.403828 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jul 15 00:09:58.403881 kernel: Zone ranges:
Jul 15 00:09:58.403894 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 00:09:58.403914 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 15 00:09:58.403926 kernel: Normal empty
Jul 15 00:09:58.403944 kernel: Movable zone start for each node
Jul 15 00:09:58.403955 kernel: Early memory node ranges
Jul 15 00:09:58.403966 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 15 00:09:58.403976 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 15 00:09:58.403987 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 15 00:09:58.403998 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 15 00:09:58.404009 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 15 00:09:58.404023 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 15 00:09:58.404034 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jul 15 00:09:58.404045 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jul 15 00:09:58.404057 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 15 00:09:58.404067 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 00:09:58.404078 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 15 00:09:58.404099 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 15 00:09:58.404113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 00:09:58.404123 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 15 00:09:58.404134 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 15 00:09:58.404145 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 15 00:09:58.404156 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 15 00:09:58.404170 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 15 00:09:58.404181 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 00:09:58.404192 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 00:09:58.404204 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 00:09:58.404215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 00:09:58.404231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 00:09:58.404242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 00:09:58.404254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 00:09:58.404265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 00:09:58.404276 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 00:09:58.404288 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 00:09:58.404299 kernel: TSC deadline timer available
Jul 15 00:09:58.404311 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 15 00:09:58.404323 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 00:09:58.404337 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 00:09:58.404349 kernel: kvm-guest: setup PV sched yield
Jul 15 00:09:58.404360 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 15 00:09:58.404372 kernel: Booting paravirtualized kernel on KVM
Jul 15 00:09:58.404382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 00:09:58.404394 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 15 00:09:58.404406 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 15 00:09:58.404418 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 15 00:09:58.404429 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 00:09:58.404443 kernel: kvm-guest: PV spinlocks enabled
Jul 15 00:09:58.404454 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 00:09:58.404467 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0
Jul 15 00:09:58.404479 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 00:09:58.404491 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 00:09:58.404507 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 00:09:58.404519 kernel: Fallback order for Node 0: 0
Jul 15 00:09:58.404530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jul 15 00:09:58.404545 kernel: Policy zone: DMA32
Jul 15 00:09:58.404557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 00:09:58.404568 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 177824K reserved, 0K cma-reserved)
Jul 15 00:09:58.404579 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 00:09:58.404590 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 15 00:09:58.404601 kernel: ftrace: allocated 149 pages with 4 groups
Jul 15 00:09:58.404612 kernel: Dynamic Preempt: voluntary
Jul 15 00:09:58.404623 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 00:09:58.404642 kernel: rcu: RCU event tracing is enabled.
Jul 15 00:09:58.404657 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 00:09:58.404669 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 00:09:58.404681 kernel: Rude variant of Tasks RCU enabled.
Jul 15 00:09:58.404693 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 00:09:58.404703 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 00:09:58.404714 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 00:09:58.404734 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 00:09:58.404746 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 00:09:58.404756 kernel: Console: colour dummy device 80x25
Jul 15 00:09:58.404770 kernel: printk: console [ttyS0] enabled
Jul 15 00:09:58.404781 kernel: ACPI: Core revision 20230628
Jul 15 00:09:58.404792 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 00:09:58.404804 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 00:09:58.404816 kernel: x2apic enabled
Jul 15 00:09:58.404827 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 00:09:58.404838 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 00:09:58.404849 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 00:09:58.404894 kernel: kvm-guest: setup PV IPIs
Jul 15 00:09:58.404908 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 00:09:58.404920 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 15 00:09:58.404931 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 00:09:58.404943 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 00:09:58.404954 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 00:09:58.404965 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 00:09:58.404977 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 00:09:58.404989 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 00:09:58.405000 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 00:09:58.405015 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 00:09:58.405026 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 00:09:58.405037 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 00:09:58.405048 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 00:09:58.405060 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 00:09:58.405072 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 00:09:58.405083 kernel: x86/bugs: return thunk changed
Jul 15 00:09:58.405097 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 00:09:58.405113 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 00:09:58.405124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 00:09:58.405136 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 00:09:58.405147 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 00:09:58.405158 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 15 00:09:58.405169 kernel: Freeing SMP alternatives memory: 32K
Jul 15 00:09:58.405180 kernel: pid_max: default: 32768 minimum: 301
Jul 15 00:09:58.405192 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 15 00:09:58.405203 kernel: landlock: Up and running.
Jul 15 00:09:58.405218 kernel: SELinux: Initializing.
Jul 15 00:09:58.405229 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 00:09:58.405239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 00:09:58.405251 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 00:09:58.405262 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 00:09:58.405274 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 00:09:58.405285 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 00:09:58.405296 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 00:09:58.405307 kernel: ... version: 0
Jul 15 00:09:58.405322 kernel: ... bit width: 48
Jul 15 00:09:58.405333 kernel: ... generic registers: 6
Jul 15 00:09:58.405344 kernel: ... value mask: 0000ffffffffffff
Jul 15 00:09:58.405354 kernel: ... max period: 00007fffffffffff
Jul 15 00:09:58.405366 kernel: ... fixed-purpose events: 0
Jul 15 00:09:58.405378 kernel: ... event mask: 000000000000003f
Jul 15 00:09:58.405389 kernel: signal: max sigframe size: 1776
Jul 15 00:09:58.405400 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 00:09:58.405412 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 00:09:58.405427 kernel: smp: Bringing up secondary CPUs ...
Jul 15 00:09:58.405437 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 00:09:58.405448 kernel: .... node #0, CPUs: #1 #2 #3
Jul 15 00:09:58.405460 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 00:09:58.405472 kernel: smpboot: Max logical packages: 1
Jul 15 00:09:58.405483 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 00:09:58.405495 kernel: devtmpfs: initialized
Jul 15 00:09:58.405506 kernel: x86/mm: Memory block size: 128MB
Jul 15 00:09:58.405518 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 15 00:09:58.405529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 15 00:09:58.405544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 15 00:09:58.405555 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 15 00:09:58.405566 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jul 15 00:09:58.405577 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 15 00:09:58.405589 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 00:09:58.405601 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 00:09:58.405612 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 00:09:58.405624 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 00:09:58.405639 kernel: audit: initializing netlink subsys (disabled)
Jul 15 00:09:58.405650 kernel: audit: type=2000 audit(1752538195.295:1): state=initialized audit_enabled=0 res=1
Jul 15 00:09:58.405661 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 00:09:58.405671 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 00:09:58.405682 kernel: cpuidle: using governor menu
Jul 15 00:09:58.405694 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 00:09:58.405705 kernel: dca service started, version 1.12.1
Jul 15 00:09:58.405717 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 15 00:09:58.405737 kernel: PCI: Using configuration type 1 for base access
Jul 15 00:09:58.405753 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 00:09:58.405775 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 00:09:58.405787 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 00:09:58.405799 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 00:09:58.405817 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 00:09:58.405830 kernel: ACPI: Added _OSI(Module Device)
Jul 15 00:09:58.405842 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 00:09:58.405870 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 00:09:58.405883 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 00:09:58.405897 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 15 00:09:58.405908 kernel: ACPI: Interpreter enabled
Jul 15 00:09:58.405919 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 00:09:58.405934 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 00:09:58.405946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 00:09:58.405957 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 00:09:58.405968 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 00:09:58.405979 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 00:09:58.406339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 00:09:58.406530 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 00:09:58.406703 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 00:09:58.406721 kernel: PCI host bridge to bus 0000:00
Jul 15 00:09:58.406970 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 00:09:58.407131 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 00:09:58.407283 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 00:09:58.408270 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 15 00:09:58.408430 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 15 00:09:58.408563 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 15 00:09:58.408697 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 00:09:58.408901 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 15 00:09:58.409102 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 15 00:09:58.409251 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 15 00:09:58.409401 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 15 00:09:58.409544 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 15 00:09:58.409693 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 15 00:09:58.410852 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 00:09:58.411089 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 00:09:58.411259 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 15 00:09:58.411430 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 15 00:09:58.411581 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 15 00:09:58.412389 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 15 00:09:58.412568 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 15 00:09:58.412736 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 15 00:09:58.412913 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 15 00:09:58.413091 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 15 00:09:58.413264 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 15 00:09:58.413427 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 15 00:09:58.413588 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 15 00:09:58.415904 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 15 00:09:58.416158 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 15 00:09:58.416312 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 00:09:58.416478 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 15 00:09:58.416638 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 15 00:09:58.416794 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 15 00:09:58.416975 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 15 00:09:58.417183 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 15 00:09:58.417200 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 00:09:58.417211 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 00:09:58.417222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 00:09:58.417238 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 00:09:58.417249 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 00:09:58.417259 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 00:09:58.417270 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 00:09:58.417280 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 00:09:58.417290 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 00:09:58.417300 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 00:09:58.417311 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 00:09:58.417321 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 00:09:58.417335 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 00:09:58.417346 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 00:09:58.417356 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 00:09:58.417366 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 00:09:58.417377 kernel: iommu: Default domain type: Translated
Jul 15 00:09:58.417387 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 00:09:58.417397 kernel: efivars: Registered efivars operations
Jul 15 00:09:58.417408 kernel: PCI: Using ACPI for IRQ routing
Jul 15 00:09:58.417418 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 00:09:58.417428 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 15 00:09:58.417441 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 15 00:09:58.417451 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jul 15 00:09:58.417461 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jul 15 00:09:58.417472 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 15 00:09:58.417482 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 15 00:09:58.417492 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jul 15 00:09:58.417502 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 15 00:09:58.417659 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 00:09:58.417827 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 00:09:58.417995 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 00:09:58.418009 kernel: vgaarb: loaded
Jul 15 00:09:58.418020 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 00:09:58.418030 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 00:09:58.418040 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 00:09:58.418051 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 00:09:58.418062 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 00:09:58.418072 kernel: pnp: PnP ACPI init
Jul 15 00:09:58.418829 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 15 00:09:58.418847 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 00:09:58.418874 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 00:09:58.418907 kernel: NET: Registered PF_INET protocol family
Jul 15 00:09:58.418918 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 00:09:58.418953 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 00:09:58.418966 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 00:09:58.418977 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 00:09:58.418993 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 00:09:58.419004 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 00:09:58.419015 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 00:09:58.419026 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 00:09:58.419036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 00:09:58.419047 kernel: NET: Registered PF_XDP protocol family
Jul 15 00:09:58.419222 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 15 00:09:58.419372 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 15 00:09:58.419522 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 00:09:58.419658 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 00:09:58.420876 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 00:09:58.421017 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 15 00:09:58.421155 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 15 00:09:58.421290 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 15 00:09:58.421304 kernel: PCI: CLS 0 bytes, default 64
Jul 15 00:09:58.421315 kernel: Initialise system trusted keyrings
Jul 15 00:09:58.421332 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 00:09:58.421342 kernel: Key type asymmetric registered
Jul 15 00:09:58.421353 kernel: Asymmetric key parser 'x509' registered
Jul 15 00:09:58.421363 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 15 00:09:58.421373 kernel: io scheduler mq-deadline registered
Jul 15 00:09:58.421384 kernel: io scheduler kyber registered
Jul 15 00:09:58.421394 kernel: io scheduler bfq registered
Jul 15 00:09:58.421405 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 00:09:58.421416 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 00:09:58.421430 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 00:09:58.421441 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 00:09:58.421452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 00:09:58.421466 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 00:09:58.421476 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 00:09:58.421487 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 00:09:58.421503 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 00:09:58.421514 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 00:09:58.421679 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 00:09:58.421831 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 00:09:58.421983 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T00:09:57 UTC (1752538197)
Jul 15 00:09:58.422120 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 15 00:09:58.422134 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 00:09:58.422145 kernel: efifb: probing for efifb
Jul 15 00:09:58.422159 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 15 00:09:58.422170 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 15 00:09:58.422181 kernel: efifb: scrolling: redraw
Jul 15 00:09:58.422191 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 15 00:09:58.422202 kernel: Console: switching to colour frame buffer device 160x50
Jul 15 00:09:58.422213 kernel: fb0: EFI VGA frame buffer device
Jul 15 00:09:58.422223 kernel: pstore: Using crash dump compression: deflate
Jul 15 00:09:58.422234 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 15 00:09:58.422244 kernel: NET: Registered PF_INET6 protocol family
Jul 15 00:09:58.422258 kernel: Segment Routing with IPv6
Jul 15 00:09:58.422268 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 00:09:58.422278 kernel: NET: Registered PF_PACKET protocol family
Jul 15 00:09:58.422289 kernel: Key type dns_resolver registered
Jul 15 00:09:58.422299 kernel: IPI shorthand broadcast: enabled
Jul 15 00:09:58.422309 kernel: sched_clock: Marking stable (2439002310, 226938214)->(2864241723, -198301199)
Jul 15 00:09:58.422320 kernel: registered taskstats version 1
Jul 15 00:09:58.422330 kernel: Loading compiled-in X.509 certificates
Jul 15 00:09:58.422341 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: bf6496aa5b6cd4d87ec52e2500e1924de07ec31a'
Jul 15 00:09:58.422354 kernel: Key type .fscrypt registered
Jul 15 00:09:58.422365 kernel: Key type fscrypt-provisioning registered
Jul 15 00:09:58.422375 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 00:09:58.422386 kernel: ima: Allocated hash algorithm: sha1
Jul 15 00:09:58.422396 kernel: ima: No architecture policies found
Jul 15 00:09:58.422407 kernel: clk: Disabling unused clocks
Jul 15 00:09:58.422417 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 15 00:09:58.422428 kernel: Write protecting the kernel read-only data: 38912k
Jul 15 00:09:58.422439 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 15 00:09:58.422453 kernel: Run /init as init process
Jul 15 00:09:58.422466 kernel: with arguments:
Jul 15 00:09:58.422477 kernel: /init
Jul 15 00:09:58.422487 kernel: with environment:
Jul 15 00:09:58.422498 kernel: HOME=/
Jul 15 00:09:58.422508 kernel: TERM=linux
Jul 15 00:09:58.422518 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 00:09:58.422530 systemd[1]: Successfully made /usr/ read-only.
Jul 15 00:09:58.422547 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 00:09:58.422560 systemd[1]: Detected virtualization kvm.
Jul 15 00:09:58.422570 systemd[1]: Detected architecture x86-64.
Jul 15 00:09:58.422581 systemd[1]: Running in initrd.
Jul 15 00:09:58.422592 systemd[1]: No hostname configured, using default hostname.
Jul 15 00:09:58.422604 systemd[1]: Hostname set to <localhost>.
Jul 15 00:09:58.422615 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 00:09:58.422626 systemd[1]: Queued start job for default target initrd.target.
Jul 15 00:09:58.422640 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 00:09:58.422652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 00:09:58.422664 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 00:09:58.422676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 00:09:58.422687 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 00:09:58.422699 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 00:09:58.422712 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 00:09:58.424149 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 00:09:58.424166 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 00:09:58.424177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 00:09:58.424189 systemd[1]: Reached target paths.target - Path Units.
Jul 15 00:09:58.424200 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 00:09:58.424211 systemd[1]: Reached target swap.target - Swaps.
Jul 15 00:09:58.424222 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 00:09:58.424233 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 00:09:58.424252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 00:09:58.424263 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 00:09:58.424274 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 00:09:58.424285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 00:09:58.424296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 00:09:58.424308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 00:09:58.424319 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 00:09:58.424330 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 00:09:58.424341 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 00:09:58.424355 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 00:09:58.424366 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 00:09:58.424377 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 00:09:58.424389 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 00:09:58.424400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 00:09:58.424411 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 00:09:58.424422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 00:09:58.424437 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 00:09:58.424498 systemd-journald[194]: Collecting audit messages is disabled.
Jul 15 00:09:58.424531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 00:09:58.424543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 00:09:58.424555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 00:09:58.424567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 00:09:58.424578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 00:09:58.424591 systemd-journald[194]: Journal started
Jul 15 00:09:58.424626 systemd-journald[194]: Runtime Journal (/run/log/journal/0a770b986fd24f4f8712990623f5b785) is 6M, max 48.2M, 42.2M free.
Jul 15 00:09:58.424146 systemd-modules-load[195]: Inserted module 'overlay'
Jul 15 00:09:58.432823 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 00:09:58.435268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 00:09:58.459523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 00:09:58.468087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 00:09:58.492854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 00:09:58.506395 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 00:09:58.507605 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 00:09:58.513150 kernel: Bridge firewalling registered
Jul 15 00:09:58.513071 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jul 15 00:09:58.520937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 00:09:58.541223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 00:09:58.544417 dracut-cmdline[224]: dracut-dracut-053
Jul 15 00:09:58.549972 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b3329440486f6df07adec8acfff793e63e5f00f2c50d9ad5ef23b1b049ec0ca0
Jul 15 00:09:58.572483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 00:09:58.582536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 00:09:58.645622 systemd-resolved[249]: Positive Trust Anchors:
Jul 15 00:09:58.645653 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 00:09:58.645699 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 00:09:58.650392 systemd-resolved[249]: Defaulting to hostname 'linux'.
Jul 15 00:09:58.652471 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 00:09:58.654228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 00:09:58.786339 kernel: SCSI subsystem initialized
Jul 15 00:09:58.799479 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 00:09:58.829938 kernel: iscsi: registered transport (tcp)
Jul 15 00:09:58.875198 kernel: iscsi: registered transport (qla4xxx)
Jul 15 00:09:58.875289 kernel: QLogic iSCSI HBA Driver
Jul 15 00:09:59.008324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 00:09:59.035200 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 00:09:59.095923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 00:09:59.096019 kernel: device-mapper: uevent: version 1.0.3
Jul 15 00:09:59.099750 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 15 00:09:59.174783 kernel: raid6: avx2x4 gen() 17677 MB/s
Jul 15 00:09:59.191743 kernel: raid6: avx2x2 gen() 16535 MB/s
Jul 15 00:09:59.209942 kernel: raid6: avx2x1 gen() 13801 MB/s
Jul 15 00:09:59.210037 kernel: raid6: using algorithm avx2x4 gen() 17677 MB/s
Jul 15 00:09:59.228768 kernel: raid6: .... xor() 5184 MB/s, rmw enabled
Jul 15 00:09:59.228897 kernel: raid6: using avx2x2 recovery algorithm
Jul 15 00:09:59.260349 kernel: xor: automatically using best checksumming function avx
Jul 15 00:09:59.622214 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 00:09:59.677738 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 00:09:59.703515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 00:09:59.738357 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jul 15 00:09:59.754196 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 00:09:59.773230 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 00:09:59.820022 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jul 15 00:09:59.925539 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 00:09:59.934239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 00:10:00.047727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 00:10:00.074219 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 00:10:00.109048 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 00:10:00.123076 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 00:10:00.126229 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 00:10:00.129205 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 00:10:00.139219 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 15 00:10:00.140081 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 00:10:00.159718 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 00:10:00.165077 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 00:10:00.165128 kernel: GPT:9289727 != 19775487
Jul 15 00:10:00.165143 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 00:10:00.165171 kernel: GPT:9289727 != 19775487
Jul 15 00:10:00.165186 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 00:10:00.165209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 00:10:00.164381 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 00:10:00.189931 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 00:10:00.193007 kernel: libata version 3.00 loaded.
Jul 15 00:10:00.209299 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 00:10:00.209545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 00:10:00.222209 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 00:10:00.225781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 00:10:00.249570 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 15 00:10:00.226189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 00:10:00.231634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 00:10:00.273897 kernel: BTRFS: device fsid 0f48c447-00ea-47e7-98df-4bdb6058b27c devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (474)
Jul 15 00:10:00.299892 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (463)
Jul 15 00:10:00.299970 kernel: AES CTR mode by8 optimization enabled
Jul 15 00:10:00.299992 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 00:10:00.302024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 00:10:00.305993 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 00:10:00.306027 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 15 00:10:00.306302 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 00:10:00.324636 kernel: scsi host0: ahci
Jul 15 00:10:00.327961 kernel: scsi host1: ahci
Jul 15 00:10:00.330906 kernel: scsi host2: ahci
Jul 15 00:10:00.336207 kernel: scsi host3: ahci
Jul 15 00:10:00.336541 kernel: scsi host4: ahci
Jul 15 00:10:00.339741 kernel: scsi host5: ahci
Jul 15 00:10:00.340175 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 15 00:10:00.340195 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 15 00:10:00.340212 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 15 00:10:00.343712 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 15 00:10:00.343804 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 15 00:10:00.343822 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 15 00:10:00.378456 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 00:10:00.400591 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 00:10:00.401055 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 00:10:00.468207 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 00:10:00.490259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 00:10:00.509221 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 00:10:00.510836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 00:10:00.510952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 00:10:00.513186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 00:10:00.523251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 00:10:00.528146 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 00:10:00.546894 disk-uuid[559]: Primary Header is updated.
Jul 15 00:10:00.546894 disk-uuid[559]: Secondary Entries is updated.
Jul 15 00:10:00.546894 disk-uuid[559]: Secondary Header is updated.
Jul 15 00:10:00.557462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 00:10:00.567465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 00:10:00.579916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 00:10:00.638626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 00:10:00.663196 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 00:10:00.663282 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 00:10:00.663301 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 15 00:10:00.663317 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 00:10:00.674748 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 00:10:00.674829 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 00:10:00.674845 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 15 00:10:00.674875 kernel: ata3.00: applying bridge limits
Jul 15 00:10:00.676325 kernel: ata3.00: configured for UDMA/100
Jul 15 00:10:00.698836 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 15 00:10:00.836561 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 15 00:10:00.837845 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 15 00:10:00.853881 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 15 00:10:01.588892 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 00:10:01.590651 disk-uuid[562]: The operation has completed successfully.
Jul 15 00:10:01.711180 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 00:10:01.711381 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 00:10:01.760543 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 00:10:01.766262 sh[601]: Success
Jul 15 00:10:01.805071 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 15 00:10:02.027351 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 00:10:02.047970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 00:10:02.056439 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 00:10:02.094477 kernel: BTRFS info (device dm-0): first mount of filesystem 0f48c447-00ea-47e7-98df-4bdb6058b27c
Jul 15 00:10:02.094553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 00:10:02.094570 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 15 00:10:02.097062 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 15 00:10:02.097102 kernel: BTRFS info (device dm-0): using free space tree
Jul 15 00:10:02.131974 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 00:10:02.135328 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 00:10:02.176231 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 00:10:02.187516 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 00:10:02.263732 kernel: BTRFS info (device vda6): first mount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e
Jul 15 00:10:02.263822 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 00:10:02.263839 kernel: BTRFS info (device vda6): using free space tree
Jul 15 00:10:02.280932 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 15 00:10:02.298799 kernel: BTRFS info (device vda6): last unmount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e
Jul 15 00:10:02.316048 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 00:10:02.340725 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 00:10:02.757525 ignition[692]: Ignition 2.20.0
Jul 15 00:10:02.758237 ignition[692]: Stage: fetch-offline
Jul 15 00:10:02.758305 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Jul 15 00:10:02.758320 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 00:10:02.758448 ignition[692]: parsed url from cmdline: ""
Jul 15 00:10:02.758453 ignition[692]: no config URL provided
Jul 15 00:10:02.758460 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 00:10:02.758471 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Jul 15 00:10:02.758510 ignition[692]: op(1): [started] loading QEMU firmware config module
Jul 15 00:10:02.758520 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 00:10:02.778795 ignition[692]: op(1): [finished] loading QEMU firmware config module
Jul 15 00:10:02.792524 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 00:10:02.818190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 00:10:02.840157 ignition[692]: parsing config with SHA512: d3804aa0f39ccbf33bc2e82f1619f1fb18ac721980b59ea3621d47d84b6b7ebbf28373a5db9be0fbcedf9687807d1cc2f243ab167fcec358f055af60418e9a9a
Jul 15 00:10:02.863490 unknown[692]: fetched base config from "system"
Jul 15 00:10:02.864402 unknown[692]: fetched user config from "qemu"
Jul 15 00:10:02.865055 ignition[692]: fetch-offline: fetch-offline passed
Jul 15 00:10:02.865210 ignition[692]: Ignition finished successfully
Jul 15 00:10:02.873746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 00:10:02.905623 systemd-networkd[786]: lo: Link UP
Jul 15 00:10:02.905638 systemd-networkd[786]: lo: Gained carrier
Jul 15 00:10:02.909642 systemd-networkd[786]: Enumeration completed
Jul 15 00:10:02.910955 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 00:10:02.911367 systemd[1]: Reached target network.target - Network.
Jul 15 00:10:02.911415 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 00:10:02.911941 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 00:10:02.911947 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 00:10:02.914107 systemd-networkd[786]: eth0: Link UP
Jul 15 00:10:02.914112 systemd-networkd[786]: eth0: Gained carrier
Jul 15 00:10:02.914128 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 00:10:02.931883 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 00:10:02.943981 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 00:10:03.092739 ignition[790]: Ignition 2.20.0
Jul 15 00:10:03.092761 ignition[790]: Stage: kargs
Jul 15 00:10:03.093009 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jul 15 00:10:03.093022 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 00:10:03.094283 ignition[790]: kargs: kargs passed
Jul 15 00:10:03.094351 ignition[790]: Ignition finished successfully
Jul 15 00:10:03.121507 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 00:10:03.146181 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 00:10:03.187118 ignition[798]: Ignition 2.20.0
Jul 15 00:10:03.187135 ignition[798]: Stage: disks
Jul 15 00:10:03.187398 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jul 15 00:10:03.187415 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 00:10:03.188438 ignition[798]: disks: disks passed
Jul 15 00:10:03.188495 ignition[798]: Ignition finished successfully
Jul 15 00:10:03.199169 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 00:10:03.202036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 00:10:03.204649 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 00:10:03.206297 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 00:10:03.210982 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 00:10:03.213574 systemd[1]: Reached target basic.target - Basic System.
Jul 15 00:10:03.230279 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 00:10:03.266264 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 15 00:10:03.283890 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 00:10:03.376622 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 00:10:03.660907 kernel: EXT4-fs (vda9): mounted filesystem e62201b2-5386-4e48-beed-7080f52a14be r/w with ordered data mode. Quota mode: none.
Jul 15 00:10:03.662761 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 00:10:03.668911 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 00:10:03.687088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 00:10:03.699730 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 00:10:03.707117 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 00:10:03.709762 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 00:10:03.722802 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 00:10:03.748441 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (816)
Jul 15 00:10:03.748481 kernel: BTRFS info (device vda6): first mount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e
Jul 15 00:10:03.748501 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 00:10:03.748534 kernel: BTRFS info (device vda6): using free space tree
Jul 15 00:10:03.742578 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 00:10:03.765626 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 15 00:10:03.767209 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 00:10:03.789079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 00:10:03.890615 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 00:10:04.021265 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jul 15 00:10:04.033999 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 00:10:04.047796 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 00:10:04.393361 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 00:10:04.412118 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 00:10:04.422323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 00:10:04.429069 systemd-networkd[786]: eth0: Gained IPv6LL
Jul 15 00:10:04.460557 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 00:10:04.464607 kernel: BTRFS info (device vda6): last unmount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e
Jul 15 00:10:04.549798 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 00:10:04.585617 ignition[930]: INFO : Ignition 2.20.0
Jul 15 00:10:04.585617 ignition[930]: INFO : Stage: mount
Jul 15 00:10:04.585617 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 00:10:04.585617 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 00:10:04.595143 ignition[930]: INFO : mount: mount passed
Jul 15 00:10:04.595143 ignition[930]: INFO : Ignition finished successfully
Jul 15 00:10:04.601051 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 00:10:04.622870 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 00:10:04.680488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 00:10:04.692932 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (942)
Jul 15 00:10:04.701204 kernel: BTRFS info (device vda6): first mount of filesystem 59c6f3f1-8270-4370-81df-d46ae9629c2e
Jul 15 00:10:04.701298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 00:10:04.701316 kernel: BTRFS info (device vda6): using free space tree
Jul 15 00:10:04.732959 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 15 00:10:04.735841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 00:10:04.864714 ignition[958]: INFO : Ignition 2.20.0
Jul 15 00:10:04.864714 ignition[958]: INFO : Stage: files
Jul 15 00:10:04.876898 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 00:10:04.876898 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 00:10:04.876898 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 00:10:04.876898 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 00:10:04.876898 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 00:10:04.898612 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 00:10:04.898612 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 00:10:04.905404 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 00:10:04.905404 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 00:10:04.905404 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 15 00:10:04.899056 unknown[958]: wrote ssh authorized keys file for user: core
Jul 15 00:10:04.979166 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 00:10:05.653240 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 00:10:05.659996 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 00:10:05.659996 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 15 00:10:05.904260 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 00:10:06.373083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 00:10:06.373083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 00:10:06.379701 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 15 00:10:06.955125 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 00:10:08.761565 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 00:10:08.761565 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 00:10:08.824135 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 00:10:08.824135 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 00:10:08.836162 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 00:10:08.836162 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 00:10:08.836162 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 00:10:08.836162 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 00:10:08.836162 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 00:10:08.836162 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 00:10:09.078751 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 00:10:09.091465 ignition[958]: INFO : files: files passed
Jul 15 00:10:09.091465 ignition[958]: INFO : Ignition finished successfully
Jul 15 00:10:09.106494 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 00:10:09.120175 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 00:10:09.128445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 00:10:09.130951 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 00:10:09.131115 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 00:10:09.173433 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 00:10:09.177888 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 00:10:09.177888 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 00:10:09.185555 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 00:10:09.193135 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 00:10:09.196880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 00:10:09.211201 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 00:10:09.264503 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 00:10:09.264845 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 00:10:09.273525 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 00:10:09.276328 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 00:10:09.281120 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 00:10:09.301702 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 00:10:09.343994 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 00:10:09.360195 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 00:10:09.386503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 00:10:09.394256 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 00:10:09.398074 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 00:10:09.408267 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 00:10:09.409903 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 00:10:09.427251 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 00:10:09.431120 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 00:10:09.435988 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 00:10:09.440407 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 00:10:09.440809 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 00:10:09.451267 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 00:10:09.454051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 00:10:09.463777 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 00:10:09.465573 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 00:10:09.468631 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 00:10:09.471386 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 00:10:09.471624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 00:10:09.478802 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 00:10:09.480500 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 00:10:09.487728 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 00:10:09.487949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 00:10:09.490916 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 00:10:09.491147 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 00:10:09.502897 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 00:10:09.503147 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 00:10:09.509438 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 00:10:09.512026 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 00:10:09.520042 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 00:10:09.525382 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 00:10:09.526711 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 00:10:09.530334 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 00:10:09.530527 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 00:10:09.535973 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 00:10:09.537129 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 00:10:09.540649 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 00:10:09.540877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 00:10:09.544422 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 00:10:09.545907 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 00:10:09.569183 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 00:10:09.574825 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 00:10:09.583329 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 00:10:09.614575 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 00:10:09.620564 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 00:10:09.620944 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 00:10:09.628546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 00:10:09.629070 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 00:10:09.646425 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 00:10:09.661682 ignition[1014]: INFO : Ignition 2.20.0
Jul 15 00:10:09.661682 ignition[1014]: INFO : Stage: umount
Jul 15 00:10:09.661682 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 00:10:09.661682 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 00:10:09.661682 ignition[1014]: INFO : umount: umount passed
Jul 15 00:10:09.661682 ignition[1014]: INFO : Ignition finished successfully
Jul 15 00:10:09.646614 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 00:10:09.653168 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 00:10:09.653322 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 00:10:09.657902 systemd[1]: Stopped target network.target - Network.
Jul 15 00:10:09.669285 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 00:10:09.669430 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 00:10:09.682428 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 00:10:09.682536 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 00:10:09.690780 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 00:10:09.690904 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 00:10:09.712525 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 00:10:09.712648 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 00:10:09.718637 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 00:10:09.729834 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 00:10:09.744533 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 00:10:09.745558 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 00:10:09.745744 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 00:10:09.752460 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 00:10:09.752845 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 00:10:09.753071 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 00:10:09.771182 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 00:10:09.771654 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 00:10:09.772236 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 00:10:09.810599 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 00:10:09.810690 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 00:10:09.817336 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 00:10:09.817460 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 00:10:09.857121 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 00:10:09.858324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 00:10:09.858465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 00:10:09.860071 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 00:10:09.860140 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 00:10:09.874812 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 00:10:09.874952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 00:10:09.879346 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 00:10:09.879455 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 00:10:09.904338 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 00:10:09.914086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 00:10:09.914187 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 00:10:09.939614 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 00:10:09.939783 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 00:10:09.952370 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 00:10:09.953741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 00:10:09.963043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 00:10:09.963127 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 00:10:09.972623 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 00:10:09.972701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 00:10:09.975314 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 00:10:09.975421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 00:10:09.976322 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 00:10:09.978435 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 00:10:09.986147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 00:10:09.986265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 00:10:10.011922 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 00:10:10.014848 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 00:10:10.014959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 00:10:10.018419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 00:10:10.018499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 00:10:10.026743 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 00:10:10.026925 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 00:10:10.027681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 00:10:10.027850 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 00:10:10.044161 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 00:10:10.073644 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 00:10:10.094623 systemd[1]: Switching root.
Jul 15 00:10:10.135290 systemd-journald[194]: Journal stopped
Jul 15 00:10:12.785640 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jul 15 00:10:12.785720 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 00:10:12.785737 kernel: SELinux: policy capability open_perms=1
Jul 15 00:10:12.785757 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 00:10:12.785776 kernel: SELinux: policy capability always_check_network=0
Jul 15 00:10:12.785789 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 00:10:12.785807 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 00:10:12.785820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 00:10:12.785833 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 00:10:12.785846 kernel: audit: type=1403 audit(1752538210.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 00:10:12.785874 systemd[1]: Successfully loaded SELinux policy in 75.963ms.
Jul 15 00:10:12.785903 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 29.846ms.
Jul 15 00:10:12.785919 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 00:10:12.785933 systemd[1]: Detected virtualization kvm.
Jul 15 00:10:12.785947 systemd[1]: Detected architecture x86-64.
Jul 15 00:10:12.785960 systemd[1]: Detected first boot.
Jul 15 00:10:12.785975 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 00:10:12.785990 zram_generator::config[1062]: No configuration found.
Jul 15 00:10:12.786008 kernel: Guest personality initialized and is inactive
Jul 15 00:10:12.786029 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 00:10:12.786045 kernel: Initialized host personality
Jul 15 00:10:12.786058 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 00:10:12.786071 systemd[1]: Populated /etc with preset unit settings.
Jul 15 00:10:12.786086 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 00:10:12.786189 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 00:10:12.786206 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 00:10:12.786223 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 00:10:12.786239 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 00:10:12.786253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 00:10:12.786270 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 00:10:12.786284 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 00:10:12.786298 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 00:10:12.787238 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 00:10:12.787254 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 00:10:12.787268 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 00:10:12.787284 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 00:10:12.787298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 00:10:12.787320 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 00:10:12.787339 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 00:10:12.787360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 00:10:12.787374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 00:10:12.787388 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 00:10:12.787403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 00:10:12.787418 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 00:10:12.787434 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 00:10:12.787530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 00:10:12.787544 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 00:10:12.787558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 00:10:12.787660 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 00:10:12.787680 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 00:10:12.787695 systemd[1]: Reached target swap.target - Swaps.
Jul 15 00:10:12.787708 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 00:10:12.787722 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 00:10:12.787737 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 00:10:12.787755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 00:10:12.787769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 00:10:12.787788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 00:10:12.787802 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 00:10:12.787816 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 00:10:12.787829 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 00:10:12.787844 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 00:10:12.787874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:12.787891 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 00:10:12.787911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 00:10:12.787927 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 00:10:12.787945 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 00:10:12.787962 systemd[1]: Reached target machines.target - Containers.
Jul 15 00:10:12.787977 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 00:10:12.787991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 00:10:12.788005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 00:10:12.788021 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 00:10:12.788041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 00:10:12.788057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 00:10:12.788073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 00:10:12.788088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 00:10:12.788102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 00:10:12.788116 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 00:10:12.788130 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 00:10:12.788144 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 00:10:12.788158 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 00:10:12.788175 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 00:10:12.788189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 00:10:12.788203 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 00:10:12.788217 kernel: fuse: init (API version 7.39)
Jul 15 00:10:12.788231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 00:10:12.788245 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 00:10:12.788259 kernel: loop: module loaded
Jul 15 00:10:12.788279 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 00:10:12.788296 kernel: ACPI: bus type drm_connector registered
Jul 15 00:10:12.788370 systemd-journald[1137]: Collecting audit messages is disabled.
Jul 15 00:10:12.788396 systemd-journald[1137]: Journal started
Jul 15 00:10:12.788426 systemd-journald[1137]: Runtime Journal (/run/log/journal/0a770b986fd24f4f8712990623f5b785) is 6M, max 48.2M, 42.2M free.
Jul 15 00:10:11.988772 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 00:10:12.006737 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 15 00:10:12.007526 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 00:10:13.003796 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 00:10:13.016655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 00:10:13.019523 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 00:10:13.019596 systemd[1]: Stopped verity-setup.service.
Jul 15 00:10:13.022944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:13.031114 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 00:10:13.033392 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 00:10:13.034987 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 00:10:13.036602 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 00:10:13.038002 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 00:10:13.039599 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 00:10:13.041192 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 00:10:13.042893 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 00:10:13.044947 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 00:10:13.047354 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 00:10:13.047660 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 00:10:13.049713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 00:10:13.050038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 00:10:13.052188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 00:10:13.052505 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 00:10:13.054680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 00:10:13.056031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 00:10:13.058947 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 00:10:13.064615 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 00:10:13.071231 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 00:10:13.072505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 00:10:13.078653 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 00:10:13.081833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 00:10:13.084680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 00:10:13.087399 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 00:10:13.119816 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 00:10:13.132047 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 00:10:13.137749 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 00:10:13.139504 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 00:10:13.141343 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 00:10:13.145854 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 00:10:13.150792 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 00:10:13.156087 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 00:10:13.157689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 00:10:13.162095 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 00:10:13.168107 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 00:10:13.172096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 00:10:13.173930 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 00:10:13.179767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 00:10:13.184248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 00:10:13.195264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 00:10:13.243112 systemd-journald[1137]: Time spent on flushing to /var/log/journal/0a770b986fd24f4f8712990623f5b785 is 127.925ms for 1058 entries.
Jul 15 00:10:13.243112 systemd-journald[1137]: System Journal (/var/log/journal/0a770b986fd24f4f8712990623f5b785) is 8M, max 195.6M, 187.6M free.
Jul 15 00:10:13.471244 systemd-journald[1137]: Received client request to flush runtime journal.
Jul 15 00:10:13.471370 kernel: loop0: detected capacity change from 0 to 147912
Jul 15 00:10:13.204979 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 00:10:13.257476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 00:10:13.259979 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 00:10:13.262063 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 00:10:13.265499 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 00:10:13.289121 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 15 00:10:13.322585 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 15 00:10:13.341513 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 00:10:13.350004 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 00:10:13.417299 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 00:10:13.423836 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 00:10:13.475435 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 00:10:13.502061 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 00:10:13.512174 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 00:10:13.513333 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 00:10:13.565005 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 00:10:13.575416 kernel: loop1: detected capacity change from 0 to 138176
Jul 15 00:10:13.580112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 00:10:13.659441 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jul 15 00:10:13.660024 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Jul 15 00:10:13.675815 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 00:10:13.713083 kernel: loop2: detected capacity change from 0 to 221472
Jul 15 00:10:13.826908 kernel: loop3: detected capacity change from 0 to 147912
Jul 15 00:10:14.060908 kernel: loop4: detected capacity change from 0 to 138176
Jul 15 00:10:14.191625 kernel: loop5: detected capacity change from 0 to 221472
Jul 15 00:10:14.262305 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 15 00:10:14.267583 (sd-merge)[1207]: Merged extensions into '/usr'.
Jul 15 00:10:14.287182 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 00:10:14.287209 systemd[1]: Reloading...
Jul 15 00:10:14.464806 zram_generator::config[1232]: No configuration found.
Jul 15 00:10:14.665438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 00:10:14.772399 systemd[1]: Reloading finished in 483 ms.
Jul 15 00:10:14.817970 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 00:10:14.851277 systemd[1]: Starting ensure-sysext.service...
Jul 15 00:10:14.880260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 00:10:14.981746 systemd[1]: Reload requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Jul 15 00:10:14.981774 systemd[1]: Reloading...
Jul 15 00:10:15.134135 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 00:10:15.135552 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 00:10:15.137515 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 00:10:15.138342 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 15 00:10:15.138463 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 15 00:10:15.148314 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 00:10:15.149269 systemd-tmpfiles[1272]: Skipping /boot
Jul 15 00:10:15.170901 zram_generator::config[1302]: No configuration found.
Jul 15 00:10:15.172591 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 00:10:15.172815 systemd-tmpfiles[1272]: Skipping /boot
Jul 15 00:10:15.174963 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 00:10:15.493291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 00:10:15.593027 systemd[1]: Reloading finished in 610 ms.
Jul 15 00:10:15.610087 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 00:10:15.612045 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 00:10:15.658392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 00:10:15.689516 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 00:10:15.693943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 00:10:15.699259 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 00:10:15.707656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 00:10:15.717141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 00:10:15.722061 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 00:10:15.727636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:15.727990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 00:10:15.738260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 00:10:15.745372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 00:10:15.750294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 00:10:15.754525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 00:10:15.754762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 00:10:15.764319 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 00:10:15.767794 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:15.773202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 00:10:15.773589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 00:10:15.778699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 00:10:15.779987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 00:10:15.790521 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 00:10:15.790960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 00:10:15.795930 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 00:10:15.801247 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Jul 15 00:10:15.808734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:15.809356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 00:10:15.824179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 00:10:15.828317 augenrules[1376]: No rules
Jul 15 00:10:15.830533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 00:10:15.837983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 00:10:15.842138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 00:10:15.843373 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 00:10:15.847563 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 00:10:15.849493 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:15.854126 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 00:10:15.854581 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 00:10:15.862588 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 00:10:15.870320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 00:10:15.872643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 00:10:15.873015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 00:10:15.875458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 00:10:15.876128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 00:10:15.880182 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 00:10:15.880987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 00:10:15.890962 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 00:10:15.896000 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 00:10:15.923900 systemd[1]: Finished ensure-sysext.service.
Jul 15 00:10:15.930682 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 00:10:15.941963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:15.954517 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 00:10:15.958953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 00:10:15.977161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 00:10:15.984100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 00:10:16.013661 augenrules[1416]: /sbin/augenrules: No change
Jul 15 00:10:15.993338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 00:10:16.016411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 00:10:16.018238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 00:10:16.018327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 00:10:16.030235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 00:10:16.037224 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 15 00:10:16.037825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 00:10:16.037890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 00:10:16.038916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 00:10:16.040363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 00:10:16.042354 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 00:10:16.043729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 00:10:16.045655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 00:10:16.047154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 00:10:16.049267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 00:10:16.049563 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 00:10:16.070882 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 00:10:16.080081 augenrules[1444]: No rules
Jul 15 00:10:16.070970 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 00:10:16.108340 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 00:10:16.108735 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 00:10:16.113789 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 00:10:16.136216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1408)
Jul 15 00:10:16.127112 systemd-resolved[1349]: Positive Trust Anchors:
Jul 15 00:10:16.127135 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 00:10:16.127182 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 00:10:16.143088 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Jul 15 00:10:16.167635 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 00:10:16.169454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 00:10:16.385097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 15 00:10:16.390926 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 15 00:10:16.391355 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 15 00:10:16.391604 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 15 00:10:16.391934 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 15 00:10:16.424264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 00:10:16.544350 kernel: ACPI: button: Power Button [PWRF]
Jul 15 00:10:16.554686 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 00:10:16.561095 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 15 00:10:16.591383 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 15 00:10:16.591838 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 00:10:16.605078 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 00:10:16.622314 systemd-networkd[1435]: lo: Link UP
Jul 15 00:10:16.622333 systemd-networkd[1435]: lo: Gained carrier
Jul 15 00:10:16.625270 systemd-networkd[1435]: Enumeration completed
Jul 15 00:10:16.625388 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 00:10:16.627029 systemd[1]: Reached target network.target - Network.
Jul 15 00:10:16.629060 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 00:10:16.629068 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 00:10:16.633545 systemd-networkd[1435]: eth0: Link UP
Jul 15 00:10:16.633552 systemd-networkd[1435]: eth0: Gained carrier
Jul 15 00:10:16.633581 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 00:10:16.677986 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 00:10:16.686173 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 00:10:16.729074 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 00:10:16.729983 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection.
Jul 15 00:10:16.731010 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 15 00:10:16.731080 systemd-timesyncd[1438]: Initial clock synchronization to Tue 2025-07-15 00:10:16.578189 UTC.
Jul 15 00:10:16.783339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 00:10:16.788203 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 00:10:16.837892 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 00:10:16.882682 kernel: kvm_amd: TSC scaling supported
Jul 15 00:10:16.882785 kernel: kvm_amd: Nested Virtualization enabled
Jul 15 00:10:16.882803 kernel: kvm_amd: Nested Paging enabled
Jul 15 00:10:16.887660 kernel: kvm_amd: LBR virtualization supported
Jul 15 00:10:16.887762 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 15 00:10:16.888375 kernel: kvm_amd: Virtual GIF supported
Jul 15 00:10:16.949483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 00:10:17.033438 kernel: EDAC MC: Ver: 3.0.0
Jul 15 00:10:17.070800 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 15 00:10:17.084307 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 15 00:10:17.103682 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 15 00:10:17.147632 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 15 00:10:17.151045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 00:10:17.154740 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 00:10:17.156351 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 00:10:17.158242 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 00:10:17.161649 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 00:10:17.165228 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 00:10:17.171491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 00:10:17.182747 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 00:10:17.182818 systemd[1]: Reached target paths.target - Path Units.
Jul 15 00:10:17.187419 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 00:10:17.190233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 00:10:17.194111 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 00:10:17.200538 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 00:10:17.205693 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 00:10:17.211164 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 00:10:17.224988 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 00:10:17.227924 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 00:10:17.240154 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 15 00:10:17.244242 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 00:10:17.252673 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 00:10:17.258435 systemd[1]: Reached target basic.target - Basic System.
Jul 15 00:10:17.260043 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 00:10:17.260092 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 00:10:17.275462 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 00:10:17.303238 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 15 00:10:17.300306 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 00:10:17.310248 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 00:10:17.322946 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 00:10:17.327159 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 00:10:17.332072 jq[1484]: false
Jul 15 00:10:17.332369 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 00:10:17.348010 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 00:10:17.354569 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 00:10:17.359493 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 00:10:17.365333 dbus-daemon[1483]: [system] SELinux support is enabled
Jul 15 00:10:17.374025 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 00:10:17.380728 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 00:10:17.383606 extend-filesystems[1485]: Found loop3
Jul 15 00:10:17.384609 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found loop4
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found loop5
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found sr0
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda1
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda2
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda3
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found usr
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda4
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda6
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda7
Jul 15 00:10:17.390794 extend-filesystems[1485]: Found vda9
Jul 15 00:10:17.390794 extend-filesystems[1485]: Checking size of /dev/vda9
Jul 15 00:10:17.386327 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 00:10:17.403101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 00:10:17.408545 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 00:10:17.413328 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 15 00:10:17.422040 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 00:10:17.424105 jq[1501]: true
Jul 15 00:10:17.422404 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 00:10:17.422875 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 00:10:17.423193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 00:10:17.430042 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 00:10:17.430532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 00:10:17.434175 update_engine[1495]: I20250715 00:10:17.434036 1495 main.cc:92] Flatcar Update Engine starting
Jul 15 00:10:17.436596 update_engine[1495]: I20250715 00:10:17.436539 1495 update_check_scheduler.cc:74] Next update check in 11m42s
Jul 15 00:10:17.448485 extend-filesystems[1485]: Resized partition /dev/vda9
Jul 15 00:10:17.460275 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024)
Jul 15 00:10:17.467767 jq[1506]: true
Jul 15 00:10:17.464522 (ntainerd)[1516]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 00:10:17.480046 tar[1505]: linux-amd64/helm
Jul 15 00:10:17.476741 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 00:10:17.479954 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 00:10:17.480003 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 00:10:17.484159 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 00:10:17.484186 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 00:10:17.504186 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 00:10:17.504878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1390)
Jul 15 00:10:17.590908 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 15 00:10:17.835448 systemd-networkd[1435]: eth0: Gained IPv6LL
Jul 15 00:10:17.849182 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 00:10:17.852828 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 15 00:10:17.852920 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 00:10:17.854389 systemd-logind[1492]: New seat seat0.
Jul 15 00:10:17.865608 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 00:10:17.879772 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 00:10:17.881234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 00:10:17.993550 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 00:10:17.884556 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 00:10:17.990657 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 15 00:10:17.998900 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 00:10:17.999377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:18.006357 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 00:10:18.074816 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 00:10:18.097620 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 00:10:18.127903 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 00:10:18.127903 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 00:10:18.127903 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 00:10:18.140060 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:60894.service - OpenSSH per-connection server daemon (10.0.0.1:60894).
Jul 15 00:10:18.141650 extend-filesystems[1485]: Resized filesystem in /dev/vda9
Jul 15 00:10:18.149041 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 00:10:18.151511 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 00:10:18.277771 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 00:10:18.278207 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 00:10:18.372630 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 00:10:18.427500 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 00:10:18.430706 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 15 00:10:18.431095 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 15 00:10:18.437686 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 00:10:18.478308 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 00:10:18.514491 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 00:10:18.525714 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 15 00:10:18.562931 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 00:10:18.625650 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 00:10:18.629323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 00:10:18.632324 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 15 00:10:18.902209 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 60894 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:18.905498 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:18.925636 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 00:10:19.083807 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 00:10:19.119615 systemd-logind[1492]: New session 1 of user core.
Jul 15 00:10:19.229050 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 00:10:19.241742 containerd[1516]: time="2025-07-15T00:10:19.241621648Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 15 00:10:19.254198 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 00:10:19.294600 (systemd)[1592]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 00:10:19.298913 systemd-logind[1492]: New session c1 of user core.
Jul 15 00:10:19.328705 containerd[1516]: time="2025-07-15T00:10:19.328618480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.346953 containerd[1516]: time="2025-07-15T00:10:19.346837783Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 15 00:10:19.346953 containerd[1516]: time="2025-07-15T00:10:19.346923840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 15 00:10:19.346953 containerd[1516]: time="2025-07-15T00:10:19.346951434Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 15 00:10:19.347277 containerd[1516]: time="2025-07-15T00:10:19.347238600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 15 00:10:19.347277 containerd[1516]: time="2025-07-15T00:10:19.347267705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.347407 containerd[1516]: time="2025-07-15T00:10:19.347369982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 00:10:19.347407 containerd[1516]: time="2025-07-15T00:10:19.347393201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.347873 containerd[1516]: time="2025-07-15T00:10:19.347811613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 00:10:19.347873 containerd[1516]: time="2025-07-15T00:10:19.347837596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.347945 containerd[1516]: time="2025-07-15T00:10:19.347879149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 00:10:19.347945 containerd[1516]: time="2025-07-15T00:10:19.347893119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.348065 containerd[1516]: time="2025-07-15T00:10:19.348027768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.348405 containerd[1516]: time="2025-07-15T00:10:19.348368897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 15 00:10:19.348807 containerd[1516]: time="2025-07-15T00:10:19.348775629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 00:10:19.348918 containerd[1516]: time="2025-07-15T00:10:19.348899339Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 15 00:10:19.349140 containerd[1516]: time="2025-07-15T00:10:19.349119650Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 15 00:10:19.349288 containerd[1516]: time="2025-07-15T00:10:19.349269040Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 00:10:19.375751 containerd[1516]: time="2025-07-15T00:10:19.375690773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 15 00:10:19.376009 containerd[1516]: time="2025-07-15T00:10:19.375988346Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 15 00:10:19.376085 containerd[1516]: time="2025-07-15T00:10:19.376069073Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 15 00:10:19.380816 containerd[1516]: time="2025-07-15T00:10:19.380766519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 15 00:10:19.380977 containerd[1516]: time="2025-07-15T00:10:19.380952426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 15 00:10:19.381530 containerd[1516]: time="2025-07-15T00:10:19.381506157Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 15 00:10:19.381941 containerd[1516]: time="2025-07-15T00:10:19.381915346Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 15 00:10:19.382414 containerd[1516]: time="2025-07-15T00:10:19.382387868Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 15 00:10:19.382505 containerd[1516]: time="2025-07-15T00:10:19.382483876Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 15 00:10:19.382588 containerd[1516]: time="2025-07-15T00:10:19.382568651Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 15 00:10:19.382682 containerd[1516]: time="2025-07-15T00:10:19.382661776Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.382760 containerd[1516]: time="2025-07-15T00:10:19.382741229Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.382834 containerd[1516]: time="2025-07-15T00:10:19.382816220Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.382935 containerd[1516]: time="2025-07-15T00:10:19.382916512Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.383028 containerd[1516]: time="2025-07-15T00:10:19.383004771Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.383194 containerd[1516]: time="2025-07-15T00:10:19.383170904Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.383282 containerd[1516]: time="2025-07-15T00:10:19.383261690Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.383380 containerd[1516]: time="2025-07-15T00:10:19.383359209Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 15 00:10:19.383474 containerd[1516]: time="2025-07-15T00:10:19.383453352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.383558 containerd[1516]: time="2025-07-15T00:10:19.383538244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.383654 containerd[1516]: time="2025-07-15T00:10:19.383632278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.383753 containerd[1516]: time="2025-07-15T00:10:19.383731239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.383949 containerd[1516]: time="2025-07-15T00:10:19.383924500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384128 containerd[1516]: time="2025-07-15T00:10:19.384104079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384217 containerd[1516]: time="2025-07-15T00:10:19.384196771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384325 containerd[1516]: time="2025-07-15T00:10:19.384300134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384418 containerd[1516]: time="2025-07-15T00:10:19.384396952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384506 containerd[1516]: time="2025-07-15T00:10:19.384486001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384589 containerd[1516]: time="2025-07-15T00:10:19.384569284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384674 containerd[1516]: time="2025-07-15T00:10:19.384653752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384758 containerd[1516]: time="2025-07-15T00:10:19.384738319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.384882 containerd[1516]: time="2025-07-15T00:10:19.384825640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 15 00:10:19.385013 containerd[1516]: time="2025-07-15T00:10:19.384990015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.385212 containerd[1516]: time="2025-07-15T00:10:19.385187551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.385292 containerd[1516]: time="2025-07-15T00:10:19.385274891Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387166347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387221405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387239847Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387254181Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387266956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387282920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387297511Z" level=info msg="NRI interface is disabled by configuration."
Jul 15 00:10:19.387895 containerd[1516]: time="2025-07-15T00:10:19.387311421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 15 00:10:19.390313 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.388346755Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.388420078Z" level=info msg="Connect containerd service"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.388469479Z" level=info msg="using legacy CRI server"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.388478739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.388640507Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.389519108Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.389839979Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.389934063Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.389989506Z" level=info msg="Start subscribing containerd event"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.390021660Z" level=info msg="Start recovering state"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.390088773Z" level=info msg="Start event monitor"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.390110769Z" level=info msg="Start snapshots syncer"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.390122467Z" level=info msg="Start cni network conf syncer for default"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.390130622Z" level=info msg="Start streaming server"
Jul 15 00:10:19.393245 containerd[1516]: time="2025-07-15T00:10:19.390641783Z" level=info msg="containerd successfully booted in 0.154831s"
Jul 15 00:10:19.683013 systemd[1592]: Queued start job for default target default.target.
Jul 15 00:10:19.702474 systemd[1592]: Created slice app.slice - User Application Slice.
Jul 15 00:10:19.702517 systemd[1592]: Reached target paths.target - Paths.
Jul 15 00:10:19.702588 systemd[1592]: Reached target timers.target - Timers.
Jul 15 00:10:19.704987 systemd[1592]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 00:10:19.736297 systemd[1592]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 00:10:19.736505 systemd[1592]: Reached target sockets.target - Sockets.
Jul 15 00:10:19.736577 systemd[1592]: Reached target basic.target - Basic System.
Jul 15 00:10:19.736639 systemd[1592]: Reached target default.target - Main User Target.
Jul 15 00:10:19.736686 systemd[1592]: Startup finished in 400ms.
Jul 15 00:10:19.737392 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 00:10:19.791042 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 00:10:19.952386 tar[1505]: linux-amd64/LICENSE
Jul 15 00:10:19.952386 tar[1505]: linux-amd64/README.md
Jul 15 00:10:20.129821 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:60898.service - OpenSSH per-connection server daemon (10.0.0.1:60898).
Jul 15 00:10:20.143076 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 00:10:20.237878 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:20.240490 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:20.258276 systemd-logind[1492]: New session 2 of user core.
Jul 15 00:10:20.268182 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 00:10:20.343108 sshd[1610]: Connection closed by 10.0.0.1 port 60898
Jul 15 00:10:20.345649 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:20.378646 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:60898.service: Deactivated successfully.
Jul 15 00:10:20.386499 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 00:10:20.392950 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
Jul 15 00:10:20.432687 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:35450.service - OpenSSH per-connection server daemon (10.0.0.1:35450).
Jul 15 00:10:20.450238 systemd-logind[1492]: Removed session 2.
Jul 15 00:10:20.504783 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 35450 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:20.509460 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:20.519073 systemd-logind[1492]: New session 3 of user core.
Jul 15 00:10:20.531169 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 00:10:20.661431 sshd[1618]: Connection closed by 10.0.0.1 port 35450
Jul 15 00:10:20.666713 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:20.860563 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:35450.service: Deactivated successfully.
Jul 15 00:10:20.864089 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 00:10:20.866397 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
Jul 15 00:10:20.870540 systemd-logind[1492]: Removed session 3.
Jul 15 00:10:22.117127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:22.126124 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 00:10:22.129389 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 00:10:22.135697 systemd[1]: Startup finished in 2.674s (kernel) + 12.735s (initrd) + 11.548s (userspace) = 26.957s.
Jul 15 00:10:24.048712 kubelet[1626]: E0715 00:10:24.048491 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 00:10:24.056898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 00:10:24.057198 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 00:10:24.057741 systemd[1]: kubelet.service: Consumed 3.881s CPU time, 269.7M memory peak.
Jul 15 00:10:30.626328 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:47408.service - OpenSSH per-connection server daemon (10.0.0.1:47408).
Jul 15 00:10:30.712631 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 47408 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:30.714677 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:30.728297 systemd-logind[1492]: New session 4 of user core.
Jul 15 00:10:30.740497 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 00:10:30.828075 sshd[1643]: Connection closed by 10.0.0.1 port 47408
Jul 15 00:10:30.828467 sshd-session[1641]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:30.858703 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:47408.service: Deactivated successfully.
Jul 15 00:10:30.867950 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 00:10:30.880516 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit.
Jul 15 00:10:30.903368 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:47424.service - OpenSSH per-connection server daemon (10.0.0.1:47424).
Jul 15 00:10:30.905420 systemd-logind[1492]: Removed session 4.
Jul 15 00:10:30.971261 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 47424 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:30.977054 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:30.994146 systemd-logind[1492]: New session 5 of user core.
Jul 15 00:10:31.003251 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 00:10:31.063004 sshd[1651]: Connection closed by 10.0.0.1 port 47424
Jul 15 00:10:31.065283 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:31.095211 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:47424.service: Deactivated successfully.
Jul 15 00:10:31.103406 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 00:10:31.113752 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit.
Jul 15 00:10:31.131998 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:47432.service - OpenSSH per-connection server daemon (10.0.0.1:47432).
Jul 15 00:10:31.140040 systemd-logind[1492]: Removed session 5.
Jul 15 00:10:31.190149 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 47432 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:31.191937 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:31.209234 systemd-logind[1492]: New session 6 of user core.
Jul 15 00:10:31.218172 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 00:10:31.287653 sshd[1659]: Connection closed by 10.0.0.1 port 47432
Jul 15 00:10:31.288188 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:31.310565 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:47432.service: Deactivated successfully.
Jul 15 00:10:31.317635 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 00:10:31.321041 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Jul 15 00:10:31.334381 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:47446.service - OpenSSH per-connection server daemon (10.0.0.1:47446).
Jul 15 00:10:31.336304 systemd-logind[1492]: Removed session 6.
Jul 15 00:10:31.397008 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 47446 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:31.399316 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:31.414488 systemd-logind[1492]: New session 7 of user core.
Jul 15 00:10:31.425200 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 00:10:31.541894 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 00:10:31.542471 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 00:10:31.580975 sudo[1668]: pam_unix(sudo:session): session closed for user root
Jul 15 00:10:31.583411 sshd[1667]: Connection closed by 10.0.0.1 port 47446
Jul 15 00:10:31.585665 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:31.627934 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:47446.service: Deactivated successfully.
Jul 15 00:10:31.636673 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 00:10:31.641080 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Jul 15 00:10:31.657474 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:47448.service - OpenSSH per-connection server daemon (10.0.0.1:47448).
Jul 15 00:10:31.663159 systemd-logind[1492]: Removed session 7.
Jul 15 00:10:31.734105 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 47448 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:31.738579 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:31.748255 systemd-logind[1492]: New session 8 of user core.
Jul 15 00:10:31.762221 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 00:10:31.829830 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 00:10:31.830338 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 00:10:31.837878 sudo[1678]: pam_unix(sudo:session): session closed for user root
Jul 15 00:10:31.852557 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 00:10:31.853087 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 00:10:31.878530 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 00:10:31.926920 augenrules[1700]: No rules
Jul 15 00:10:31.929330 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 00:10:31.929756 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 00:10:31.931345 sudo[1677]: pam_unix(sudo:session): session closed for user root
Jul 15 00:10:31.935711 sshd[1676]: Connection closed by 10.0.0.1 port 47448
Jul 15 00:10:31.936183 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Jul 15 00:10:31.951053 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:47448.service: Deactivated successfully.
Jul 15 00:10:31.956743 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 00:10:31.962760 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Jul 15 00:10:31.972323 systemd-logind[1492]: Removed session 8.
Jul 15 00:10:31.989672 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:47464.service - OpenSSH per-connection server daemon (10.0.0.1:47464).
Jul 15 00:10:32.049910 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 47464 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:10:32.054222 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:10:32.062165 systemd-logind[1492]: New session 9 of user core.
Jul 15 00:10:32.072188 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 00:10:32.136097 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 00:10:32.136589 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 00:10:32.689389 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 00:10:32.694027 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 00:10:33.174210 dockerd[1733]: time="2025-07-15T00:10:33.174118512Z" level=info msg="Starting up"
Jul 15 00:10:33.312451 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport931675727-merged.mount: Deactivated successfully.
Jul 15 00:10:33.511396 dockerd[1733]: time="2025-07-15T00:10:33.511219873Z" level=info msg="Loading containers: start."
Jul 15 00:10:33.999899 kernel: Initializing XFRM netlink socket
Jul 15 00:10:34.079263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 00:10:34.093687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:34.241127 systemd-networkd[1435]: docker0: Link UP
Jul 15 00:10:34.387316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:34.396935 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 00:10:34.607916 kubelet[1885]: E0715 00:10:34.605806 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 00:10:34.627173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 00:10:34.627434 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 00:10:34.628110 systemd[1]: kubelet.service: Consumed 369ms CPU time, 110.7M memory peak.
Jul 15 00:10:34.643847 dockerd[1733]: time="2025-07-15T00:10:34.641643714Z" level=info msg="Loading containers: done."
Jul 15 00:10:34.677314 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck320437015-merged.mount: Deactivated successfully.
Jul 15 00:10:34.685645 dockerd[1733]: time="2025-07-15T00:10:34.683889995Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 00:10:34.685645 dockerd[1733]: time="2025-07-15T00:10:34.684030082Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 15 00:10:34.685645 dockerd[1733]: time="2025-07-15T00:10:34.684185037Z" level=info msg="Daemon has completed initialization"
Jul 15 00:10:34.835855 dockerd[1733]: time="2025-07-15T00:10:34.835295791Z" level=info msg="API listen on /run/docker.sock"
Jul 15 00:10:34.837460 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 00:10:35.946828 containerd[1516]: time="2025-07-15T00:10:35.946747843Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 15 00:10:36.826056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816249614.mount: Deactivated successfully.
Jul 15 00:10:38.084342 containerd[1516]: time="2025-07-15T00:10:38.084268020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:38.084948 containerd[1516]: time="2025-07-15T00:10:38.084882481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 15 00:10:38.086305 containerd[1516]: time="2025-07-15T00:10:38.086270811Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:38.089395 containerd[1516]: time="2025-07-15T00:10:38.089353269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:38.090489 containerd[1516]: time="2025-07-15T00:10:38.090446356Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.143637848s"
Jul 15 00:10:38.090557 containerd[1516]: time="2025-07-15T00:10:38.090492129Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 15 00:10:38.091162 containerd[1516]: time="2025-07-15T00:10:38.091094583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 15 00:10:39.418962 containerd[1516]: time="2025-07-15T00:10:39.418852215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:39.419787 containerd[1516]: time="2025-07-15T00:10:39.419700809Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 15 00:10:39.421259 containerd[1516]: time="2025-07-15T00:10:39.421204088Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:39.425330 containerd[1516]: time="2025-07-15T00:10:39.424180521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:39.425385 containerd[1516]: time="2025-07-15T00:10:39.425345829Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.334219301s"
Jul 15 00:10:39.425385 containerd[1516]: time="2025-07-15T00:10:39.425379248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 15 00:10:39.426015 containerd[1516]: time="2025-07-15T00:10:39.425983319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 15 00:10:40.678899 containerd[1516]: time="2025-07-15T00:10:40.678799533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:40.680268 containerd[1516]: time="2025-07-15T00:10:40.680221322Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 15 00:10:40.681988 containerd[1516]: time="2025-07-15T00:10:40.681822879Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:40.686050 containerd[1516]: time="2025-07-15T00:10:40.686013201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:40.687088 containerd[1516]: time="2025-07-15T00:10:40.687041359Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.261023047s"
Jul 15 00:10:40.687088 containerd[1516]: time="2025-07-15T00:10:40.687079277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 15 00:10:40.687784 containerd[1516]: time="2025-07-15T00:10:40.687754906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 15 00:10:41.785054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918662955.mount: Deactivated successfully.
Jul 15 00:10:42.613412 containerd[1516]: time="2025-07-15T00:10:42.613318997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:42.614173 containerd[1516]: time="2025-07-15T00:10:42.614093685Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 15 00:10:42.615455 containerd[1516]: time="2025-07-15T00:10:42.615421236Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:42.617608 containerd[1516]: time="2025-07-15T00:10:42.617555383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:42.618138 containerd[1516]: time="2025-07-15T00:10:42.618099835Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.930306508s"
Jul 15 00:10:42.618138 containerd[1516]: time="2025-07-15T00:10:42.618135658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 15 00:10:42.618712 containerd[1516]: time="2025-07-15T00:10:42.618685827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 00:10:43.092657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381112579.mount: Deactivated successfully.
Jul 15 00:10:43.792374 containerd[1516]: time="2025-07-15T00:10:43.792307180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:43.793021 containerd[1516]: time="2025-07-15T00:10:43.792947361Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 15 00:10:43.794276 containerd[1516]: time="2025-07-15T00:10:43.794204595Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:43.797040 containerd[1516]: time="2025-07-15T00:10:43.796984080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:43.798111 containerd[1516]: time="2025-07-15T00:10:43.798057987Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.17934048s"
Jul 15 00:10:43.798167 containerd[1516]: time="2025-07-15T00:10:43.798109143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 15 00:10:43.798679 containerd[1516]: time="2025-07-15T00:10:43.798653511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 00:10:44.265327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175245651.mount: Deactivated successfully.
Jul 15 00:10:44.272105 containerd[1516]: time="2025-07-15T00:10:44.272042108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:44.272852 containerd[1516]: time="2025-07-15T00:10:44.272776914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 15 00:10:44.274126 containerd[1516]: time="2025-07-15T00:10:44.274061878Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:44.276202 containerd[1516]: time="2025-07-15T00:10:44.276152285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:44.277002 containerd[1516]: time="2025-07-15T00:10:44.276966230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 478.282458ms"
Jul 15 00:10:44.277051 containerd[1516]: time="2025-07-15T00:10:44.277008856Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 00:10:44.277611 containerd[1516]: time="2025-07-15T00:10:44.277587492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 00:10:44.745887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 00:10:44.759240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:44.792107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294713713.mount: Deactivated successfully.
Jul 15 00:10:44.982774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:44.988036 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 00:10:45.189272 kubelet[2085]: E0715 00:10:45.189124 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 00:10:45.193666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 00:10:45.193950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 00:10:45.194465 systemd[1]: kubelet.service: Consumed 225ms CPU time, 111.3M memory peak.
Jul 15 00:10:48.145085 containerd[1516]: time="2025-07-15T00:10:48.144990491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:48.308876 containerd[1516]: time="2025-07-15T00:10:48.308744760Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 15 00:10:48.395547 containerd[1516]: time="2025-07-15T00:10:48.395367216Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:48.442234 containerd[1516]: time="2025-07-15T00:10:48.442144050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:10:48.444093 containerd[1516]: time="2025-07-15T00:10:48.444039779Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.166398437s"
Jul 15 00:10:48.444200 containerd[1516]: time="2025-07-15T00:10:48.444092642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 15 00:10:51.036136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:51.036352 systemd[1]: kubelet.service: Consumed 225ms CPU time, 111.3M memory peak.
Jul 15 00:10:51.059206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:51.091065 systemd[1]: Reload requested from client PID 2173 ('systemctl') (unit session-9.scope)...
Jul 15 00:10:51.091084 systemd[1]: Reloading...
Jul 15 00:10:51.186910 zram_generator::config[2217]: No configuration found.
Jul 15 00:10:51.471571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 00:10:51.596936 systemd[1]: Reloading finished in 505 ms.
Jul 15 00:10:51.658485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:51.663294 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 00:10:51.666904 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:51.667907 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 00:10:51.668214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:51.668264 systemd[1]: kubelet.service: Consumed 171ms CPU time, 100.1M memory peak.
Jul 15 00:10:51.671216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:51.846633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:51.851769 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 00:10:51.889646 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 00:10:51.889646 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 00:10:51.889646 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 00:10:51.890127 kubelet[2272]: I0715 00:10:51.889682 2272 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 00:10:52.097505 kubelet[2272]: I0715 00:10:52.097320 2272 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 00:10:52.097505 kubelet[2272]: I0715 00:10:52.097370 2272 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 00:10:52.097717 kubelet[2272]: I0715 00:10:52.097687 2272 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 00:10:52.125055 kubelet[2272]: E0715 00:10:52.124643 2272 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 15 00:10:52.126306 kubelet[2272]: I0715 00:10:52.126241 2272 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 00:10:52.137213 kubelet[2272]: E0715 00:10:52.137092 2272 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 00:10:52.137213 kubelet[2272]: I0715 00:10:52.137138 2272 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 00:10:52.145847 kubelet[2272]: I0715 00:10:52.144601 2272 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 00:10:52.145847 kubelet[2272]: I0715 00:10:52.144807 2272 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 00:10:52.145847 kubelet[2272]: I0715 00:10:52.145029 2272 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 00:10:52.145847 kubelet[2272]: I0715 00:10:52.145080 2272 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 00:10:52.146209 kubelet[2272]: I0715 00:10:52.145479 2272 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 00:10:52.146209 kubelet[2272]: I0715 00:10:52.145491 2272 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 00:10:52.146209 kubelet[2272]: I0715 00:10:52.145680 2272 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 00:10:52.152420 kubelet[2272]: I0715 00:10:52.152321 2272 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 00:10:52.152420 kubelet[2272]: I0715 00:10:52.152428 2272 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 00:10:52.152654 kubelet[2272]: I0715 00:10:52.152522 2272 kubelet.go:314] "Adding apiserver pod source"
Jul 15 00:10:52.152654 kubelet[2272]: I0715 00:10:52.152562 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 00:10:52.155245 kubelet[2272]: W0715 00:10:52.154365 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 15 00:10:52.155245 kubelet[2272]: E0715 00:10:52.154462 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 15 00:10:52.156084 kubelet[2272]: W0715 00:10:52.155961 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 15 00:10:52.156084 kubelet[2272]: E0715 00:10:52.156028 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 15 00:10:52.157980 kubelet[2272]: I0715 00:10:52.157220 2272 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 15 00:10:52.157980 kubelet[2272]: I0715 00:10:52.157801 2272 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 00:10:52.158488 kubelet[2272]: W0715 00:10:52.158442 2272 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 00:10:52.160639 kubelet[2272]: I0715 00:10:52.160591 2272 server.go:1274] "Started kubelet"
Jul 15 00:10:52.160750 kubelet[2272]: I0715 00:10:52.160709 2272 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 00:10:52.161262 kubelet[2272]: I0715 00:10:52.161182 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 00:10:52.162204 kubelet[2272]: I0715 00:10:52.161669 2272 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 00:10:52.162204 kubelet[2272]: I0715 00:10:52.162031 2272 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 00:10:52.163000 kubelet[2272]: I0715 00:10:52.162440 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 00:10:52.166486 kubelet[2272]: I0715 00:10:52.165679 2272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 00:10:52.167796 kubelet[2272]: E0715 00:10:52.167449 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.167796 kubelet[2272]: I0715 00:10:52.167780 2272 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 00:10:52.168131 kubelet[2272]: I0715 00:10:52.168105 2272 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 00:10:52.168248 kubelet[2272]: I0715 00:10:52.168232 2272 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 00:10:52.168822 kubelet[2272]: W0715 00:10:52.168757 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 15 00:10:52.169440 kubelet[2272]: E0715 00:10:52.168833 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 15 00:10:52.169440 kubelet[2272]: E0715 00:10:52.169055 2272 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 00:10:52.169440 kubelet[2272]: E0715 00:10:52.169160 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms"
Jul 15 00:10:52.171929 kubelet[2272]: I0715 00:10:52.171846 2272 factory.go:221] Registration of the containerd container factory successfully
Jul 15 00:10:52.172083 kubelet[2272]: I0715 00:10:52.171939 2272 factory.go:221] Registration of the systemd container factory successfully
Jul 15 00:10:52.172083 kubelet[2272]: I0715 00:10:52.172043 2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 00:10:52.173668 kubelet[2272]: E0715 00:10:52.171751 2272 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852444147501ac9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 00:10:52.160539337 +0000 UTC m=+0.304943505,LastTimestamp:2025-07-15 00:10:52.160539337 +0000 UTC m=+0.304943505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 00:10:52.192004 kubelet[2272]: I0715 00:10:52.191922 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 00:10:52.194773 kubelet[2272]: I0715 00:10:52.194220 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 00:10:52.194773 kubelet[2272]: I0715 00:10:52.194271 2272 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 00:10:52.194773 kubelet[2272]: I0715 00:10:52.194302 2272 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 00:10:52.194773 kubelet[2272]: E0715 00:10:52.194355 2272 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 00:10:52.195759 kubelet[2272]: W0715 00:10:52.195704 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
Jul 15 00:10:52.195822 kubelet[2272]: E0715 00:10:52.195774 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 15 00:10:52.201731 kubelet[2272]: I0715 00:10:52.201697 2272 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 00:10:52.201731 kubelet[2272]: I0715 00:10:52.201721 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 00:10:52.202017 kubelet[2272]: I0715 00:10:52.201748 2272 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 00:10:52.268241 kubelet[2272]: E0715 00:10:52.268122 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.294836 kubelet[2272]: E0715 00:10:52.294738 2272 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 00:10:52.369234 kubelet[2272]: E0715 00:10:52.368811 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.371273 kubelet[2272]: E0715 00:10:52.370565 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms"
Jul 15 00:10:52.469781 kubelet[2272]: E0715 00:10:52.469673 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.495402 kubelet[2272]: E0715 00:10:52.495243 2272 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 00:10:52.571076 kubelet[2272]: E0715 00:10:52.570970 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.671828 kubelet[2272]: E0715 00:10:52.671602 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.691975 kubelet[2272]: I0715 00:10:52.691898 2272 policy_none.go:49] "None policy: Start"
Jul 15 00:10:52.693512 kubelet[2272]: I0715 00:10:52.693041 2272 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 00:10:52.693512 kubelet[2272]: I0715 00:10:52.693073 2272 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 00:10:52.723365 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 00:10:52.756083 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 00:10:52.761585 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 00:10:52.771957 kubelet[2272]: E0715 00:10:52.771815 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 00:10:52.772327 kubelet[2272]: E0715 00:10:52.772276 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms"
Jul 15 00:10:52.775892 kubelet[2272]: I0715 00:10:52.775814 2272 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 00:10:52.776327 kubelet[2272]: I0715 00:10:52.776283 2272 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 00:10:52.776479 kubelet[2272]: I0715 00:10:52.776329 2272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 00:10:52.776989 kubelet[2272]: I0715 00:10:52.776829 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 00:10:52.787014 kubelet[2272]: E0715 00:10:52.785479 2272 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 15 00:10:52.878712 kubelet[2272]: I0715 00:10:52.878652 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 00:10:52.879896 kubelet[2272]: E0715 00:10:52.879798 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost"
Jul 15 00:10:52.927578 systemd[1]: Created slice kubepods-burstable-podbbfee2a0d8ae0565593e570ca7e09ef6.slice - libcontainer container kubepods-burstable-podbbfee2a0d8ae0565593e570ca7e09ef6.slice.
Jul 15 00:10:52.960533 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice.
Jul 15 00:10:52.974334 kubelet[2272]: I0715 00:10:52.974132 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bbfee2a0d8ae0565593e570ca7e09ef6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bbfee2a0d8ae0565593e570ca7e09ef6\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 00:10:52.974334 kubelet[2272]: I0715 00:10:52.974206 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 00:10:52.974334 kubelet[2272]: I0715 00:10:52.974236 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 00:10:52.974334 kubelet[2272]: I0715 00:10:52.974273 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") "
pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:52.974334 kubelet[2272]: I0715 00:10:52.974300 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 00:10:52.975078 kubelet[2272]: I0715 00:10:52.974328 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bbfee2a0d8ae0565593e570ca7e09ef6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bbfee2a0d8ae0565593e570ca7e09ef6\") " pod="kube-system/kube-apiserver-localhost" Jul 15 00:10:52.975078 kubelet[2272]: I0715 00:10:52.974348 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:52.975078 kubelet[2272]: I0715 00:10:52.974371 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:52.975078 kubelet[2272]: I0715 00:10:52.974389 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bbfee2a0d8ae0565593e570ca7e09ef6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bbfee2a0d8ae0565593e570ca7e09ef6\") " 
pod="kube-system/kube-apiserver-localhost" Jul 15 00:10:52.989406 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 15 00:10:53.086530 kubelet[2272]: I0715 00:10:53.086436 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 00:10:53.088010 kubelet[2272]: E0715 00:10:53.087162 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jul 15 00:10:53.095344 kubelet[2272]: W0715 00:10:53.095230 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 15 00:10:53.095344 kubelet[2272]: E0715 00:10:53.095336 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 15 00:10:53.219787 kubelet[2272]: W0715 00:10:53.218191 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 15 00:10:53.219787 kubelet[2272]: E0715 00:10:53.218288 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 15 00:10:53.250333 kubelet[2272]: E0715 00:10:53.250244 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:53.251255 containerd[1516]: time="2025-07-15T00:10:53.251186306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bbfee2a0d8ae0565593e570ca7e09ef6,Namespace:kube-system,Attempt:0,}" Jul 15 00:10:53.284086 kubelet[2272]: E0715 00:10:53.282926 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:53.284225 containerd[1516]: time="2025-07-15T00:10:53.283555774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 00:10:53.297884 kubelet[2272]: E0715 00:10:53.295681 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:53.299243 containerd[1516]: time="2025-07-15T00:10:53.297077991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 00:10:53.492173 kubelet[2272]: I0715 00:10:53.491513 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 00:10:53.492173 kubelet[2272]: E0715 00:10:53.491918 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Jul 15 00:10:53.574974 kubelet[2272]: E0715 00:10:53.574851 2272 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Jul 15 00:10:53.725442 kubelet[2272]: W0715 00:10:53.725204 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 15 00:10:53.725442 kubelet[2272]: E0715 00:10:53.725365 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 15 00:10:53.768782 kubelet[2272]: W0715 00:10:53.768696 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Jul 15 00:10:53.768782 kubelet[2272]: E0715 00:10:53.768771 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Jul 15 00:10:53.996834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450239910.mount: Deactivated successfully. 
Jul 15 00:10:54.004801 containerd[1516]: time="2025-07-15T00:10:54.004695020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 00:10:54.007967 containerd[1516]: time="2025-07-15T00:10:54.007783684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 15 00:10:54.012521 containerd[1516]: time="2025-07-15T00:10:54.011427939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 00:10:54.014113 containerd[1516]: time="2025-07-15T00:10:54.014025774Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 00:10:54.015319 containerd[1516]: time="2025-07-15T00:10:54.015100618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 15 00:10:54.019490 containerd[1516]: time="2025-07-15T00:10:54.019328728Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 00:10:54.021401 containerd[1516]: time="2025-07-15T00:10:54.021129876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 15 00:10:54.023675 containerd[1516]: time="2025-07-15T00:10:54.023599518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 00:10:54.024781 containerd[1516]: time="2025-07-15T00:10:54.024729516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 741.049498ms"
Jul 15 00:10:54.031751 containerd[1516]: time="2025-07-15T00:10:54.031671591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 780.326905ms"
Jul 15 00:10:54.032194 containerd[1516]: time="2025-07-15T00:10:54.032149345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.981194ms"
Jul 15 00:10:54.181206 containerd[1516]: time="2025-07-15T00:10:54.180487601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:10:54.181206 containerd[1516]: time="2025-07-15T00:10:54.180563424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:10:54.181206 containerd[1516]: time="2025-07-15T00:10:54.180582631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:10:54.181206 containerd[1516]: time="2025-07-15T00:10:54.180686618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:10:54.182035 containerd[1516]: time="2025-07-15T00:10:54.180126838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:10:54.182035 containerd[1516]: time="2025-07-15T00:10:54.181737485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:10:54.182035 containerd[1516]: time="2025-07-15T00:10:54.181753676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:10:54.182035 containerd[1516]: time="2025-07-15T00:10:54.181824941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:10:54.183975 containerd[1516]: time="2025-07-15T00:10:54.182296563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:10:54.183975 containerd[1516]: time="2025-07-15T00:10:54.182340697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:10:54.183975 containerd[1516]: time="2025-07-15T00:10:54.182354573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:10:54.183975 containerd[1516]: time="2025-07-15T00:10:54.182412292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:10:54.213119 systemd[1]: Started cri-containerd-bdc32eb443ecd45235ddef1b7bc8bdd7dcafb2c77d06cb2525f4c4f13b5914c8.scope - libcontainer container bdc32eb443ecd45235ddef1b7bc8bdd7dcafb2c77d06cb2525f4c4f13b5914c8.
Jul 15 00:10:54.219067 systemd[1]: Started cri-containerd-96021c56d688ec64ce90b37e84be7944981573dc4eda81633a2f911236c34b66.scope - libcontainer container 96021c56d688ec64ce90b37e84be7944981573dc4eda81633a2f911236c34b66.
Jul 15 00:10:54.221665 systemd[1]: Started cri-containerd-b31b8de692de25202bffd49f7dea4125ac75b8f2730cdf4bcc5595dddb6888e3.scope - libcontainer container b31b8de692de25202bffd49f7dea4125ac75b8f2730cdf4bcc5595dddb6888e3.
Jul 15 00:10:54.234952 kubelet[2272]: E0715 00:10:54.234850 2272 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
Jul 15 00:10:54.269095 containerd[1516]: time="2025-07-15T00:10:54.269035569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bbfee2a0d8ae0565593e570ca7e09ef6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdc32eb443ecd45235ddef1b7bc8bdd7dcafb2c77d06cb2525f4c4f13b5914c8\""
Jul 15 00:10:54.271200 kubelet[2272]: E0715 00:10:54.271088 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:54.273256 containerd[1516]: time="2025-07-15T00:10:54.273221509Z" level=info msg="CreateContainer within sandbox \"bdc32eb443ecd45235ddef1b7bc8bdd7dcafb2c77d06cb2525f4c4f13b5914c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 00:10:54.280772 containerd[1516]: time="2025-07-15T00:10:54.280380504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"b31b8de692de25202bffd49f7dea4125ac75b8f2730cdf4bcc5595dddb6888e3\""
Jul 15 00:10:54.281708 containerd[1516]: time="2025-07-15T00:10:54.281564044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"96021c56d688ec64ce90b37e84be7944981573dc4eda81633a2f911236c34b66\""
Jul 15 00:10:54.282028 kubelet[2272]: E0715 00:10:54.281979 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:54.282187 kubelet[2272]: E0715 00:10:54.282149 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:54.283758 containerd[1516]: time="2025-07-15T00:10:54.283725333Z" level=info msg="CreateContainer within sandbox \"b31b8de692de25202bffd49f7dea4125ac75b8f2730cdf4bcc5595dddb6888e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 00:10:54.283944 containerd[1516]: time="2025-07-15T00:10:54.283776670Z" level=info msg="CreateContainer within sandbox \"96021c56d688ec64ce90b37e84be7944981573dc4eda81633a2f911236c34b66\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 00:10:54.293787 kubelet[2272]: I0715 00:10:54.293741 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 00:10:54.294076 kubelet[2272]: E0715 00:10:54.294040 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost"
Jul 15 00:10:54.364965 containerd[1516]: time="2025-07-15T00:10:54.364884070Z" level=info msg="CreateContainer within sandbox \"b31b8de692de25202bffd49f7dea4125ac75b8f2730cdf4bcc5595dddb6888e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bfba57b76530c5ab4d55a0c89b6d2efefcf0b07e18a456464d0abd5e9efc1d7\""
Jul 15 00:10:54.365687 containerd[1516]: time="2025-07-15T00:10:54.365647234Z" level=info msg="StartContainer for \"9bfba57b76530c5ab4d55a0c89b6d2efefcf0b07e18a456464d0abd5e9efc1d7\""
Jul 15 00:10:54.366428 containerd[1516]: time="2025-07-15T00:10:54.366373669Z" level=info msg="CreateContainer within sandbox \"bdc32eb443ecd45235ddef1b7bc8bdd7dcafb2c77d06cb2525f4c4f13b5914c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ecd888bc6eec63f4dcd872bfd5685a603d67854c950835c28b44f0d1088f7739\""
Jul 15 00:10:54.366771 containerd[1516]: time="2025-07-15T00:10:54.366743608Z" level=info msg="StartContainer for \"ecd888bc6eec63f4dcd872bfd5685a603d67854c950835c28b44f0d1088f7739\""
Jul 15 00:10:54.371123 containerd[1516]: time="2025-07-15T00:10:54.371059655Z" level=info msg="CreateContainer within sandbox \"96021c56d688ec64ce90b37e84be7944981573dc4eda81633a2f911236c34b66\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9c43c9ac00fe1742e5ae2bdbe48d07b23aca7d1cca323e894e398d14e09c4bf\""
Jul 15 00:10:54.373821 containerd[1516]: time="2025-07-15T00:10:54.373768760Z" level=info msg="StartContainer for \"f9c43c9ac00fe1742e5ae2bdbe48d07b23aca7d1cca323e894e398d14e09c4bf\""
Jul 15 00:10:54.405006 systemd[1]: Started cri-containerd-9bfba57b76530c5ab4d55a0c89b6d2efefcf0b07e18a456464d0abd5e9efc1d7.scope - libcontainer container 9bfba57b76530c5ab4d55a0c89b6d2efefcf0b07e18a456464d0abd5e9efc1d7.
Jul 15 00:10:54.406892 systemd[1]: Started cri-containerd-ecd888bc6eec63f4dcd872bfd5685a603d67854c950835c28b44f0d1088f7739.scope - libcontainer container ecd888bc6eec63f4dcd872bfd5685a603d67854c950835c28b44f0d1088f7739.
Jul 15 00:10:54.409009 systemd[1]: Started cri-containerd-f9c43c9ac00fe1742e5ae2bdbe48d07b23aca7d1cca323e894e398d14e09c4bf.scope - libcontainer container f9c43c9ac00fe1742e5ae2bdbe48d07b23aca7d1cca323e894e398d14e09c4bf.
Jul 15 00:10:54.767338 containerd[1516]: time="2025-07-15T00:10:54.767243710Z" level=info msg="StartContainer for \"9bfba57b76530c5ab4d55a0c89b6d2efefcf0b07e18a456464d0abd5e9efc1d7\" returns successfully"
Jul 15 00:10:54.767593 containerd[1516]: time="2025-07-15T00:10:54.767258478Z" level=info msg="StartContainer for \"f9c43c9ac00fe1742e5ae2bdbe48d07b23aca7d1cca323e894e398d14e09c4bf\" returns successfully"
Jul 15 00:10:54.767593 containerd[1516]: time="2025-07-15T00:10:54.767262175Z" level=info msg="StartContainer for \"ecd888bc6eec63f4dcd872bfd5685a603d67854c950835c28b44f0d1088f7739\" returns successfully"
Jul 15 00:10:55.207557 kubelet[2272]: E0715 00:10:55.207437 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:55.209683 kubelet[2272]: E0715 00:10:55.209648 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:55.211413 kubelet[2272]: E0715 00:10:55.211391 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:55.536078 kubelet[2272]: E0715 00:10:55.536002 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 15 00:10:55.895757 kubelet[2272]: I0715 00:10:55.895636 2272 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 00:10:55.897367 kubelet[2272]: E0715 00:10:55.897343 2272 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 15 00:10:55.901502 kubelet[2272]: I0715 00:10:55.901458 2272 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 15 00:10:56.157762 kubelet[2272]: I0715 00:10:56.157637 2272 apiserver.go:52] "Watching apiserver"
Jul 15 00:10:56.169082 kubelet[2272]: I0715 00:10:56.169050 2272 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 00:10:56.215747 kubelet[2272]: E0715 00:10:56.215724 2272 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 15 00:10:56.215918 kubelet[2272]: E0715 00:10:56.215888 2272 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:10:57.790561 systemd[1]: Reload requested from client PID 2556 ('systemctl') (unit session-9.scope)...
Jul 15 00:10:57.790576 systemd[1]: Reloading...
Jul 15 00:10:57.868906 zram_generator::config[2603]: No configuration found.
Jul 15 00:10:57.987147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 00:10:58.106098 systemd[1]: Reloading finished in 315 ms.
Jul 15 00:10:58.129882 kubelet[2272]: I0715 00:10:58.129792 2272 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 00:10:58.130175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:58.155401 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 00:10:58.155748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:58.155810 systemd[1]: kubelet.service: Consumed 886ms CPU time, 133.3M memory peak.
Jul 15 00:10:58.163315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 00:10:58.342443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 00:10:58.347708 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 00:10:58.396989 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 00:10:58.396989 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 00:10:58.396989 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 00:10:58.396989 kubelet[2645]: I0715 00:10:58.396614 2645 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 00:10:58.403888 kubelet[2645]: I0715 00:10:58.403806 2645 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 00:10:58.403888 kubelet[2645]: I0715 00:10:58.403850 2645 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 00:10:58.404253 kubelet[2645]: I0715 00:10:58.404224 2645 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 00:10:58.406786 kubelet[2645]: I0715 00:10:58.406604 2645 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 15 00:10:58.409633 kubelet[2645]: I0715 00:10:58.409597 2645 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 00:10:58.413904 kubelet[2645]: E0715 00:10:58.412692 2645 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 00:10:58.413904 kubelet[2645]: I0715 00:10:58.412723 2645 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 00:10:58.417883 kubelet[2645]: I0715 00:10:58.417836 2645 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 00:10:58.418034 kubelet[2645]: I0715 00:10:58.418002 2645 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 00:10:58.418209 kubelet[2645]: I0715 00:10:58.418170 2645 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 00:10:58.418393 kubelet[2645]: I0715 00:10:58.418198 2645 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 00:10:58.418499 kubelet[2645]: I0715 00:10:58.418398 2645 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 00:10:58.418499 kubelet[2645]: I0715 00:10:58.418410 2645 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 00:10:58.418499 kubelet[2645]: I0715 00:10:58.418442 2645 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 00:10:58.418601 kubelet[2645]: I0715 00:10:58.418566 2645 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 00:10:58.418601 kubelet[2645]: I0715 00:10:58.418587 2645 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 00:10:58.418662 kubelet[2645]: I0715 00:10:58.418626 2645 kubelet.go:314] "Adding apiserver pod source"
Jul 15 00:10:58.418662 kubelet[2645]: I0715 00:10:58.418638 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 00:10:58.419318 kubelet[2645]: I0715 00:10:58.419259 2645 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 15 00:10:58.421578 kubelet[2645]: I0715 00:10:58.419692 2645 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 00:10:58.421578 kubelet[2645]: I0715 00:10:58.420134 2645 server.go:1274] "Started kubelet"
Jul 15 00:10:58.421578 kubelet[2645]: I0715 00:10:58.420259 2645 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 00:10:58.421578 kubelet[2645]: I0715 00:10:58.420398 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 00:10:58.429473 kubelet[2645]: I0715 00:10:58.429433 2645 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 00:10:58.429473 kubelet[2645]: I0715 00:10:58.426432 2645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 00:10:58.429656 kubelet[2645]: I0715 00:10:58.426013 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 00:10:58.430248 kubelet[2645]: I0715 00:10:58.430207 2645 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 00:10:58.430350 kubelet[2645]: I0715 00:10:58.428823 2645 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 00:10:58.432077 kubelet[2645]: E0715 00:10:58.430420 2645 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 00:10:58.432565 kubelet[2645]: I0715 00:10:58.432531 2645 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 00:10:58.433074 kubelet[2645]: I0715 00:10:58.433036 2645 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 00:10:58.433813 kubelet[2645]: I0715 00:10:58.433144 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 00:10:58.437059 kubelet[2645]: I0715 00:10:58.437025 2645 factory.go:221] Registration of the containerd container factory successfully
Jul 15 00:10:58.437059 kubelet[2645]: I0715 00:10:58.437053 2645 factory.go:221] Registration of the systemd container factory successfully
Jul 15 00:10:58.443537 kubelet[2645]: I0715 00:10:58.443493 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 00:10:58.444766 kubelet[2645]: I0715 00:10:58.444745 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 00:10:58.444766 kubelet[2645]: I0715 00:10:58.444765 2645 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 00:10:58.444834 kubelet[2645]: I0715 00:10:58.444785 2645 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 00:10:58.444834 kubelet[2645]: E0715 00:10:58.444824 2645 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 00:10:58.466990 kubelet[2645]: I0715 00:10:58.466957 2645 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 00:10:58.466990 kubelet[2645]: I0715 00:10:58.466980 2645 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 00:10:58.466990 kubelet[2645]: I0715 00:10:58.467000 2645 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 00:10:58.468398 kubelet[2645]: I0715 00:10:58.467185 2645 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 00:10:58.468398 kubelet[2645]: I0715 00:10:58.467199 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 00:10:58.468398 kubelet[2645]: I0715 00:10:58.467218 2645 policy_none.go:49] "None policy: Start"
Jul 15 00:10:58.469190 kubelet[2645]: I0715 00:10:58.469163 2645 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 00:10:58.469236 kubelet[2645]: I0715 00:10:58.469196 2645 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 00:10:58.469373 kubelet[2645]: I0715 00:10:58.469354 2645 state_mem.go:75] "Updated machine memory state"
Jul 15 00:10:58.474253 kubelet[2645]: I0715 00:10:58.474129 2645 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 00:10:58.474327 kubelet[2645]: I0715 00:10:58.474316 2645 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 00:10:58.474354 kubelet[2645]: I0715 00:10:58.474332 2645 container_log_manager.go:189]
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 00:10:58.474591 kubelet[2645]: I0715 00:10:58.474576 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 00:10:58.581031 kubelet[2645]: I0715 00:10:58.580972 2645 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 00:10:58.589782 kubelet[2645]: I0715 00:10:58.589741 2645 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 00:10:58.589937 kubelet[2645]: I0715 00:10:58.589836 2645 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 00:10:58.634319 kubelet[2645]: I0715 00:10:58.634246 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bbfee2a0d8ae0565593e570ca7e09ef6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bbfee2a0d8ae0565593e570ca7e09ef6\") " pod="kube-system/kube-apiserver-localhost" Jul 15 00:10:58.634319 kubelet[2645]: I0715 00:10:58.634280 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:58.634319 kubelet[2645]: I0715 00:10:58.634298 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:58.634319 kubelet[2645]: I0715 00:10:58.634314 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:58.634319 kubelet[2645]: I0715 00:10:58.634331 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 00:10:58.634632 kubelet[2645]: I0715 00:10:58.634344 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bbfee2a0d8ae0565593e570ca7e09ef6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bbfee2a0d8ae0565593e570ca7e09ef6\") " pod="kube-system/kube-apiserver-localhost" Jul 15 00:10:58.634632 kubelet[2645]: I0715 00:10:58.634358 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bbfee2a0d8ae0565593e570ca7e09ef6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bbfee2a0d8ae0565593e570ca7e09ef6\") " pod="kube-system/kube-apiserver-localhost" Jul 15 00:10:58.634632 kubelet[2645]: I0715 00:10:58.634372 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:58.634632 kubelet[2645]: I0715 00:10:58.634393 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 00:10:58.794099 sudo[2684]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 00:10:58.794568 sudo[2684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 00:10:58.851549 kubelet[2645]: E0715 00:10:58.851505 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:58.851732 kubelet[2645]: E0715 00:10:58.851664 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:58.852602 kubelet[2645]: E0715 00:10:58.851820 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:59.294443 sudo[2684]: pam_unix(sudo:session): session closed for user root Jul 15 00:10:59.419145 kubelet[2645]: I0715 00:10:59.419099 2645 apiserver.go:52] "Watching apiserver" Jul 15 00:10:59.433292 kubelet[2645]: I0715 00:10:59.433243 2645 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 00:10:59.458891 kubelet[2645]: E0715 00:10:59.456290 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:59.464303 kubelet[2645]: E0715 00:10:59.464259 2645 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 
00:10:59.464517 kubelet[2645]: E0715 00:10:59.464489 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:59.465255 kubelet[2645]: E0715 00:10:59.464832 2645 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 00:10:59.465255 kubelet[2645]: E0715 00:10:59.464968 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:10:59.486546 kubelet[2645]: I0715 00:10:59.486488 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.486457999 podStartE2EDuration="1.486457999s" podCreationTimestamp="2025-07-15 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:10:59.479368844 +0000 UTC m=+1.127924484" watchObservedRunningTime="2025-07-15 00:10:59.486457999 +0000 UTC m=+1.135013639" Jul 15 00:10:59.497376 kubelet[2645]: I0715 00:10:59.497321 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.497288448 podStartE2EDuration="1.497288448s" podCreationTimestamp="2025-07-15 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:10:59.486715645 +0000 UTC m=+1.135271285" watchObservedRunningTime="2025-07-15 00:10:59.497288448 +0000 UTC m=+1.145844088" Jul 15 00:10:59.497692 kubelet[2645]: I0715 00:10:59.497444 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.497438662 podStartE2EDuration="1.497438662s" podCreationTimestamp="2025-07-15 00:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:10:59.492767461 +0000 UTC m=+1.141323101" watchObservedRunningTime="2025-07-15 00:10:59.497438662 +0000 UTC m=+1.145994302" Jul 15 00:11:00.446663 sudo[1712]: pam_unix(sudo:session): session closed for user root Jul 15 00:11:00.448363 sshd[1711]: Connection closed by 10.0.0.1 port 47464 Jul 15 00:11:00.448783 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 15 00:11:00.453309 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:47464.service: Deactivated successfully. Jul 15 00:11:00.455974 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 00:11:00.456235 systemd[1]: session-9.scope: Consumed 4.788s CPU time, 248.8M memory peak. Jul 15 00:11:00.457726 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. Jul 15 00:11:00.458138 kubelet[2645]: E0715 00:11:00.457754 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:11:00.458138 kubelet[2645]: E0715 00:11:00.457839 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 00:11:00.459232 systemd-logind[1492]: Removed session 9. 
Jul 15 00:11:00.706847 kubelet[2645]: E0715 00:11:00.706668 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:01.459534 kubelet[2645]: E0715 00:11:01.459484 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:03.180832 update_engine[1495]: I20250715 00:11:03.180722 1495 update_attempter.cc:509] Updating boot flags...
Jul 15 00:11:03.306520 kubelet[2645]: I0715 00:11:03.306487 2645 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 00:11:03.306956 containerd[1516]: time="2025-07-15T00:11:03.306825635Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 00:11:03.307197 kubelet[2645]: I0715 00:11:03.307029 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 00:11:03.434992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2732)
Jul 15 00:11:03.493943 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2731)
Jul 15 00:11:03.531896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2731)
Jul 15 00:11:03.917003 systemd[1]: Created slice kubepods-besteffort-pode3aa7363_26c9_4e03_8d08_4f5108fef2ce.slice - libcontainer container kubepods-besteffort-pode3aa7363_26c9_4e03_8d08_4f5108fef2ce.slice.
Jul 15 00:11:03.929910 systemd[1]: Created slice kubepods-burstable-pod51af13a5_f0af_4da4_a090_c2431676c9ef.slice - libcontainer container kubepods-burstable-pod51af13a5_f0af_4da4_a090_c2431676c9ef.slice.
Jul 15 00:11:03.963707 kubelet[2645]: I0715 00:11:03.963646 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-run\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.963707 kubelet[2645]: I0715 00:11:03.963691 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-etc-cni-netd\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.963707 kubelet[2645]: I0715 00:11:03.963709 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-hostproc\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.963707 kubelet[2645]: I0715 00:11:03.963724 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3aa7363-26c9-4e03-8d08-4f5108fef2ce-kube-proxy\") pod \"kube-proxy-blqrh\" (UID: \"e3aa7363-26c9-4e03-8d08-4f5108fef2ce\") " pod="kube-system/kube-proxy-blqrh"
Jul 15 00:11:03.964001 kubelet[2645]: I0715 00:11:03.963744 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-xtables-lock\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964001 kubelet[2645]: I0715 00:11:03.963840 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-net\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964001 kubelet[2645]: I0715 00:11:03.963914 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-lib-modules\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964001 kubelet[2645]: I0715 00:11:03.963932 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22p7j\" (UniqueName: \"kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-kube-api-access-22p7j\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964001 kubelet[2645]: I0715 00:11:03.963951 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3aa7363-26c9-4e03-8d08-4f5108fef2ce-xtables-lock\") pod \"kube-proxy-blqrh\" (UID: \"e3aa7363-26c9-4e03-8d08-4f5108fef2ce\") " pod="kube-system/kube-proxy-blqrh"
Jul 15 00:11:03.964198 kubelet[2645]: I0715 00:11:03.963968 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3aa7363-26c9-4e03-8d08-4f5108fef2ce-lib-modules\") pod \"kube-proxy-blqrh\" (UID: \"e3aa7363-26c9-4e03-8d08-4f5108fef2ce\") " pod="kube-system/kube-proxy-blqrh"
Jul 15 00:11:03.964198 kubelet[2645]: I0715 00:11:03.963983 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-bpf-maps\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964198 kubelet[2645]: I0715 00:11:03.963999 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-cgroup\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964198 kubelet[2645]: I0715 00:11:03.964014 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cni-path\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964198 kubelet[2645]: I0715 00:11:03.964030 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-config-path\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964198 kubelet[2645]: I0715 00:11:03.964056 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glrs\" (UniqueName: \"kubernetes.io/projected/e3aa7363-26c9-4e03-8d08-4f5108fef2ce-kube-api-access-5glrs\") pod \"kube-proxy-blqrh\" (UID: \"e3aa7363-26c9-4e03-8d08-4f5108fef2ce\") " pod="kube-system/kube-proxy-blqrh"
Jul 15 00:11:03.964338 kubelet[2645]: I0715 00:11:03.964079 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51af13a5-f0af-4da4-a090-c2431676c9ef-clustermesh-secrets\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964338 kubelet[2645]: I0715 00:11:03.964104 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-kernel\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:03.964338 kubelet[2645]: I0715 00:11:03.964129 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-hubble-tls\") pod \"cilium-c7xpj\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") " pod="kube-system/cilium-c7xpj"
Jul 15 00:11:04.227772 kubelet[2645]: E0715 00:11:04.227598 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:04.229074 containerd[1516]: time="2025-07-15T00:11:04.229028463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blqrh,Uid:e3aa7363-26c9-4e03-8d08-4f5108fef2ce,Namespace:kube-system,Attempt:0,}"
Jul 15 00:11:04.234434 kubelet[2645]: E0715 00:11:04.233257 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:04.234526 containerd[1516]: time="2025-07-15T00:11:04.233659383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7xpj,Uid:51af13a5-f0af-4da4-a090-c2431676c9ef,Namespace:kube-system,Attempt:0,}"
Jul 15 00:11:04.326370 systemd[1]: Created slice kubepods-besteffort-pod959108f4_5e12_4dfc_bb30_42860c11dc8e.slice - libcontainer container kubepods-besteffort-pod959108f4_5e12_4dfc_bb30_42860c11dc8e.slice.
Jul 15 00:11:04.328327 containerd[1516]: time="2025-07-15T00:11:04.328229411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:11:04.328679 containerd[1516]: time="2025-07-15T00:11:04.328367972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:11:04.328679 containerd[1516]: time="2025-07-15T00:11:04.328386498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:04.328679 containerd[1516]: time="2025-07-15T00:11:04.328532312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:04.338603 containerd[1516]: time="2025-07-15T00:11:04.338453465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:11:04.339642 containerd[1516]: time="2025-07-15T00:11:04.339563517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:11:04.339642 containerd[1516]: time="2025-07-15T00:11:04.339589257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:04.341080 containerd[1516]: time="2025-07-15T00:11:04.339770447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:04.357581 systemd[1]: Started cri-containerd-62f75f546f7527d3a2462e4068bcdbfe5f299cbfef97ba3af748f0d2a34a2ffe.scope - libcontainer container 62f75f546f7527d3a2462e4068bcdbfe5f299cbfef97ba3af748f0d2a34a2ffe.
Jul 15 00:11:04.361167 systemd[1]: Started cri-containerd-917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6.scope - libcontainer container 917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6.
Jul 15 00:11:04.366369 kubelet[2645]: I0715 00:11:04.366313 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/959108f4-5e12-4dfc-bb30-42860c11dc8e-cilium-config-path\") pod \"cilium-operator-5d85765b45-mzxd5\" (UID: \"959108f4-5e12-4dfc-bb30-42860c11dc8e\") " pod="kube-system/cilium-operator-5d85765b45-mzxd5"
Jul 15 00:11:04.366369 kubelet[2645]: I0715 00:11:04.366351 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtvw\" (UniqueName: \"kubernetes.io/projected/959108f4-5e12-4dfc-bb30-42860c11dc8e-kube-api-access-9wtvw\") pod \"cilium-operator-5d85765b45-mzxd5\" (UID: \"959108f4-5e12-4dfc-bb30-42860c11dc8e\") " pod="kube-system/cilium-operator-5d85765b45-mzxd5"
Jul 15 00:11:04.388914 containerd[1516]: time="2025-07-15T00:11:04.386486173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blqrh,Uid:e3aa7363-26c9-4e03-8d08-4f5108fef2ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"62f75f546f7527d3a2462e4068bcdbfe5f299cbfef97ba3af748f0d2a34a2ffe\""
Jul 15 00:11:04.389082 kubelet[2645]: E0715 00:11:04.387126 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:04.389184 containerd[1516]: time="2025-07-15T00:11:04.389143894Z" level=info msg="CreateContainer within sandbox \"62f75f546f7527d3a2462e4068bcdbfe5f299cbfef97ba3af748f0d2a34a2ffe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 00:11:04.392626 containerd[1516]: time="2025-07-15T00:11:04.392586223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7xpj,Uid:51af13a5-f0af-4da4-a090-c2431676c9ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\""
Jul 15 00:11:04.393926 kubelet[2645]: E0715 00:11:04.393899 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:04.395010 containerd[1516]: time="2025-07-15T00:11:04.394983282Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 15 00:11:04.413158 containerd[1516]: time="2025-07-15T00:11:04.413106497Z" level=info msg="CreateContainer within sandbox \"62f75f546f7527d3a2462e4068bcdbfe5f299cbfef97ba3af748f0d2a34a2ffe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81a07892a4b24c706f5b7e6c8858bd6fc31c027f07bef6479c8d853a98a1f184\""
Jul 15 00:11:04.413752 containerd[1516]: time="2025-07-15T00:11:04.413649129Z" level=info msg="StartContainer for \"81a07892a4b24c706f5b7e6c8858bd6fc31c027f07bef6479c8d853a98a1f184\""
Jul 15 00:11:04.443044 systemd[1]: Started cri-containerd-81a07892a4b24c706f5b7e6c8858bd6fc31c027f07bef6479c8d853a98a1f184.scope - libcontainer container 81a07892a4b24c706f5b7e6c8858bd6fc31c027f07bef6479c8d853a98a1f184.
Jul 15 00:11:04.485594 containerd[1516]: time="2025-07-15T00:11:04.485478137Z" level=info msg="StartContainer for \"81a07892a4b24c706f5b7e6c8858bd6fc31c027f07bef6479c8d853a98a1f184\" returns successfully"
Jul 15 00:11:04.630602 kubelet[2645]: E0715 00:11:04.630526 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:04.631179 containerd[1516]: time="2025-07-15T00:11:04.631129240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mzxd5,Uid:959108f4-5e12-4dfc-bb30-42860c11dc8e,Namespace:kube-system,Attempt:0,}"
Jul 15 00:11:04.660528 containerd[1516]: time="2025-07-15T00:11:04.659893386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:11:04.660528 containerd[1516]: time="2025-07-15T00:11:04.660487846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:11:04.660528 containerd[1516]: time="2025-07-15T00:11:04.660499959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:04.660748 containerd[1516]: time="2025-07-15T00:11:04.660576924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:04.685068 systemd[1]: Started cri-containerd-08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73.scope - libcontainer container 08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73.
Jul 15 00:11:04.721750 containerd[1516]: time="2025-07-15T00:11:04.721692826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mzxd5,Uid:959108f4-5e12-4dfc-bb30-42860c11dc8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73\""
Jul 15 00:11:04.722396 kubelet[2645]: E0715 00:11:04.722375 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:04.936339 kubelet[2645]: E0715 00:11:04.936279 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:05.473713 kubelet[2645]: E0715 00:11:05.473674 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:05.474138 kubelet[2645]: E0715 00:11:05.473916 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:05.494279 kubelet[2645]: I0715 00:11:05.494207 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-blqrh" podStartSLOduration=2.494185042 podStartE2EDuration="2.494185042s" podCreationTimestamp="2025-07-15 00:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:11:05.485960161 +0000 UTC m=+7.134515801" watchObservedRunningTime="2025-07-15 00:11:05.494185042 +0000 UTC m=+7.142740682"
Jul 15 00:11:06.480556 kubelet[2645]: E0715 00:11:06.478672 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:06.480556 kubelet[2645]: E0715 00:11:06.480176 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:09.090895 kubelet[2645]: E0715 00:11:09.090845 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:10.710617 kubelet[2645]: E0715 00:11:10.710582 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:15.967346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788186256.mount: Deactivated successfully.
Jul 15 00:11:18.343711 containerd[1516]: time="2025-07-15T00:11:18.343656900Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:11:18.344420 containerd[1516]: time="2025-07-15T00:11:18.344382615Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 15 00:11:18.345451 containerd[1516]: time="2025-07-15T00:11:18.345420046Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:11:18.346946 containerd[1516]: time="2025-07-15T00:11:18.346913454Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo
tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.951896999s" Jul 15 00:11:18.346946 containerd[1516]: time="2025-07-15T00:11:18.346941517Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 00:11:18.356125 containerd[1516]: time="2025-07-15T00:11:18.356081245Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 00:11:18.370453 containerd[1516]: time="2025-07-15T00:11:18.370423257Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 00:11:18.383031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675454004.mount: Deactivated successfully. Jul 15 00:11:18.384810 containerd[1516]: time="2025-07-15T00:11:18.384764136Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\"" Jul 15 00:11:18.388017 containerd[1516]: time="2025-07-15T00:11:18.387988369Z" level=info msg="StartContainer for \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\"" Jul 15 00:11:18.416008 systemd[1]: Started cri-containerd-bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6.scope - libcontainer container bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6. 
Jul 15 00:11:18.445881 containerd[1516]: time="2025-07-15T00:11:18.445812814Z" level=info msg="StartContainer for \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\" returns successfully"
Jul 15 00:11:18.456101 systemd[1]: cri-containerd-bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6.scope: Deactivated successfully.
Jul 15 00:11:18.456520 systemd[1]: cri-containerd-bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6.scope: Consumed 27ms CPU time, 7M memory peak, 3.2M written to disk.
Jul 15 00:11:18.493819 kubelet[2645]: E0715 00:11:18.493773 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:19.188333 containerd[1516]: time="2025-07-15T00:11:19.188236834Z" level=info msg="shim disconnected" id=bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6 namespace=k8s.io
Jul 15 00:11:19.188333 containerd[1516]: time="2025-07-15T00:11:19.188312207Z" level=warning msg="cleaning up after shim disconnected" id=bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6 namespace=k8s.io
Jul 15 00:11:19.188333 containerd[1516]: time="2025-07-15T00:11:19.188325321Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:11:19.380794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6-rootfs.mount: Deactivated successfully.
Jul 15 00:11:19.496784 kubelet[2645]: E0715 00:11:19.496740 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:19.498423 containerd[1516]: time="2025-07-15T00:11:19.498368419Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 00:11:19.928331 containerd[1516]: time="2025-07-15T00:11:19.928218895Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\""
Jul 15 00:11:19.928890 containerd[1516]: time="2025-07-15T00:11:19.928796581Z" level=info msg="StartContainer for \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\""
Jul 15 00:11:19.958012 systemd[1]: Started cri-containerd-333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b.scope - libcontainer container 333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b.
Jul 15 00:11:19.983655 containerd[1516]: time="2025-07-15T00:11:19.983614261Z" level=info msg="StartContainer for \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\" returns successfully"
Jul 15 00:11:19.995917 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 00:11:19.996178 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 00:11:19.996528 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 15 00:11:20.004193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 00:11:20.004400 systemd[1]: cri-containerd-333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b.scope: Deactivated successfully.
Jul 15 00:11:20.021977 containerd[1516]: time="2025-07-15T00:11:20.021601388Z" level=info msg="shim disconnected" id=333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b namespace=k8s.io
Jul 15 00:11:20.021977 containerd[1516]: time="2025-07-15T00:11:20.021650972Z" level=warning msg="cleaning up after shim disconnected" id=333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b namespace=k8s.io
Jul 15 00:11:20.021977 containerd[1516]: time="2025-07-15T00:11:20.021659809Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:11:20.022993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 00:11:20.380621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b-rootfs.mount: Deactivated successfully.
Jul 15 00:11:20.560741 kubelet[2645]: E0715 00:11:20.560707 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:20.563374 containerd[1516]: time="2025-07-15T00:11:20.563322739Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 00:11:21.294712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171319122.mount: Deactivated successfully.
Jul 15 00:11:21.308237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264164346.mount: Deactivated successfully.
Jul 15 00:11:21.311972 containerd[1516]: time="2025-07-15T00:11:21.311931944Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\""
Jul 15 00:11:21.313497 containerd[1516]: time="2025-07-15T00:11:21.312653049Z" level=info msg="StartContainer for \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\""
Jul 15 00:11:21.337994 systemd[1]: Started cri-containerd-ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf.scope - libcontainer container ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf.
Jul 15 00:11:21.371161 containerd[1516]: time="2025-07-15T00:11:21.371038868Z" level=info msg="StartContainer for \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\" returns successfully"
Jul 15 00:11:21.374023 systemd[1]: cri-containerd-ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf.scope: Deactivated successfully.
Jul 15 00:11:21.403986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf-rootfs.mount: Deactivated successfully.
Jul 15 00:11:21.437998 containerd[1516]: time="2025-07-15T00:11:21.437776649Z" level=info msg="shim disconnected" id=ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf namespace=k8s.io
Jul 15 00:11:21.437998 containerd[1516]: time="2025-07-15T00:11:21.437833897Z" level=warning msg="cleaning up after shim disconnected" id=ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf namespace=k8s.io
Jul 15 00:11:21.437998 containerd[1516]: time="2025-07-15T00:11:21.437843235Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:11:21.570501 kubelet[2645]: E0715 00:11:21.569499 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:21.572768 containerd[1516]: time="2025-07-15T00:11:21.572724869Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 00:11:21.589555 containerd[1516]: time="2025-07-15T00:11:21.589508815Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\""
Jul 15 00:11:21.591362 containerd[1516]: time="2025-07-15T00:11:21.591297247Z" level=info msg="StartContainer for \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\""
Jul 15 00:11:21.628000 systemd[1]: Started cri-containerd-603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1.scope - libcontainer container 603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1.
Jul 15 00:11:21.657216 systemd[1]: cri-containerd-603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1.scope: Deactivated successfully.
Jul 15 00:11:21.658578 containerd[1516]: time="2025-07-15T00:11:21.658534737Z" level=info msg="StartContainer for \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\" returns successfully"
Jul 15 00:11:21.890541 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:58296.service - OpenSSH per-connection server daemon (10.0.0.1:58296).
Jul 15 00:11:21.922882 containerd[1516]: time="2025-07-15T00:11:21.922779814Z" level=info msg="shim disconnected" id=603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1 namespace=k8s.io
Jul 15 00:11:21.922882 containerd[1516]: time="2025-07-15T00:11:21.922830920Z" level=warning msg="cleaning up after shim disconnected" id=603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1 namespace=k8s.io
Jul 15 00:11:21.922882 containerd[1516]: time="2025-07-15T00:11:21.922839927Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:11:21.931182 sshd[3303]: Accepted publickey for core from 10.0.0.1 port 58296 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:21.933956 sshd-session[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:21.939137 systemd-logind[1492]: New session 10 of user core.
Jul 15 00:11:21.945011 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 00:11:21.949628 containerd[1516]: time="2025-07-15T00:11:21.949584579Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:11:21.950329 containerd[1516]: time="2025-07-15T00:11:21.950282019Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 15 00:11:21.951444 containerd[1516]: time="2025-07-15T00:11:21.951420380Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 00:11:21.952801 containerd[1516]: time="2025-07-15T00:11:21.952761581Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.596653216s"
Jul 15 00:11:21.952874 containerd[1516]: time="2025-07-15T00:11:21.952801646Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 15 00:11:21.954698 containerd[1516]: time="2025-07-15T00:11:21.954394701Z" level=info msg="CreateContainer within sandbox \"08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 15 00:11:21.967020 containerd[1516]: time="2025-07-15T00:11:21.966975284Z" level=info msg="CreateContainer within sandbox \"08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\""
Jul 15 00:11:21.967357 containerd[1516]: time="2025-07-15T00:11:21.967319922Z" level=info msg="StartContainer for \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\""
Jul 15 00:11:21.994095 systemd[1]: Started cri-containerd-25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e.scope - libcontainer container 25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e.
Jul 15 00:11:22.024476 containerd[1516]: time="2025-07-15T00:11:22.024436039Z" level=info msg="StartContainer for \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" returns successfully"
Jul 15 00:11:22.085010 sshd[3322]: Connection closed by 10.0.0.1 port 58296
Jul 15 00:11:22.085283 sshd-session[3303]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:22.090321 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Jul 15 00:11:22.092999 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:58296.service: Deactivated successfully.
Jul 15 00:11:22.096374 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 00:11:22.098188 systemd-logind[1492]: Removed session 10.
Jul 15 00:11:22.385419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1-rootfs.mount: Deactivated successfully.
Jul 15 00:11:22.573807 kubelet[2645]: E0715 00:11:22.573762 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:22.576685 containerd[1516]: time="2025-07-15T00:11:22.576645217Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 00:11:22.582886 kubelet[2645]: E0715 00:11:22.582314 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:22.616132 containerd[1516]: time="2025-07-15T00:11:22.616073719Z" level=info msg="CreateContainer within sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\""
Jul 15 00:11:22.617887 containerd[1516]: time="2025-07-15T00:11:22.617522433Z" level=info msg="StartContainer for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\""
Jul 15 00:11:22.683007 systemd[1]: Started cri-containerd-9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75.scope - libcontainer container 9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75.
Jul 15 00:11:22.718440 containerd[1516]: time="2025-07-15T00:11:22.718387258Z" level=info msg="StartContainer for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" returns successfully"
Jul 15 00:11:22.845647 kubelet[2645]: I0715 00:11:22.845597 2645 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 00:11:22.861636 kubelet[2645]: I0715 00:11:22.861563 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mzxd5" podStartSLOduration=1.631137506 podStartE2EDuration="18.86154447s" podCreationTimestamp="2025-07-15 00:11:04 +0000 UTC" firstStartedPulling="2025-07-15 00:11:04.722991825 +0000 UTC m=+6.371547465" lastFinishedPulling="2025-07-15 00:11:21.953398789 +0000 UTC m=+23.601954429" observedRunningTime="2025-07-15 00:11:22.600295338 +0000 UTC m=+24.248850978" watchObservedRunningTime="2025-07-15 00:11:22.86154447 +0000 UTC m=+24.510100100"
Jul 15 00:11:22.869467 kubelet[2645]: I0715 00:11:22.869431 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mggcz\" (UniqueName: \"kubernetes.io/projected/1611c4af-125c-40d6-b898-c9e260bc7271-kube-api-access-mggcz\") pod \"coredns-7c65d6cfc9-5qz24\" (UID: \"1611c4af-125c-40d6-b898-c9e260bc7271\") " pod="kube-system/coredns-7c65d6cfc9-5qz24"
Jul 15 00:11:22.869467 kubelet[2645]: I0715 00:11:22.869462 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1a6d60-5b7a-4739-b5f0-a0c30be912b8-config-volume\") pod \"coredns-7c65d6cfc9-4fdhr\" (UID: \"bf1a6d60-5b7a-4739-b5f0-a0c30be912b8\") " pod="kube-system/coredns-7c65d6cfc9-4fdhr"
Jul 15 00:11:22.869467 kubelet[2645]: I0715 00:11:22.869485 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1611c4af-125c-40d6-b898-c9e260bc7271-config-volume\") pod \"coredns-7c65d6cfc9-5qz24\" (UID: \"1611c4af-125c-40d6-b898-c9e260bc7271\") " pod="kube-system/coredns-7c65d6cfc9-5qz24"
Jul 15 00:11:22.869710 kubelet[2645]: I0715 00:11:22.869500 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrpl\" (UniqueName: \"kubernetes.io/projected/bf1a6d60-5b7a-4739-b5f0-a0c30be912b8-kube-api-access-4hrpl\") pod \"coredns-7c65d6cfc9-4fdhr\" (UID: \"bf1a6d60-5b7a-4739-b5f0-a0c30be912b8\") " pod="kube-system/coredns-7c65d6cfc9-4fdhr"
Jul 15 00:11:22.871786 systemd[1]: Created slice kubepods-burstable-pod1611c4af_125c_40d6_b898_c9e260bc7271.slice - libcontainer container kubepods-burstable-pod1611c4af_125c_40d6_b898_c9e260bc7271.slice.
Jul 15 00:11:22.877582 systemd[1]: Created slice kubepods-burstable-podbf1a6d60_5b7a_4739_b5f0_a0c30be912b8.slice - libcontainer container kubepods-burstable-podbf1a6d60_5b7a_4739_b5f0_a0c30be912b8.slice.
Jul 15 00:11:23.174943 kubelet[2645]: E0715 00:11:23.174893 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:23.175515 containerd[1516]: time="2025-07-15T00:11:23.175484970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5qz24,Uid:1611c4af-125c-40d6-b898-c9e260bc7271,Namespace:kube-system,Attempt:0,}"
Jul 15 00:11:23.180797 kubelet[2645]: E0715 00:11:23.180753 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:23.182195 containerd[1516]: time="2025-07-15T00:11:23.182144198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4fdhr,Uid:bf1a6d60-5b7a-4739-b5f0-a0c30be912b8,Namespace:kube-system,Attempt:0,}"
Jul 15 00:11:23.585568 kubelet[2645]: E0715 00:11:23.585531 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:23.586475 kubelet[2645]: E0715 00:11:23.585959 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:23.598575 kubelet[2645]: I0715 00:11:23.598491 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c7xpj" podStartSLOduration=6.643381433 podStartE2EDuration="20.598469826s" podCreationTimestamp="2025-07-15 00:11:03 +0000 UTC" firstStartedPulling="2025-07-15 00:11:04.394503518 +0000 UTC m=+6.043059158" lastFinishedPulling="2025-07-15 00:11:18.349591911 +0000 UTC m=+19.998147551" observedRunningTime="2025-07-15 00:11:23.598337808 +0000 UTC m=+25.246893448" watchObservedRunningTime="2025-07-15 00:11:23.598469826 +0000 UTC m=+25.247025466"
Jul 15 00:11:24.587624 kubelet[2645]: E0715 00:11:24.587587 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:25.589248 kubelet[2645]: E0715 00:11:25.589213 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:25.776139 systemd-networkd[1435]: cilium_host: Link UP
Jul 15 00:11:25.776308 systemd-networkd[1435]: cilium_net: Link UP
Jul 15 00:11:25.776487 systemd-networkd[1435]: cilium_net: Gained carrier
Jul 15 00:11:25.776661 systemd-networkd[1435]: cilium_host: Gained carrier
Jul 15 00:11:25.881444 systemd-networkd[1435]: cilium_vxlan: Link UP
Jul 15 00:11:25.881453 systemd-networkd[1435]: cilium_vxlan: Gained carrier
Jul 15 00:11:26.052135 systemd-networkd[1435]: cilium_net: Gained IPv6LL
Jul 15 00:11:26.083926 kernel: NET: Registered PF_ALG protocol family
Jul 15 00:11:26.412147 systemd-networkd[1435]: cilium_host: Gained IPv6LL
Jul 15 00:11:26.757268 systemd-networkd[1435]: lxc_health: Link UP
Jul 15 00:11:26.762101 systemd-networkd[1435]: lxc_health: Gained carrier
Jul 15 00:11:27.099198 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:58300.service - OpenSSH per-connection server daemon (10.0.0.1:58300).
Jul 15 00:11:27.140392 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 58300 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:27.141812 sshd-session[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:27.146186 systemd-logind[1492]: New session 11 of user core.
Jul 15 00:11:27.153982 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 00:11:27.242049 systemd-networkd[1435]: lxc3943847b6833: Link UP
Jul 15 00:11:27.252985 kernel: eth0: renamed from tmp18f3e
Jul 15 00:11:27.271914 kernel: eth0: renamed from tmp8cc1c
Jul 15 00:11:27.277542 systemd-networkd[1435]: lxc0af87951988b: Link UP
Jul 15 00:11:27.277872 systemd-networkd[1435]: lxc3943847b6833: Gained carrier
Jul 15 00:11:27.278227 systemd-networkd[1435]: lxc0af87951988b: Gained carrier
Jul 15 00:11:27.310937 sshd[3875]: Connection closed by 10.0.0.1 port 58300
Jul 15 00:11:27.312083 sshd-session[3873]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:27.315313 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:58300.service: Deactivated successfully.
Jul 15 00:11:27.317741 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 00:11:27.320163 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Jul 15 00:11:27.321276 systemd-logind[1492]: Removed session 11.
Jul 15 00:11:27.948688 systemd-networkd[1435]: cilium_vxlan: Gained IPv6LL
Jul 15 00:11:28.206035 systemd-networkd[1435]: lxc_health: Gained IPv6LL
Jul 15 00:11:28.235302 kubelet[2645]: E0715 00:11:28.235247 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:28.332051 systemd-networkd[1435]: lxc0af87951988b: Gained IPv6LL
Jul 15 00:11:28.593619 kubelet[2645]: E0715 00:11:28.593581 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:28.780046 systemd-networkd[1435]: lxc3943847b6833: Gained IPv6LL
Jul 15 00:11:29.595252 kubelet[2645]: E0715 00:11:29.595193 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:30.655677 containerd[1516]: time="2025-07-15T00:11:30.655583115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:11:30.655677 containerd[1516]: time="2025-07-15T00:11:30.655651083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:11:30.656282 containerd[1516]: time="2025-07-15T00:11:30.656178424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:30.656904 containerd[1516]: time="2025-07-15T00:11:30.656832473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:30.665743 containerd[1516]: time="2025-07-15T00:11:30.665484049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:11:30.665743 containerd[1516]: time="2025-07-15T00:11:30.665542649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:11:30.665743 containerd[1516]: time="2025-07-15T00:11:30.665558809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:30.665743 containerd[1516]: time="2025-07-15T00:11:30.665647796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:11:30.671838 systemd[1]: run-containerd-runc-k8s.io-18f3ea6c13ea8f046e7fb4c0f70f632157608bc021abf1daead05febfef3b44f-runc.zedWST.mount: Deactivated successfully.
Jul 15 00:11:30.681005 systemd[1]: Started cri-containerd-18f3ea6c13ea8f046e7fb4c0f70f632157608bc021abf1daead05febfef3b44f.scope - libcontainer container 18f3ea6c13ea8f046e7fb4c0f70f632157608bc021abf1daead05febfef3b44f.
Jul 15 00:11:30.702002 systemd[1]: Started cri-containerd-8cc1c0710a342b51467e7b0ab5122a84f0e09591cd88dfe92ba32fc45a3d4596.scope - libcontainer container 8cc1c0710a342b51467e7b0ab5122a84f0e09591cd88dfe92ba32fc45a3d4596.
Jul 15 00:11:30.705894 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 00:11:30.714422 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 00:11:30.733130 containerd[1516]: time="2025-07-15T00:11:30.733075238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4fdhr,Uid:bf1a6d60-5b7a-4739-b5f0-a0c30be912b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"18f3ea6c13ea8f046e7fb4c0f70f632157608bc021abf1daead05febfef3b44f\""
Jul 15 00:11:30.733924 kubelet[2645]: E0715 00:11:30.733764 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:30.736834 containerd[1516]: time="2025-07-15T00:11:30.736767556Z" level=info msg="CreateContainer within sandbox \"18f3ea6c13ea8f046e7fb4c0f70f632157608bc021abf1daead05febfef3b44f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 00:11:30.742329 containerd[1516]: time="2025-07-15T00:11:30.742263162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5qz24,Uid:1611c4af-125c-40d6-b898-c9e260bc7271,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cc1c0710a342b51467e7b0ab5122a84f0e09591cd88dfe92ba32fc45a3d4596\""
Jul 15 00:11:30.742891 kubelet[2645]: E0715 00:11:30.742793 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:30.744984 containerd[1516]: time="2025-07-15T00:11:30.744951222Z" level=info msg="CreateContainer within sandbox \"8cc1c0710a342b51467e7b0ab5122a84f0e09591cd88dfe92ba32fc45a3d4596\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 00:11:30.756173 containerd[1516]: time="2025-07-15T00:11:30.756120538Z" level=info msg="CreateContainer within sandbox \"18f3ea6c13ea8f046e7fb4c0f70f632157608bc021abf1daead05febfef3b44f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"466aff326f1ba6d2fc111745af073b3aa1e6cf873c9ea8136c0cd98c774de673\""
Jul 15 00:11:30.756553 containerd[1516]: time="2025-07-15T00:11:30.756534026Z" level=info msg="StartContainer for \"466aff326f1ba6d2fc111745af073b3aa1e6cf873c9ea8136c0cd98c774de673\""
Jul 15 00:11:30.765717 containerd[1516]: time="2025-07-15T00:11:30.765667707Z" level=info msg="CreateContainer within sandbox \"8cc1c0710a342b51467e7b0ab5122a84f0e09591cd88dfe92ba32fc45a3d4596\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1d72610fde068624dddb19a9afbf72697ed6c594ffc37aa1ccd723398a95189\""
Jul 15 00:11:30.766271 containerd[1516]: time="2025-07-15T00:11:30.766105370Z" level=info msg="StartContainer for \"d1d72610fde068624dddb19a9afbf72697ed6c594ffc37aa1ccd723398a95189\""
Jul 15 00:11:30.788112 systemd[1]: Started cri-containerd-466aff326f1ba6d2fc111745af073b3aa1e6cf873c9ea8136c0cd98c774de673.scope - libcontainer container 466aff326f1ba6d2fc111745af073b3aa1e6cf873c9ea8136c0cd98c774de673.
Jul 15 00:11:30.791985 systemd[1]: Started cri-containerd-d1d72610fde068624dddb19a9afbf72697ed6c594ffc37aa1ccd723398a95189.scope - libcontainer container d1d72610fde068624dddb19a9afbf72697ed6c594ffc37aa1ccd723398a95189.
Jul 15 00:11:30.826284 containerd[1516]: time="2025-07-15T00:11:30.826169497Z" level=info msg="StartContainer for \"466aff326f1ba6d2fc111745af073b3aa1e6cf873c9ea8136c0cd98c774de673\" returns successfully"
Jul 15 00:11:30.826284 containerd[1516]: time="2025-07-15T00:11:30.826197138Z" level=info msg="StartContainer for \"d1d72610fde068624dddb19a9afbf72697ed6c594ffc37aa1ccd723398a95189\" returns successfully"
Jul 15 00:11:31.599882 kubelet[2645]: E0715 00:11:31.599832 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:31.605515 kubelet[2645]: E0715 00:11:31.605477 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:31.779651 kubelet[2645]: I0715 00:11:31.779592 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4fdhr" podStartSLOduration=27.779573413 podStartE2EDuration="27.779573413s" podCreationTimestamp="2025-07-15 00:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:11:31.779241098 +0000 UTC m=+33.427796738" watchObservedRunningTime="2025-07-15 00:11:31.779573413 +0000 UTC m=+33.428129053"
Jul 15 00:11:32.325697 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:51316.service - OpenSSH per-connection server daemon (10.0.0.1:51316).
Jul 15 00:11:32.370437 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 51316 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:32.372103 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:32.376426 systemd-logind[1492]: New session 12 of user core.
Jul 15 00:11:32.386990 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 00:11:32.499352 sshd[4098]: Connection closed by 10.0.0.1 port 51316
Jul 15 00:11:32.499730 sshd-session[4096]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:32.503356 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:51316.service: Deactivated successfully.
Jul 15 00:11:32.505595 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 00:11:32.506360 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
Jul 15 00:11:32.507290 systemd-logind[1492]: Removed session 12.
Jul 15 00:11:32.606256 kubelet[2645]: E0715 00:11:32.606143 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:32.606256 kubelet[2645]: E0715 00:11:32.606169 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:33.608211 kubelet[2645]: E0715 00:11:33.608173 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:11:37.515082 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330).
Jul 15 00:11:37.558690 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:37.560276 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:37.564553 systemd-logind[1492]: New session 13 of user core.
Jul 15 00:11:37.573012 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 00:11:37.684343 sshd[4116]: Connection closed by 10.0.0.1 port 51330
Jul 15 00:11:37.684738 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:37.695749 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:51330.service: Deactivated successfully.
Jul 15 00:11:37.698124 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 00:11:37.699935 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
Jul 15 00:11:37.707180 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:51340.service - OpenSSH per-connection server daemon (10.0.0.1:51340).
Jul 15 00:11:37.708405 systemd-logind[1492]: Removed session 13.
Jul 15 00:11:37.746335 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 51340 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:37.748054 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:37.752772 systemd-logind[1492]: New session 14 of user core.
Jul 15 00:11:37.768008 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 00:11:37.908775 sshd[4132]: Connection closed by 10.0.0.1 port 51340
Jul 15 00:11:37.909329 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:37.921321 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:51340.service: Deactivated successfully.
Jul 15 00:11:37.924288 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 00:11:37.928308 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
Jul 15 00:11:37.936642 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:51352.service - OpenSSH per-connection server daemon (10.0.0.1:51352).
Jul 15 00:11:37.938793 systemd-logind[1492]: Removed session 14.
Jul 15 00:11:37.975707 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 51352 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:37.977504 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:37.982192 systemd-logind[1492]: New session 15 of user core.
Jul 15 00:11:37.992009 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 00:11:38.110380 sshd[4145]: Connection closed by 10.0.0.1 port 51352
Jul 15 00:11:38.112097 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:38.115969 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:51352.service: Deactivated successfully.
Jul 15 00:11:38.118087 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 00:11:38.118843 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
Jul 15 00:11:38.119841 systemd-logind[1492]: Removed session 15.
Jul 15 00:11:43.130236 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:51056.service - OpenSSH per-connection server daemon (10.0.0.1:51056).
Jul 15 00:11:43.195566 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 51056 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:43.200266 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:43.213837 systemd-logind[1492]: New session 16 of user core.
Jul 15 00:11:43.225952 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 00:11:43.412754 sshd[4162]: Connection closed by 10.0.0.1 port 51056
Jul 15 00:11:43.413755 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:43.421260 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:51056.service: Deactivated successfully.
Jul 15 00:11:43.424606 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 00:11:43.426027 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
Jul 15 00:11:43.427654 systemd-logind[1492]: Removed session 16.
Jul 15 00:11:48.425710 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:51058.service - OpenSSH per-connection server daemon (10.0.0.1:51058).
Jul 15 00:11:48.464463 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 51058 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:48.466154 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:48.470379 systemd-logind[1492]: New session 17 of user core.
Jul 15 00:11:48.478051 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 00:11:48.591721 sshd[4179]: Connection closed by 10.0.0.1 port 51058
Jul 15 00:11:48.592143 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:48.603251 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:51058.service: Deactivated successfully.
Jul 15 00:11:48.605422 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 00:11:48.606315 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
Jul 15 00:11:48.622187 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:51074.service - OpenSSH per-connection server daemon (10.0.0.1:51074).
Jul 15 00:11:48.623270 systemd-logind[1492]: Removed session 17.
Jul 15 00:11:48.657910 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 51074 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:48.659615 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:48.664118 systemd-logind[1492]: New session 18 of user core.
Jul 15 00:11:48.674009 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 00:11:48.859544 sshd[4194]: Connection closed by 10.0.0.1 port 51074
Jul 15 00:11:48.860185 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:48.877854 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:51074.service: Deactivated successfully.
Jul 15 00:11:48.879983 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 00:11:48.881745 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
Jul 15 00:11:48.883123 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:51090.service - OpenSSH per-connection server daemon (10.0.0.1:51090).
Jul 15 00:11:48.884157 systemd-logind[1492]: Removed session 18.
Jul 15 00:11:48.927604 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 51090 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:48.929219 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:48.933911 systemd-logind[1492]: New session 19 of user core.
Jul 15 00:11:48.940992 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 00:11:50.339689 sshd[4208]: Connection closed by 10.0.0.1 port 51090
Jul 15 00:11:50.340194 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:50.356928 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:51090.service: Deactivated successfully.
Jul 15 00:11:50.359269 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 00:11:50.360053 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
Jul 15 00:11:50.382352 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:39122.service - OpenSSH per-connection server daemon (10.0.0.1:39122).
Jul 15 00:11:50.383324 systemd-logind[1492]: Removed session 19.
Jul 15 00:11:50.416468 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 39122 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:50.418127 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:50.422820 systemd-logind[1492]: New session 20 of user core.
Jul 15 00:11:50.429008 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 00:11:50.650513 sshd[4229]: Connection closed by 10.0.0.1 port 39122
Jul 15 00:11:50.652063 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:50.665431 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:39122.service: Deactivated successfully.
Jul 15 00:11:50.667527 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 00:11:50.668445 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
Jul 15 00:11:50.683315 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:39126.service - OpenSSH per-connection server daemon (10.0.0.1:39126).
Jul 15 00:11:50.684167 systemd-logind[1492]: Removed session 20.
Jul 15 00:11:50.717226 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 39126 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:50.718825 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:50.724001 systemd-logind[1492]: New session 21 of user core.
Jul 15 00:11:50.738162 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 00:11:50.851031 sshd[4242]: Connection closed by 10.0.0.1 port 39126
Jul 15 00:11:50.851478 sshd-session[4239]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:50.855638 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:39126.service: Deactivated successfully.
Jul 15 00:11:50.857896 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 00:11:50.858674 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
Jul 15 00:11:50.859660 systemd-logind[1492]: Removed session 21.
Jul 15 00:11:55.880470 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:39132.service - OpenSSH per-connection server daemon (10.0.0.1:39132).
Jul 15 00:11:55.930921 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 39132 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:11:55.933283 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:11:55.939961 systemd-logind[1492]: New session 22 of user core.
Jul 15 00:11:55.948195 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 00:11:56.107427 sshd[4260]: Connection closed by 10.0.0.1 port 39132
Jul 15 00:11:56.107985 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
Jul 15 00:11:56.113934 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:39132.service: Deactivated successfully.
Jul 15 00:11:56.117315 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 00:11:56.118400 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
Jul 15 00:11:56.121496 systemd-logind[1492]: Removed session 22.
Jul 15 00:12:01.141388 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536).
Jul 15 00:12:01.196956 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:12:01.200723 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:12:01.219484 systemd-logind[1492]: New session 23 of user core.
Jul 15 00:12:01.233269 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 00:12:01.394332 sshd[4277]: Connection closed by 10.0.0.1 port 57536
Jul 15 00:12:01.395175 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Jul 15 00:12:01.402292 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:57536.service: Deactivated successfully.
Jul 15 00:12:01.406210 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 00:12:01.408251 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
Jul 15 00:12:01.411723 systemd-logind[1492]: Removed session 23.
Jul 15 00:12:06.411437 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:57546.service - OpenSSH per-connection server daemon (10.0.0.1:57546).
Jul 15 00:12:06.453563 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 57546 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:12:06.455472 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:12:06.460606 systemd-logind[1492]: New session 24 of user core.
Jul 15 00:12:06.470141 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 00:12:06.591143 sshd[4295]: Connection closed by 10.0.0.1 port 57546
Jul 15 00:12:06.591564 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Jul 15 00:12:06.595760 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:57546.service: Deactivated successfully.
Jul 15 00:12:06.598060 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 00:12:06.599059 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
Jul 15 00:12:06.600248 systemd-logind[1492]: Removed session 24.
Jul 15 00:12:11.603960 systemd[1]: Started sshd@24-10.0.0.145:22-10.0.0.1:53968.service - OpenSSH per-connection server daemon (10.0.0.1:53968).
Jul 15 00:12:11.641105 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 53968 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:12:11.642458 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:12:11.646189 systemd-logind[1492]: New session 25 of user core.
Jul 15 00:12:11.656989 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 00:12:11.758136 sshd[4310]: Connection closed by 10.0.0.1 port 53968
Jul 15 00:12:11.758547 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Jul 15 00:12:11.769801 systemd[1]: sshd@24-10.0.0.145:22-10.0.0.1:53968.service: Deactivated successfully.
Jul 15 00:12:11.771654 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 00:12:11.773194 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
Jul 15 00:12:11.787134 systemd[1]: Started sshd@25-10.0.0.145:22-10.0.0.1:53974.service - OpenSSH per-connection server daemon (10.0.0.1:53974).
Jul 15 00:12:11.788123 systemd-logind[1492]: Removed session 25.
Jul 15 00:12:11.821329 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 53974 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:12:11.822668 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:12:11.826964 systemd-logind[1492]: New session 26 of user core.
Jul 15 00:12:11.837009 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 15 00:12:13.150912 kubelet[2645]: I0715 00:12:13.150076 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5qz24" podStartSLOduration=69.150056867 podStartE2EDuration="1m9.150056867s" podCreationTimestamp="2025-07-15 00:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:11:31.805418371 +0000 UTC m=+33.453974011" watchObservedRunningTime="2025-07-15 00:12:13.150056867 +0000 UTC m=+74.798612507"
Jul 15 00:12:13.168542 containerd[1516]: time="2025-07-15T00:12:13.168266751Z" level=info msg="StopContainer for \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" with timeout 30 (s)"
Jul 15 00:12:13.171179 containerd[1516]: time="2025-07-15T00:12:13.171144151Z" level=info msg="Stop container \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" with signal terminated"
Jul 15 00:12:13.206427 containerd[1516]: time="2025-07-15T00:12:13.206357269Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 00:12:13.214614 systemd[1]: cri-containerd-25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e.scope: Deactivated successfully.
Jul 15 00:12:13.219904 containerd[1516]: time="2025-07-15T00:12:13.219311207Z" level=info msg="StopContainer for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" with timeout 2 (s)"
Jul 15 00:12:13.228931 containerd[1516]: time="2025-07-15T00:12:13.226109798Z" level=info msg="Stop container \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" with signal terminated"
Jul 15 00:12:13.235674 systemd-networkd[1435]: lxc_health: Link DOWN
Jul 15 00:12:13.238894 systemd-networkd[1435]: lxc_health: Lost carrier
Jul 15 00:12:13.254392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e-rootfs.mount: Deactivated successfully.
Jul 15 00:12:13.257496 systemd[1]: cri-containerd-9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75.scope: Deactivated successfully.
Jul 15 00:12:13.257883 systemd[1]: cri-containerd-9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75.scope: Consumed 6.856s CPU time, 128.1M memory peak, 316K read from disk, 13.3M written to disk.
Jul 15 00:12:13.262741 containerd[1516]: time="2025-07-15T00:12:13.262675014Z" level=info msg="shim disconnected" id=25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e namespace=k8s.io
Jul 15 00:12:13.262741 containerd[1516]: time="2025-07-15T00:12:13.262735300Z" level=warning msg="cleaning up after shim disconnected" id=25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e namespace=k8s.io
Jul 15 00:12:13.262741 containerd[1516]: time="2025-07-15T00:12:13.262743775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:13.277981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75-rootfs.mount: Deactivated successfully.
Jul 15 00:12:13.285137 containerd[1516]: time="2025-07-15T00:12:13.285093608Z" level=info msg="StopContainer for \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" returns successfully"
Jul 15 00:12:13.285544 containerd[1516]: time="2025-07-15T00:12:13.285462625Z" level=info msg="shim disconnected" id=9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75 namespace=k8s.io
Jul 15 00:12:13.285544 containerd[1516]: time="2025-07-15T00:12:13.285531667Z" level=warning msg="cleaning up after shim disconnected" id=9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75 namespace=k8s.io
Jul 15 00:12:13.285544 containerd[1516]: time="2025-07-15T00:12:13.285542988Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:13.289224 containerd[1516]: time="2025-07-15T00:12:13.289190745Z" level=info msg="StopPodSandbox for \"08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73\""
Jul 15 00:12:13.293616 containerd[1516]: time="2025-07-15T00:12:13.289242854Z" level=info msg="Container to stop \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 00:12:13.295787 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73-shm.mount: Deactivated successfully.
Jul 15 00:12:13.301942 systemd[1]: cri-containerd-08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73.scope: Deactivated successfully.
Jul 15 00:12:13.306817 containerd[1516]: time="2025-07-15T00:12:13.306773618Z" level=info msg="StopContainer for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" returns successfully"
Jul 15 00:12:13.307419 containerd[1516]: time="2025-07-15T00:12:13.307390529Z" level=info msg="StopPodSandbox for \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\""
Jul 15 00:12:13.307494 containerd[1516]: time="2025-07-15T00:12:13.307427881Z" level=info msg="Container to stop \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 00:12:13.307524 containerd[1516]: time="2025-07-15T00:12:13.307492404Z" level=info msg="Container to stop \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 00:12:13.307524 containerd[1516]: time="2025-07-15T00:12:13.307502043Z" level=info msg="Container to stop \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 00:12:13.307524 containerd[1516]: time="2025-07-15T00:12:13.307510990Z" level=info msg="Container to stop \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 00:12:13.307524 containerd[1516]: time="2025-07-15T00:12:13.307519156Z" level=info msg="Container to stop \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 00:12:13.311965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6-shm.mount: Deactivated successfully.
Jul 15 00:12:13.314848 systemd[1]: cri-containerd-917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6.scope: Deactivated successfully.
Jul 15 00:12:13.337243 containerd[1516]: time="2025-07-15T00:12:13.337153279Z" level=info msg="shim disconnected" id=08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73 namespace=k8s.io
Jul 15 00:12:13.337243 containerd[1516]: time="2025-07-15T00:12:13.337224484Z" level=warning msg="cleaning up after shim disconnected" id=08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73 namespace=k8s.io
Jul 15 00:12:13.337243 containerd[1516]: time="2025-07-15T00:12:13.337238501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:13.338012 containerd[1516]: time="2025-07-15T00:12:13.337949834Z" level=info msg="shim disconnected" id=917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6 namespace=k8s.io
Jul 15 00:12:13.338107 containerd[1516]: time="2025-07-15T00:12:13.338012083Z" level=warning msg="cleaning up after shim disconnected" id=917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6 namespace=k8s.io
Jul 15 00:12:13.338107 containerd[1516]: time="2025-07-15T00:12:13.338024177Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:13.355476 containerd[1516]: time="2025-07-15T00:12:13.355415824Z" level=info msg="TearDown network for sandbox \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" successfully"
Jul 15 00:12:13.355476 containerd[1516]: time="2025-07-15T00:12:13.355458845Z" level=info msg="StopPodSandbox for \"917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6\" returns successfully"
Jul 15 00:12:13.358201 containerd[1516]: time="2025-07-15T00:12:13.358149999Z" level=info msg="TearDown network for sandbox \"08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73\" successfully"
Jul 15 00:12:13.358201 containerd[1516]: time="2025-07-15T00:12:13.358180147Z" level=info msg="StopPodSandbox for \"08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73\" returns successfully"
Jul 15 00:12:13.375013 kubelet[2645]: I0715 00:12:13.374233 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-run\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375013 kubelet[2645]: I0715 00:12:13.374292 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-hubble-tls\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375013 kubelet[2645]: I0715 00:12:13.374317 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22p7j\" (UniqueName: \"kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-kube-api-access-22p7j\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375013 kubelet[2645]: I0715 00:12:13.374334 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-kernel\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375013 kubelet[2645]: I0715 00:12:13.374347 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-net\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375013 kubelet[2645]: I0715 00:12:13.374363 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-config-path\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375448 kubelet[2645]: I0715 00:12:13.374377 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/959108f4-5e12-4dfc-bb30-42860c11dc8e-cilium-config-path\") pod \"959108f4-5e12-4dfc-bb30-42860c11dc8e\" (UID: \"959108f4-5e12-4dfc-bb30-42860c11dc8e\") "
Jul 15 00:12:13.375448 kubelet[2645]: I0715 00:12:13.374391 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-xtables-lock\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375448 kubelet[2645]: I0715 00:12:13.374392 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 00:12:13.375448 kubelet[2645]: I0715 00:12:13.374407 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-etc-cni-netd\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375448 kubelet[2645]: I0715 00:12:13.374464 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cni-path\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375448 kubelet[2645]: I0715 00:12:13.374487 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51af13a5-f0af-4da4-a090-c2431676c9ef-clustermesh-secrets\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375592 kubelet[2645]: I0715 00:12:13.374503 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-hostproc\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375592 kubelet[2645]: I0715 00:12:13.374518 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-lib-modules\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375592 kubelet[2645]: I0715 00:12:13.374533 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-bpf-maps\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375592 kubelet[2645]: I0715 00:12:13.374546 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-cgroup\") pod \"51af13a5-f0af-4da4-a090-c2431676c9ef\" (UID: \"51af13a5-f0af-4da4-a090-c2431676c9ef\") "
Jul 15 00:12:13.375592 kubelet[2645]: I0715 00:12:13.374563 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wtvw\" (UniqueName: \"kubernetes.io/projected/959108f4-5e12-4dfc-bb30-42860c11dc8e-kube-api-access-9wtvw\") pod \"959108f4-5e12-4dfc-bb30-42860c11dc8e\" (UID: \"959108f4-5e12-4dfc-bb30-42860c11dc8e\") "
Jul 15 00:12:13.375592 kubelet[2645]: I0715 00:12:13.374589 2645 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 15 00:12:13.376122 kubelet[2645]: I0715 00:12:13.374431 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 00:12:13.376122 kubelet[2645]: I0715 00:12:13.376079 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-hostproc" (OuterVolumeSpecName: "hostproc") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 00:12:13.376122 kubelet[2645]: I0715 00:12:13.376117 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cni-path" (OuterVolumeSpecName: "cni-path") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 00:12:13.376643 kubelet[2645]: I0715 00:12:13.376374 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 00:12:13.376643 kubelet[2645]: I0715 00:12:13.376404 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 00:12:13.376643 kubelet[2645]: I0715 00:12:13.376420 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 00:12:13.378486 kubelet[2645]: I0715 00:12:13.377923 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 00:12:13.379098 kubelet[2645]: I0715 00:12:13.378828 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 00:12:13.379460 kubelet[2645]: I0715 00:12:13.379417 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 00:12:13.379518 kubelet[2645]: I0715 00:12:13.379495 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 00:12:13.381920 kubelet[2645]: I0715 00:12:13.381843 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/959108f4-5e12-4dfc-bb30-42860c11dc8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "959108f4-5e12-4dfc-bb30-42860c11dc8e" (UID: "959108f4-5e12-4dfc-bb30-42860c11dc8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 00:12:13.382115 kubelet[2645]: I0715 00:12:13.382091 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-kube-api-access-22p7j" (OuterVolumeSpecName: "kube-api-access-22p7j") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "kube-api-access-22p7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 00:12:13.382429 kubelet[2645]: I0715 00:12:13.382395 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959108f4-5e12-4dfc-bb30-42860c11dc8e-kube-api-access-9wtvw" (OuterVolumeSpecName: "kube-api-access-9wtvw") pod "959108f4-5e12-4dfc-bb30-42860c11dc8e" (UID: "959108f4-5e12-4dfc-bb30-42860c11dc8e"). InnerVolumeSpecName "kube-api-access-9wtvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 00:12:13.383343 kubelet[2645]: I0715 00:12:13.383310 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 00:12:13.384650 kubelet[2645]: I0715 00:12:13.384626 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51af13a5-f0af-4da4-a090-c2431676c9ef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "51af13a5-f0af-4da4-a090-c2431676c9ef" (UID: "51af13a5-f0af-4da4-a090-c2431676c9ef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475252 2645 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475284 2645 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475294 2645 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51af13a5-f0af-4da4-a090-c2431676c9ef-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475305 2645 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475315 2645 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475324 2645 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475332 2645 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475348 kubelet[2645]: I0715 00:12:13.475341 2645 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wtvw\" (UniqueName: \"kubernetes.io/projected/959108f4-5e12-4dfc-bb30-42860c11dc8e-kube-api-access-9wtvw\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475350 2645 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475360 2645 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22p7j\" (UniqueName: \"kubernetes.io/projected/51af13a5-f0af-4da4-a090-c2431676c9ef-kube-api-access-22p7j\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475369 2645 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475377 2645 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475385 2645 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/51af13a5-f0af-4da4-a090-c2431676c9ef-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475394 2645 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/959108f4-5e12-4dfc-bb30-42860c11dc8e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.475579 kubelet[2645]: I0715 00:12:13.475402 2645 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51af13a5-f0af-4da4-a090-c2431676c9ef-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 00:12:13.493720 kubelet[2645]: E0715 00:12:13.493670 2645 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 00:12:13.706943 kubelet[2645]: I0715 00:12:13.706911 2645 scope.go:117] "RemoveContainer" containerID="9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75" Jul 15 00:12:13.715407 systemd[1]: Removed slice kubepods-burstable-pod51af13a5_f0af_4da4_a090_c2431676c9ef.slice - libcontainer container kubepods-burstable-pod51af13a5_f0af_4da4_a090_c2431676c9ef.slice. Jul 15 00:12:13.715520 systemd[1]: kubepods-burstable-pod51af13a5_f0af_4da4_a090_c2431676c9ef.slice: Consumed 6.958s CPU time, 128.4M memory peak, 332K read from disk, 16.6M written to disk. Jul 15 00:12:13.718280 containerd[1516]: time="2025-07-15T00:12:13.718228873Z" level=info msg="RemoveContainer for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\"" Jul 15 00:12:13.720473 systemd[1]: Removed slice kubepods-besteffort-pod959108f4_5e12_4dfc_bb30_42860c11dc8e.slice - libcontainer container kubepods-besteffort-pod959108f4_5e12_4dfc_bb30_42860c11dc8e.slice. 
Jul 15 00:12:13.722417 containerd[1516]: time="2025-07-15T00:12:13.722379421Z" level=info msg="RemoveContainer for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" returns successfully" Jul 15 00:12:13.722693 kubelet[2645]: I0715 00:12:13.722641 2645 scope.go:117] "RemoveContainer" containerID="603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1" Jul 15 00:12:13.723955 containerd[1516]: time="2025-07-15T00:12:13.723927185Z" level=info msg="RemoveContainer for \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\"" Jul 15 00:12:13.728612 containerd[1516]: time="2025-07-15T00:12:13.728231058Z" level=info msg="RemoveContainer for \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\" returns successfully" Jul 15 00:12:13.728727 kubelet[2645]: I0715 00:12:13.728477 2645 scope.go:117] "RemoveContainer" containerID="ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf" Jul 15 00:12:13.729524 containerd[1516]: time="2025-07-15T00:12:13.729495929Z" level=info msg="RemoveContainer for \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\"" Jul 15 00:12:13.733044 containerd[1516]: time="2025-07-15T00:12:13.733014549Z" level=info msg="RemoveContainer for \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\" returns successfully" Jul 15 00:12:13.733232 kubelet[2645]: I0715 00:12:13.733196 2645 scope.go:117] "RemoveContainer" containerID="333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b" Jul 15 00:12:13.734492 containerd[1516]: time="2025-07-15T00:12:13.734249343Z" level=info msg="RemoveContainer for \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\"" Jul 15 00:12:13.737807 containerd[1516]: time="2025-07-15T00:12:13.737763353Z" level=info msg="RemoveContainer for \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\" returns successfully" Jul 15 00:12:13.737963 kubelet[2645]: I0715 00:12:13.737936 2645 scope.go:117] 
"RemoveContainer" containerID="bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6" Jul 15 00:12:13.738837 containerd[1516]: time="2025-07-15T00:12:13.738796551Z" level=info msg="RemoveContainer for \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\"" Jul 15 00:12:13.742060 containerd[1516]: time="2025-07-15T00:12:13.742028912Z" level=info msg="RemoveContainer for \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\" returns successfully" Jul 15 00:12:13.742224 kubelet[2645]: I0715 00:12:13.742190 2645 scope.go:117] "RemoveContainer" containerID="9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75" Jul 15 00:12:13.742830 containerd[1516]: time="2025-07-15T00:12:13.742384242Z" level=error msg="ContainerStatus for \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\": not found" Jul 15 00:12:13.750120 kubelet[2645]: E0715 00:12:13.750085 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\": not found" containerID="9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75" Jul 15 00:12:13.750232 kubelet[2645]: I0715 00:12:13.750121 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75"} err="failed to get container status \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eaa96ed319e7aa84e9da6f5c407688510bcafc50bf2852a850405da1d939e75\": not found" Jul 15 00:12:13.750232 kubelet[2645]: I0715 00:12:13.750201 2645 scope.go:117] "RemoveContainer" 
containerID="603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1" Jul 15 00:12:13.750424 containerd[1516]: time="2025-07-15T00:12:13.750387279Z" level=error msg="ContainerStatus for \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\": not found" Jul 15 00:12:13.750547 kubelet[2645]: E0715 00:12:13.750525 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\": not found" containerID="603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1" Jul 15 00:12:13.750584 kubelet[2645]: I0715 00:12:13.750548 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1"} err="failed to get container status \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"603d1356a0667f3b75ef1d499f77f249b5d5b2f29c9c5fb586e0960fac6cebd1\": not found" Jul 15 00:12:13.750584 kubelet[2645]: I0715 00:12:13.750563 2645 scope.go:117] "RemoveContainer" containerID="ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf" Jul 15 00:12:13.750725 containerd[1516]: time="2025-07-15T00:12:13.750701712Z" level=error msg="ContainerStatus for \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\": not found" Jul 15 00:12:13.750899 kubelet[2645]: E0715 00:12:13.750871 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\": not found" containerID="ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf" Jul 15 00:12:13.750938 kubelet[2645]: I0715 00:12:13.750909 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf"} err="failed to get container status \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca73c865be5f5abf1f7c9a2c871e30ac2168669238a549ff7a980b5afaff6dcf\": not found" Jul 15 00:12:13.750965 kubelet[2645]: I0715 00:12:13.750939 2645 scope.go:117] "RemoveContainer" containerID="333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b" Jul 15 00:12:13.751179 containerd[1516]: time="2025-07-15T00:12:13.751137466Z" level=error msg="ContainerStatus for \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\": not found" Jul 15 00:12:13.751304 kubelet[2645]: E0715 00:12:13.751276 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\": not found" containerID="333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b" Jul 15 00:12:13.751413 kubelet[2645]: I0715 00:12:13.751309 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b"} err="failed to get container status \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"333d3c9f5ca791e54072b7f588741767749847ac9a647b7df18e8e6ebe95415b\": not found" Jul 15 00:12:13.751413 kubelet[2645]: I0715 00:12:13.751335 2645 scope.go:117] "RemoveContainer" containerID="bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6" Jul 15 00:12:13.751489 containerd[1516]: time="2025-07-15T00:12:13.751455786Z" level=error msg="ContainerStatus for \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\": not found" Jul 15 00:12:13.751581 kubelet[2645]: E0715 00:12:13.751554 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\": not found" containerID="bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6" Jul 15 00:12:13.751637 kubelet[2645]: I0715 00:12:13.751579 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6"} err="failed to get container status \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd82719933baf1347dc11e3031a3a1b4e3ef7016336b6d4fbb92c8b8e46a12f6\": not found" Jul 15 00:12:13.751637 kubelet[2645]: I0715 00:12:13.751594 2645 scope.go:117] "RemoveContainer" containerID="25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e" Jul 15 00:12:13.752811 containerd[1516]: time="2025-07-15T00:12:13.752562846Z" level=info msg="RemoveContainer for \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\"" Jul 15 00:12:13.755745 containerd[1516]: time="2025-07-15T00:12:13.755703781Z" level=info msg="RemoveContainer for 
\"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" returns successfully" Jul 15 00:12:13.755868 kubelet[2645]: I0715 00:12:13.755832 2645 scope.go:117] "RemoveContainer" containerID="25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e" Jul 15 00:12:13.756027 containerd[1516]: time="2025-07-15T00:12:13.755999337Z" level=error msg="ContainerStatus for \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\": not found" Jul 15 00:12:13.756142 kubelet[2645]: E0715 00:12:13.756115 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\": not found" containerID="25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e" Jul 15 00:12:13.756194 kubelet[2645]: I0715 00:12:13.756139 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e"} err="failed to get container status \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\": rpc error: code = NotFound desc = an error occurred when try to find container \"25ca32d00c1155024bbe7ab9fa99cd4bad212bd5c2ed07c5c69da914dc80754e\": not found" Jul 15 00:12:14.167957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08f04b14af43b59d43378e067ebb52f506ae01f617d0d51ac084204bb57d1d73-rootfs.mount: Deactivated successfully. Jul 15 00:12:14.168092 systemd[1]: var-lib-kubelet-pods-959108f4\x2d5e12\x2d4dfc\x2dbb30\x2d42860c11dc8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9wtvw.mount: Deactivated successfully. 
Jul 15 00:12:14.168200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917cbae8941ff7919bebb1c05650b9da5e6c45845508945ef48ece5564bb1ae6-rootfs.mount: Deactivated successfully. Jul 15 00:12:14.168308 systemd[1]: var-lib-kubelet-pods-51af13a5\x2df0af\x2d4da4\x2da090\x2dc2431676c9ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22p7j.mount: Deactivated successfully. Jul 15 00:12:14.168394 systemd[1]: var-lib-kubelet-pods-51af13a5\x2df0af\x2d4da4\x2da090\x2dc2431676c9ef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 00:12:14.168476 systemd[1]: var-lib-kubelet-pods-51af13a5\x2df0af\x2d4da4\x2da090\x2dc2431676c9ef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 00:12:14.447901 kubelet[2645]: I0715 00:12:14.447755 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" path="/var/lib/kubelet/pods/51af13a5-f0af-4da4-a090-c2431676c9ef/volumes" Jul 15 00:12:14.448698 kubelet[2645]: I0715 00:12:14.448668 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959108f4-5e12-4dfc-bb30-42860c11dc8e" path="/var/lib/kubelet/pods/959108f4-5e12-4dfc-bb30-42860c11dc8e/volumes" Jul 15 00:12:15.127621 sshd[4325]: Connection closed by 10.0.0.1 port 53974 Jul 15 00:12:15.128130 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Jul 15 00:12:15.143733 systemd[1]: sshd@25-10.0.0.145:22-10.0.0.1:53974.service: Deactivated successfully. Jul 15 00:12:15.145554 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 00:12:15.146333 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit. Jul 15 00:12:15.158107 systemd[1]: Started sshd@26-10.0.0.145:22-10.0.0.1:53988.service - OpenSSH per-connection server daemon (10.0.0.1:53988). Jul 15 00:12:15.159215 systemd-logind[1492]: Removed session 26. 
Jul 15 00:12:15.196447 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 53988 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s Jul 15 00:12:15.197833 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 00:12:15.202244 systemd-logind[1492]: New session 27 of user core. Jul 15 00:12:15.211996 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 00:12:15.664282 sshd[4488]: Connection closed by 10.0.0.1 port 53988 Jul 15 00:12:15.664786 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Jul 15 00:12:15.676680 kubelet[2645]: E0715 00:12:15.676611 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" containerName="clean-cilium-state" Jul 15 00:12:15.676680 kubelet[2645]: E0715 00:12:15.676646 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="959108f4-5e12-4dfc-bb30-42860c11dc8e" containerName="cilium-operator" Jul 15 00:12:15.676680 kubelet[2645]: E0715 00:12:15.676654 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" containerName="mount-bpf-fs" Jul 15 00:12:15.676680 kubelet[2645]: E0715 00:12:15.676660 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" containerName="apply-sysctl-overwrites" Jul 15 00:12:15.676680 kubelet[2645]: E0715 00:12:15.676667 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" containerName="cilium-agent" Jul 15 00:12:15.676680 kubelet[2645]: E0715 00:12:15.676674 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" containerName="mount-cgroup" Jul 15 00:12:15.679535 kubelet[2645]: I0715 00:12:15.676705 2645 memory_manager.go:354] "RemoveStaleState removing state" podUID="959108f4-5e12-4dfc-bb30-42860c11dc8e" 
containerName="cilium-operator"
Jul 15 00:12:15.679535 kubelet[2645]: I0715 00:12:15.676713 2645 memory_manager.go:354] "RemoveStaleState removing state" podUID="51af13a5-f0af-4da4-a090-c2431676c9ef" containerName="cilium-agent"
Jul 15 00:12:15.683423 systemd[1]: sshd@26-10.0.0.145:22-10.0.0.1:53988.service: Deactivated successfully.
Jul 15 00:12:15.686737 systemd[1]: session-27.scope: Deactivated successfully.
Jul 15 00:12:15.688188 kubelet[2645]: I0715 00:12:15.688027 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-cilium-run\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688188 kubelet[2645]: I0715 00:12:15.688066 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-hostproc\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688188 kubelet[2645]: I0715 00:12:15.688090 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-xtables-lock\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688188 kubelet[2645]: I0715 00:12:15.688105 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-hubble-tls\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688188 kubelet[2645]: I0715 00:12:15.688120 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-host-proc-sys-kernel\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688188 kubelet[2645]: I0715 00:12:15.688134 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhbq\" (UniqueName: \"kubernetes.io/projected/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-kube-api-access-jxhbq\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688456 kubelet[2645]: I0715 00:12:15.688148 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-cilium-ipsec-secrets\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688456 kubelet[2645]: I0715 00:12:15.688213 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-bpf-maps\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688456 kubelet[2645]: I0715 00:12:15.688232 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-clustermesh-secrets\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688456 kubelet[2645]: I0715 00:12:15.688295 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-host-proc-sys-net\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688456 kubelet[2645]: I0715 00:12:15.688330 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-cni-path\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688456 kubelet[2645]: I0715 00:12:15.688351 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-lib-modules\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688651 kubelet[2645]: I0715 00:12:15.688412 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-cilium-cgroup\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688651 kubelet[2645]: I0715 00:12:15.688450 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-etc-cni-netd\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.688651 kubelet[2645]: I0715 00:12:15.688476 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b60e2a1-e5b6-4448-8179-ef5a272d4c31-cilium-config-path\") pod \"cilium-rdqc6\" (UID: \"8b60e2a1-e5b6-4448-8179-ef5a272d4c31\") " pod="kube-system/cilium-rdqc6"
Jul 15 00:12:15.690271 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit.
Jul 15 00:12:15.698298 systemd[1]: Started sshd@27-10.0.0.145:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990).
Jul 15 00:12:15.701950 systemd-logind[1492]: Removed session 27.
Jul 15 00:12:15.706322 systemd[1]: Created slice kubepods-burstable-pod8b60e2a1_e5b6_4448_8179_ef5a272d4c31.slice - libcontainer container kubepods-burstable-pod8b60e2a1_e5b6_4448_8179_ef5a272d4c31.slice.
Jul 15 00:12:15.735297 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:12:15.736897 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:12:15.741422 systemd-logind[1492]: New session 28 of user core.
Jul 15 00:12:15.757009 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 15 00:12:15.808161 sshd[4503]: Connection closed by 10.0.0.1 port 53990
Jul 15 00:12:15.808526 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Jul 15 00:12:15.820810 systemd[1]: sshd@27-10.0.0.145:22-10.0.0.1:53990.service: Deactivated successfully.
Jul 15 00:12:15.822829 systemd[1]: session-28.scope: Deactivated successfully.
Jul 15 00:12:15.823619 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit.
Jul 15 00:12:15.834215 systemd[1]: Started sshd@28-10.0.0.145:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996).
Jul 15 00:12:15.835338 systemd-logind[1492]: Removed session 28.
Jul 15 00:12:15.869610 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:UuprTZ1GLJ/rCgQaEN05xv5bOUyUZ3I5/8qpcJxSq5s
Jul 15 00:12:15.871085 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 00:12:15.875580 systemd-logind[1492]: New session 29 of user core.
Jul 15 00:12:15.891986 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 15 00:12:16.008506 kubelet[2645]: E0715 00:12:16.008476 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:16.009136 containerd[1516]: time="2025-07-15T00:12:16.008987537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdqc6,Uid:8b60e2a1-e5b6-4448-8179-ef5a272d4c31,Namespace:kube-system,Attempt:0,}"
Jul 15 00:12:16.028166 containerd[1516]: time="2025-07-15T00:12:16.028094807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 00:12:16.028166 containerd[1516]: time="2025-07-15T00:12:16.028135145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 00:12:16.028166 containerd[1516]: time="2025-07-15T00:12:16.028144642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:12:16.030070 containerd[1516]: time="2025-07-15T00:12:16.028205148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 00:12:16.049025 systemd[1]: Started cri-containerd-cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11.scope - libcontainer container cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11.
Jul 15 00:12:16.070806 containerd[1516]: time="2025-07-15T00:12:16.070768514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdqc6,Uid:8b60e2a1-e5b6-4448-8179-ef5a272d4c31,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\""
Jul 15 00:12:16.071482 kubelet[2645]: E0715 00:12:16.071460 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:16.073121 containerd[1516]: time="2025-07-15T00:12:16.073098540Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 00:12:16.085141 containerd[1516]: time="2025-07-15T00:12:16.085107846Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a\""
Jul 15 00:12:16.085514 containerd[1516]: time="2025-07-15T00:12:16.085427458Z" level=info msg="StartContainer for \"180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a\""
Jul 15 00:12:16.111027 systemd[1]: Started cri-containerd-180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a.scope - libcontainer container 180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a.
Jul 15 00:12:16.138496 containerd[1516]: time="2025-07-15T00:12:16.138449544Z" level=info msg="StartContainer for \"180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a\" returns successfully"
Jul 15 00:12:16.148328 systemd[1]: cri-containerd-180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a.scope: Deactivated successfully.
Jul 15 00:12:16.177109 containerd[1516]: time="2025-07-15T00:12:16.177034082Z" level=info msg="shim disconnected" id=180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a namespace=k8s.io
Jul 15 00:12:16.177109 containerd[1516]: time="2025-07-15T00:12:16.177099576Z" level=warning msg="cleaning up after shim disconnected" id=180d07a5bee30a2621819724c56b7ccf45ac4493256a4655d9fc151462d6263a namespace=k8s.io
Jul 15 00:12:16.177109 containerd[1516]: time="2025-07-15T00:12:16.177107683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:16.721426 kubelet[2645]: E0715 00:12:16.721398 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:16.723271 containerd[1516]: time="2025-07-15T00:12:16.723223557Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 00:12:16.737876 containerd[1516]: time="2025-07-15T00:12:16.737809411Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10\""
Jul 15 00:12:16.738490 containerd[1516]: time="2025-07-15T00:12:16.738339264Z" level=info msg="StartContainer for \"863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10\""
Jul 15 00:12:16.778034 systemd[1]: Started cri-containerd-863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10.scope - libcontainer container 863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10.
Jul 15 00:12:16.814543 containerd[1516]: time="2025-07-15T00:12:16.814481775Z" level=info msg="StartContainer for \"863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10\" returns successfully"
Jul 15 00:12:16.817795 systemd[1]: cri-containerd-863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10.scope: Deactivated successfully.
Jul 15 00:12:16.837943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10-rootfs.mount: Deactivated successfully.
Jul 15 00:12:16.842015 containerd[1516]: time="2025-07-15T00:12:16.841954042Z" level=info msg="shim disconnected" id=863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10 namespace=k8s.io
Jul 15 00:12:16.842015 containerd[1516]: time="2025-07-15T00:12:16.842004809Z" level=warning msg="cleaning up after shim disconnected" id=863ad6e9bdd867b8ba306d5bbb4e4caeae53bcdabb4d2531fb104605a6ca9f10 namespace=k8s.io
Jul 15 00:12:16.842015 containerd[1516]: time="2025-07-15T00:12:16.842013856Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:17.724204 kubelet[2645]: E0715 00:12:17.724167 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:17.727226 containerd[1516]: time="2025-07-15T00:12:17.727156121Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 00:12:17.761236 containerd[1516]: time="2025-07-15T00:12:17.761173815Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536\""
Jul 15 00:12:17.761733 containerd[1516]: time="2025-07-15T00:12:17.761695853Z" level=info msg="StartContainer for \"67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536\""
Jul 15 00:12:17.789988 systemd[1]: Started cri-containerd-67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536.scope - libcontainer container 67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536.
Jul 15 00:12:17.820047 containerd[1516]: time="2025-07-15T00:12:17.819995042Z" level=info msg="StartContainer for \"67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536\" returns successfully"
Jul 15 00:12:17.821747 systemd[1]: cri-containerd-67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536.scope: Deactivated successfully.
Jul 15 00:12:17.844784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536-rootfs.mount: Deactivated successfully.
Jul 15 00:12:17.848102 containerd[1516]: time="2025-07-15T00:12:17.848032385Z" level=info msg="shim disconnected" id=67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536 namespace=k8s.io
Jul 15 00:12:17.848212 containerd[1516]: time="2025-07-15T00:12:17.848105164Z" level=warning msg="cleaning up after shim disconnected" id=67ac0320a7b3f719076921117bca2ce4a7a940d06ce73005d64c0b2766feb536 namespace=k8s.io
Jul 15 00:12:17.848212 containerd[1516]: time="2025-07-15T00:12:17.848119161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:18.494371 kubelet[2645]: E0715 00:12:18.494324 2645 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 00:12:18.726620 kubelet[2645]: E0715 00:12:18.726589 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:18.728157 containerd[1516]: time="2025-07-15T00:12:18.728085986Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 00:12:18.844789 containerd[1516]: time="2025-07-15T00:12:18.844737347Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce\""
Jul 15 00:12:18.845254 containerd[1516]: time="2025-07-15T00:12:18.845228195Z" level=info msg="StartContainer for \"4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce\""
Jul 15 00:12:18.868383 systemd[1]: run-containerd-runc-k8s.io-4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce-runc.rrzir4.mount: Deactivated successfully.
Jul 15 00:12:18.878008 systemd[1]: Started cri-containerd-4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce.scope - libcontainer container 4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce.
Jul 15 00:12:18.901565 systemd[1]: cri-containerd-4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce.scope: Deactivated successfully.
Jul 15 00:12:18.903899 containerd[1516]: time="2025-07-15T00:12:18.903833619Z" level=info msg="StartContainer for \"4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce\" returns successfully"
Jul 15 00:12:18.925588 containerd[1516]: time="2025-07-15T00:12:18.925528921Z" level=info msg="shim disconnected" id=4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce namespace=k8s.io
Jul 15 00:12:18.925588 containerd[1516]: time="2025-07-15T00:12:18.925574648Z" level=warning msg="cleaning up after shim disconnected" id=4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce namespace=k8s.io
Jul 15 00:12:18.925588 containerd[1516]: time="2025-07-15T00:12:18.925585089Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 00:12:19.730234 kubelet[2645]: E0715 00:12:19.730199 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:19.731798 containerd[1516]: time="2025-07-15T00:12:19.731759403Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 00:12:19.747480 containerd[1516]: time="2025-07-15T00:12:19.747419722Z" level=info msg="CreateContainer within sandbox \"cb53061cbe8f0f7f8f45899ca8c1df3ece2f1ae047d2966b750959ff25ae1e11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6df04a3646c2f460cb1d5bcfb5de13a2dc6b6dea75006fd6985c04c35af80fa5\""
Jul 15 00:12:19.747954 containerd[1516]: time="2025-07-15T00:12:19.747924927Z" level=info msg="StartContainer for \"6df04a3646c2f460cb1d5bcfb5de13a2dc6b6dea75006fd6985c04c35af80fa5\""
Jul 15 00:12:19.777996 systemd[1]: Started cri-containerd-6df04a3646c2f460cb1d5bcfb5de13a2dc6b6dea75006fd6985c04c35af80fa5.scope - libcontainer container 6df04a3646c2f460cb1d5bcfb5de13a2dc6b6dea75006fd6985c04c35af80fa5.
Jul 15 00:12:19.807234 containerd[1516]: time="2025-07-15T00:12:19.807089842Z" level=info msg="StartContainer for \"6df04a3646c2f460cb1d5bcfb5de13a2dc6b6dea75006fd6985c04c35af80fa5\" returns successfully"
Jul 15 00:12:19.842141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d685f90c578d68f79c8cf103f9cb32a4aafc4340833291a3f79b54df6cfa6ce-rootfs.mount: Deactivated successfully.
Jul 15 00:12:20.216924 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 15 00:12:20.734825 kubelet[2645]: E0715 00:12:20.734778 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:20.908271 kubelet[2645]: I0715 00:12:20.908192 2645 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T00:12:20Z","lastTransitionTime":"2025-07-15T00:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 15 00:12:22.010903 kubelet[2645]: E0715 00:12:22.010846 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:23.312846 systemd-networkd[1435]: lxc_health: Link UP
Jul 15 00:12:23.321923 systemd-networkd[1435]: lxc_health: Gained carrier
Jul 15 00:12:24.011565 kubelet[2645]: E0715 00:12:24.010642 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:24.030089 kubelet[2645]: I0715 00:12:24.029736 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rdqc6" podStartSLOduration=9.029695261 podStartE2EDuration="9.029695261s" podCreationTimestamp="2025-07-15 00:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 00:12:20.751036348 +0000 UTC m=+82.399591998" watchObservedRunningTime="2025-07-15 00:12:24.029695261 +0000 UTC m=+85.678250901"
Jul 15 00:12:24.741939 kubelet[2645]: E0715 00:12:24.741848 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:24.845316 systemd-networkd[1435]: lxc_health: Gained IPv6LL
Jul 15 00:12:25.743753 kubelet[2645]: E0715 00:12:25.743716 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:28.446076 kubelet[2645]: E0715 00:12:28.446027 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 00:12:28.477938 sshd[4517]: Connection closed by 10.0.0.1 port 53996
Jul 15 00:12:28.477005 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
Jul 15 00:12:28.481718 systemd[1]: sshd@28-10.0.0.145:22-10.0.0.1:53996.service: Deactivated successfully.
Jul 15 00:12:28.483625 systemd[1]: session-29.scope: Deactivated successfully.
Jul 15 00:12:28.484405 systemd-logind[1492]: Session 29 logged out. Waiting for processes to exit.
Jul 15 00:12:28.485179 systemd-logind[1492]: Removed session 29.