Jul 6 23:27:07.035205 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:53:45 -00 2025
Jul 6 23:27:07.035242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:27:07.035254 kernel: BIOS-provided physical RAM map:
Jul 6 23:27:07.035261 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:27:07.035268 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 6 23:27:07.035274 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 6 23:27:07.035282 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 6 23:27:07.035310 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 6 23:27:07.035317 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 6 23:27:07.035324 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 6 23:27:07.035331 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 6 23:27:07.035340 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 6 23:27:07.035351 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 6 23:27:07.035358 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 6 23:27:07.035369 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 6 23:27:07.035377 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 6 23:27:07.035386 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jul 6 23:27:07.035393 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jul 6 23:27:07.035400 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jul 6 23:27:07.035407 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jul 6 23:27:07.035414 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 6 23:27:07.035421 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 6 23:27:07.035428 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 6 23:27:07.035435 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:27:07.035442 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 6 23:27:07.035449 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:27:07.035456 kernel: NX (Execute Disable) protection: active
Jul 6 23:27:07.035466 kernel: APIC: Static calls initialized
Jul 6 23:27:07.035473 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jul 6 23:27:07.035480 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jul 6 23:27:07.035487 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jul 6 23:27:07.035494 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jul 6 23:27:07.035501 kernel: extended physical RAM map:
Jul 6 23:27:07.035508 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:27:07.035515 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 6 23:27:07.035522 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 6 23:27:07.035529 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 6 23:27:07.035536 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 6 23:27:07.035543 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 6 23:27:07.035553 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 6 23:27:07.035564 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jul 6 23:27:07.035571 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jul 6 23:27:07.035578 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jul 6 23:27:07.035586 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jul 6 23:27:07.035593 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jul 6 23:27:07.035605 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 6 23:27:07.035613 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 6 23:27:07.035621 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 6 23:27:07.035628 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 6 23:27:07.035635 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 6 23:27:07.035643 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jul 6 23:27:07.035650 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jul 6 23:27:07.035657 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jul 6 23:27:07.035665 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jul 6 23:27:07.035674 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 6 23:27:07.035682 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 6 23:27:07.035689 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 6 23:27:07.035696 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:27:07.035706 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 6 23:27:07.035713 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:27:07.035720 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:27:07.035728 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jul 6 23:27:07.035735 kernel: random: crng init done
Jul 6 23:27:07.035743 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 6 23:27:07.035750 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 6 23:27:07.035760 kernel: secureboot: Secure boot disabled
Jul 6 23:27:07.035770 kernel: SMBIOS 2.8 present.
Jul 6 23:27:07.035777 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 6 23:27:07.035784 kernel: Hypervisor detected: KVM
Jul 6 23:27:07.035792 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:27:07.035799 kernel: kvm-clock: using sched offset of 4408957817 cycles
Jul 6 23:27:07.035807 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:27:07.035815 kernel: tsc: Detected 2794.750 MHz processor
Jul 6 23:27:07.035823 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:27:07.035830 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:27:07.035838 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 6 23:27:07.035848 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:27:07.035856 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:27:07.035863 kernel: Using GB pages for direct mapping
Jul 6 23:27:07.035871 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:27:07.035879 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 6 23:27:07.035886 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:27:07.035894 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:27:07.035902 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:27:07.035909 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 6 23:27:07.035919 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:27:07.035927 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:27:07.035943 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:27:07.035951 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:27:07.035959 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 6 23:27:07.035966 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 6 23:27:07.035974 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 6 23:27:07.035981 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 6 23:27:07.035989 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 6 23:27:07.036002 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 6 23:27:07.036012 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 6 23:27:07.036021 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 6 23:27:07.036031 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 6 23:27:07.036040 kernel: No NUMA configuration found
Jul 6 23:27:07.036049 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 6 23:27:07.036056 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jul 6 23:27:07.036064 kernel: Zone ranges:
Jul 6 23:27:07.036073 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:27:07.036086 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 6 23:27:07.036096 kernel: Normal empty
Jul 6 23:27:07.036109 kernel: Movable zone start for each node
Jul 6 23:27:07.036118 kernel: Early memory node ranges
Jul 6 23:27:07.036128 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:27:07.036137 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 6 23:27:07.036147 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 6 23:27:07.036156 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 6 23:27:07.036166 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 6 23:27:07.036215 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 6 23:27:07.036225 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jul 6 23:27:07.036234 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jul 6 23:27:07.036244 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 6 23:27:07.036253 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:27:07.036263 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:27:07.036283 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 6 23:27:07.036294 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:27:07.036302 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 6 23:27:07.036310 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 6 23:27:07.036318 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 6 23:27:07.036329 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 6 23:27:07.036341 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 6 23:27:07.036352 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:27:07.036361 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:27:07.036369 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:27:07.036377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:27:07.036387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:27:07.036395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:27:07.036403 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:27:07.036411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:27:07.036419 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:27:07.036427 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:27:07.036435 kernel: TSC deadline timer available
Jul 6 23:27:07.036443 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 6 23:27:07.036451 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:27:07.036461 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:27:07.036469 kernel: kvm-guest: setup PV sched yield
Jul 6 23:27:07.036477 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 6 23:27:07.036485 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:27:07.036493 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:27:07.036501 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 6 23:27:07.036509 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 6 23:27:07.036517 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 6 23:27:07.036525 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 6 23:27:07.036535 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:27:07.036543 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:27:07.036552 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:27:07.036561 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:27:07.036569 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:27:07.036580 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:27:07.036588 kernel: Fallback order for Node 0: 0
Jul 6 23:27:07.036595 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jul 6 23:27:07.036606 kernel: Policy zone: DMA32
Jul 6 23:27:07.036614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:27:07.036622 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 177824K reserved, 0K cma-reserved)
Jul 6 23:27:07.036630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:27:07.036638 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 6 23:27:07.036646 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:27:07.036654 kernel: Dynamic Preempt: voluntary
Jul 6 23:27:07.036662 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:27:07.036678 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:27:07.036689 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:27:07.036697 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:27:07.036705 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:27:07.036713 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:27:07.036721 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:27:07.036729 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:27:07.036737 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 6 23:27:07.036745 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:27:07.036752 kernel: Console: colour dummy device 80x25
Jul 6 23:27:07.036760 kernel: printk: console [ttyS0] enabled
Jul 6 23:27:07.036770 kernel: ACPI: Core revision 20230628
Jul 6 23:27:07.036779 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:27:07.036786 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:27:07.036794 kernel: x2apic enabled
Jul 6 23:27:07.036802 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:27:07.036813 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:27:07.036821 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:27:07.036829 kernel: kvm-guest: setup PV IPIs
Jul 6 23:27:07.036837 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:27:07.036847 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:27:07.036855 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 6 23:27:07.036863 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:27:07.036871 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:27:07.036879 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:27:07.036887 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:27:07.036895 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:27:07.036903 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:27:07.036911 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:27:07.036921 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:27:07.036929 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:27:07.036946 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:27:07.036954 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:27:07.036962 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:27:07.036970 kernel: x86/bugs: return thunk changed
Jul 6 23:27:07.036981 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:27:07.036989 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:27:07.036999 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:27:07.037007 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:27:07.037015 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:27:07.037023 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:27:07.037031 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:27:07.037039 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:27:07.037047 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:27:07.037055 kernel: landlock: Up and running.
Jul 6 23:27:07.037063 kernel: SELinux: Initializing.
Jul 6 23:27:07.037073 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:27:07.037081 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:27:07.037089 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:27:07.037097 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:27:07.037105 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:27:07.037113 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:27:07.037121 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:27:07.037129 kernel: ... version: 0
Jul 6 23:27:07.037137 kernel: ... bit width: 48
Jul 6 23:27:07.037147 kernel: ... generic registers: 6
Jul 6 23:27:07.037155 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:27:07.037163 kernel: ... max period: 00007fffffffffff
Jul 6 23:27:07.037170 kernel: ... fixed-purpose events: 0
Jul 6 23:27:07.037204 kernel: ... event mask: 000000000000003f
Jul 6 23:27:07.037212 kernel: signal: max sigframe size: 1776
Jul 6 23:27:07.037220 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:27:07.037228 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:27:07.037236 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:27:07.037248 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:27:07.037256 kernel: .... node #0, CPUs: #1 #2 #3
Jul 6 23:27:07.037264 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:27:07.037291 kernel: smpboot: Max logical packages: 1
Jul 6 23:27:07.037299 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 6 23:27:07.037307 kernel: devtmpfs: initialized
Jul 6 23:27:07.037315 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:27:07.037323 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 6 23:27:07.037331 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 6 23:27:07.037341 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 6 23:27:07.037354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 6 23:27:07.037362 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jul 6 23:27:07.037370 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 6 23:27:07.037378 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:27:07.037386 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:27:07.037394 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:27:07.037402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:27:07.037410 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:27:07.037420 kernel: audit: type=2000 audit(1751844426.749:1): state=initialized audit_enabled=0 res=1
Jul 6 23:27:07.037428 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:27:07.037436 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:27:07.037444 kernel: cpuidle: using governor menu
Jul 6 23:27:07.037452 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:27:07.037460 kernel: dca service started, version 1.12.1
Jul 6 23:27:07.037468 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 6 23:27:07.037476 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:27:07.037484 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:27:07.037494 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:27:07.037502 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:27:07.037510 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:27:07.037518 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:27:07.037526 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:27:07.037534 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:27:07.037541 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:27:07.037549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:27:07.037557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:27:07.037568 kernel: ACPI: Interpreter enabled
Jul 6 23:27:07.037575 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:27:07.037583 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:27:07.037591 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:27:07.037599 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:27:07.037607 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:27:07.037615 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:27:07.037864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:27:07.038034 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:27:07.038263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:27:07.038276 kernel: PCI host bridge to bus 0000:00
Jul 6 23:27:07.038432 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:27:07.038555 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:27:07.038675 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:27:07.038807 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 6 23:27:07.038944 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 6 23:27:07.039067 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 6 23:27:07.039203 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:27:07.039370 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:27:07.039515 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:27:07.039647 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 6 23:27:07.039782 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 6 23:27:07.039912 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 6 23:27:07.040054 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 6 23:27:07.040253 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:27:07.040422 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:27:07.040557 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 6 23:27:07.040690 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 6 23:27:07.040856 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 6 23:27:07.041022 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:27:07.041162 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 6 23:27:07.041316 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 6 23:27:07.041457 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 6 23:27:07.041609 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:27:07.041743 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 6 23:27:07.041880 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 6 23:27:07.042027 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 6 23:27:07.042159 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 6 23:27:07.042337 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:27:07.042470 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:27:07.042622 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:27:07.042762 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 6 23:27:07.042892 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 6 23:27:07.043098 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:27:07.043294 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 6 23:27:07.043307 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:27:07.043315 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:27:07.043323 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:27:07.043331 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:27:07.043343 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:27:07.043351 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:27:07.043359 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:27:07.043367 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:27:07.043375 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:27:07.043383 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:27:07.043390 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:27:07.043398 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:27:07.043406 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:27:07.043417 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:27:07.043424 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:27:07.043432 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:27:07.043440 kernel: iommu: Default domain type: Translated
Jul 6 23:27:07.043448 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:27:07.043456 kernel: efivars: Registered efivars operations
Jul 6 23:27:07.043464 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:27:07.043471 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:27:07.043479 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 6 23:27:07.043489 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 6 23:27:07.043497 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jul 6 23:27:07.043505 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jul 6 23:27:07.043513 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 6 23:27:07.043520 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 6 23:27:07.043528 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jul 6 23:27:07.043536 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 6 23:27:07.043669 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:27:07.043799 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:27:07.043931 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:27:07.043954 kernel: vgaarb: loaded
Jul 6 23:27:07.043962 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:27:07.043970 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:27:07.043978 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:27:07.043987 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:27:07.043995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:27:07.044003 kernel: pnp: PnP ACPI init
Jul 6 23:27:07.044234 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 6 23:27:07.044250 kernel: pnp: PnP ACPI: found 6 devices
Jul 6 23:27:07.044258 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:27:07.044266 kernel: NET: Registered PF_INET protocol family
Jul 6 23:27:07.044274 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:27:07.044303 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:27:07.044314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:27:07.044322 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:27:07.044333 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:27:07.044341 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:27:07.044349 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:27:07.044357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:27:07.044365 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:27:07.044374 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:27:07.044512 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 6 23:27:07.044644 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 6 23:27:07.044769 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:27:07.044892 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:27:07.045023 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:27:07.045143 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 6 23:27:07.045335 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 6 23:27:07.045459 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 6 23:27:07.045470 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:27:07.045478 kernel: Initialise system trusted keyrings
Jul 6 23:27:07.045486 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:27:07.045500 kernel: Key type asymmetric registered
Jul 6 23:27:07.045508 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:27:07.045516 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:27:07.045524 kernel: io scheduler mq-deadline registered
Jul 6 23:27:07.045532 kernel: io scheduler kyber registered
Jul 6 23:27:07.045540 kernel: io scheduler bfq registered
Jul 6 23:27:07.045549 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:27:07.045557 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:27:07.045566 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:27:07.045577 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 6 23:27:07.045585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:27:07.045596 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:27:07.045604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:27:07.045613 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:27:07.045621 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:27:07.045770 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:27:07.045783 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:27:07.045903 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:27:07.046035 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:27:06 UTC (1751844426)
Jul 6 23:27:07.046163 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 6 23:27:07.046192 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:27:07.046230 kernel: efifb: probing for efifb
Jul 6 23:27:07.046244 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 6 23:27:07.046252 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 6 23:27:07.046260 kernel: efifb: scrolling: redraw
Jul 6 23:27:07.046268 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 6 23:27:07.046277 kernel: Console: switching to colour frame buffer device 160x50
Jul 6 23:27:07.046285 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:27:07.046293 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:27:07.046302 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:27:07.046310 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:27:07.046321 kernel: Segment Routing with IPv6
Jul 6 23:27:07.046329 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:27:07.046337 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:27:07.046345 kernel: Key type dns_resolver registered
Jul 6 23:27:07.046353 kernel: IPI shorthand broadcast: enabled
Jul 6 23:27:07.046362 kernel: sched_clock: Marking stable (1340003180, 161673598)->(1536613309, -34936531)
Jul 6 23:27:07.046370 kernel: registered taskstats version 1
Jul 6 23:27:07.046378 kernel: Loading compiled-in X.509 certificates
Jul 6 23:27:07.046386 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: f74b958d282931d4f0d8d911dd18abd0ec707734'
Jul 6 23:27:07.046394 kernel: Key type .fscrypt registered
Jul 6 23:27:07.046405 kernel: Key type fscrypt-provisioning registered
Jul 6 23:27:07.046413 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:27:07.046421 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:27:07.046429 kernel: ima: No architecture policies found
Jul 6 23:27:07.046438 kernel: clk: Disabling unused clocks
Jul 6 23:27:07.046446 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 6 23:27:07.046454 kernel: Write protecting the kernel read-only data: 38912k
Jul 6 23:27:07.046462 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 6 23:27:07.046473 kernel: Run /init as init process
Jul 6 23:27:07.046481 kernel: with arguments:
Jul 6 23:27:07.046489 kernel: /init
Jul 6 23:27:07.046497 kernel: with environment:
Jul 6 23:27:07.046505 kernel: HOME=/
Jul 6 23:27:07.046513 kernel: TERM=linux
Jul 6 23:27:07.046521 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:27:07.046534 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:27:07.046547 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:27:07.046559 systemd[1]: Detected virtualization kvm.
Jul 6 23:27:07.046567 systemd[1]: Detected architecture x86-64.
Jul 6 23:27:07.046576 systemd[1]: Running in initrd.
Jul 6 23:27:07.046584 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:27:07.046594 systemd[1]: Hostname set to <localhost>.
Jul 6 23:27:07.046602 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:27:07.046611 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:27:07.046622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:07.046631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:27:07.046641 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:27:07.046650 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:27:07.046659 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:27:07.046668 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:27:07.046679 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:27:07.046690 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:27:07.046699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:07.046708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:27:07.046716 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:27:07.046725 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:27:07.046734 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:27:07.046742 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:27:07.046751 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:27:07.046760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:27:07.046771 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:27:07.046780 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:27:07.046788 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:27:07.046797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:27:07.046806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:27:07.046815 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:27:07.046823 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:27:07.046832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:27:07.046843 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:27:07.046852 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:27:07.046861 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:27:07.046870 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:27:07.046878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:07.046887 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:27:07.046896 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:07.046907 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:27:07.046952 systemd-journald[193]: Collecting audit messages is disabled.
Jul 6 23:27:07.046976 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:27:07.046986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:07.046996 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:27:07.047005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:27:07.047014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:27:07.047026 systemd-journald[193]: Journal started
Jul 6 23:27:07.047062 systemd-journald[193]: Runtime Journal (/run/log/journal/5ccbc6736432420eade603a4ca725333) is 6M, max 48.2M, 42.2M free.
Jul 6 23:27:07.029644 systemd-modules-load[194]: Inserted module 'overlay'
Jul 6 23:27:07.049209 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:27:07.053255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:27:07.059312 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:27:07.061813 kernel: Bridge firewalling registered
Jul 6 23:27:07.061157 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 6 23:27:07.062287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:27:07.063231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:27:07.065799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:27:07.067857 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:27:07.091813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:27:07.093061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:27:07.104166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:27:07.109370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:27:07.113217 dracut-cmdline[225]: dracut-dracut-053
Jul 6 23:27:07.116060 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:27:07.149165 systemd-resolved[235]: Positive Trust Anchors:
Jul 6 23:27:07.149205 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:27:07.149242 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:27:07.152169 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jul 6 23:27:07.153716 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:27:07.158746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:07.204454 kernel: SCSI subsystem initialized
Jul 6 23:27:07.214222 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:27:07.227212 kernel: iscsi: registered transport (tcp)
Jul 6 23:27:07.249207 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:27:07.249233 kernel: QLogic iSCSI HBA Driver
Jul 6 23:27:07.303862 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:27:07.313411 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:27:07.338245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:27:07.338297 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:27:07.339253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:27:07.381207 kernel: raid6: avx2x4 gen() 29270 MB/s
Jul 6 23:27:07.398211 kernel: raid6: avx2x2 gen() 30480 MB/s
Jul 6 23:27:07.415278 kernel: raid6: avx2x1 gen() 25149 MB/s
Jul 6 23:27:07.415307 kernel: raid6: using algorithm avx2x2 gen() 30480 MB/s
Jul 6 23:27:07.433238 kernel: raid6: .... xor() 19280 MB/s, rmw enabled
Jul 6 23:27:07.433262 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:27:07.454212 kernel: xor: automatically using best checksumming function avx
Jul 6 23:27:07.610216 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:27:07.622566 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:27:07.638328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:07.653896 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jul 6 23:27:07.659767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:07.667377 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:27:07.680964 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jul 6 23:27:07.712144 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:27:07.724297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:27:07.796064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:07.802351 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:27:07.820082 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:27:07.823108 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:27:07.825625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:07.827873 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:27:07.831197 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 6 23:27:07.834194 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:27:07.835365 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:27:07.839744 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:27:07.839773 kernel: GPT:9289727 != 19775487
Jul 6 23:27:07.839794 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:27:07.839812 kernel: GPT:9289727 != 19775487
Jul 6 23:27:07.840930 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:27:07.840959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:27:07.848750 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:27:07.865198 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:27:07.878195 kernel: libata version 3.00 loaded.
Jul 6 23:27:07.884234 kernel: ahci 0000:00:1f.2: version 3.0
Jul 6 23:27:07.885227 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 6 23:27:07.894261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:27:07.898135 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:27:07.898155 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:27:07.894512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:27:07.898298 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:27:07.906357 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 6 23:27:07.906559 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 6 23:27:07.901665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:27:07.901759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:07.906538 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:07.914231 kernel: BTRFS: device fsid 25bdfe43-d649-4808-8940-e1722efc7a2e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (466)
Jul 6 23:27:07.916329 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (467)
Jul 6 23:27:07.916379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:27:07.922272 kernel: scsi host0: ahci
Jul 6 23:27:07.922516 kernel: scsi host1: ahci
Jul 6 23:27:07.924201 kernel: scsi host2: ahci
Jul 6 23:27:07.928956 kernel: scsi host3: ahci
Jul 6 23:27:07.929214 kernel: scsi host4: ahci
Jul 6 23:27:07.929431 kernel: scsi host5: ahci
Jul 6 23:27:07.929635 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 6 23:27:07.930725 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 6 23:27:07.930754 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 6 23:27:07.930770 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 6 23:27:07.933317 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 6 23:27:07.933344 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 6 23:27:07.938197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:07.954768 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:27:07.963357 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:27:07.972093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:27:07.979089 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:27:07.979163 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:27:07.994331 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:27:07.996403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:27:08.005133 disk-uuid[558]: Primary Header is updated.
Jul 6 23:27:08.005133 disk-uuid[558]: Secondary Entries is updated.
Jul 6 23:27:08.005133 disk-uuid[558]: Secondary Header is updated.
Jul 6 23:27:08.009211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:27:08.014211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:27:08.024473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:27:08.245407 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 6 23:27:08.245505 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 6 23:27:08.245519 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 6 23:27:08.247205 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 6 23:27:08.247236 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 6 23:27:08.248206 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 6 23:27:08.249206 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 6 23:27:08.249228 kernel: ata3.00: applying bridge limits
Jul 6 23:27:08.250205 kernel: ata3.00: configured for UDMA/100
Jul 6 23:27:08.252210 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 6 23:27:08.301220 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 6 23:27:08.301556 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:27:08.314397 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 6 23:27:09.088214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:27:09.088321 disk-uuid[559]: The operation has completed successfully.
Jul 6 23:27:09.118783 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:27:09.118955 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:27:09.170318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:27:09.184212 sh[594]: Success
Jul 6 23:27:09.195207 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 6 23:27:09.232125 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:27:09.244713 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:27:09.247101 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:27:09.259491 kernel: BTRFS info (device dm-0): first mount of filesystem 25bdfe43-d649-4808-8940-e1722efc7a2e
Jul 6 23:27:09.259521 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:27:09.259533 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:27:09.260496 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:27:09.261223 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:27:09.265691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:27:09.266337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:27:09.275331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:27:09.276989 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:27:09.295605 kernel: BTRFS info (device vda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:27:09.295663 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:27:09.295675 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:27:09.298200 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:27:09.302209 kernel: BTRFS info (device vda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:27:09.387808 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:27:09.411410 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:27:09.432751 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:27:09.438993 systemd-networkd[770]: lo: Link UP
Jul 6 23:27:09.439003 systemd-networkd[770]: lo: Gained carrier
Jul 6 23:27:09.440366 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:27:09.441026 systemd-networkd[770]: Enumeration completed
Jul 6 23:27:09.441470 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:09.441475 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:27:09.443102 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:27:09.443143 systemd-networkd[770]: eth0: Link UP
Jul 6 23:27:09.443149 systemd-networkd[770]: eth0: Gained carrier
Jul 6 23:27:09.443158 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:09.447051 systemd[1]: Reached target network.target - Network.
Jul 6 23:27:09.534339 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:27:09.756977 ignition[775]: Ignition 2.20.0
Jul 6 23:27:09.756993 ignition[775]: Stage: fetch-offline
Jul 6 23:27:09.757048 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.757060 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:27:09.757235 ignition[775]: parsed url from cmdline: ""
Jul 6 23:27:09.757241 ignition[775]: no config URL provided
Jul 6 23:27:09.757248 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:27:09.757261 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:27:09.757297 ignition[775]: op(1): [started] loading QEMU firmware config module
Jul 6 23:27:09.757302 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:27:09.771307 ignition[775]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:27:09.815123 ignition[775]: parsing config with SHA512: ac757b0c3e227768d52065eb959108cfb50db58d6ad4c653cd5cef1f073102faf912c5ed790a55fa32a7622b8c30fd4c6de56b587608a86898f3ddf40cb015f5
Jul 6 23:27:09.821420 unknown[775]: fetched base config from "system"
Jul 6 23:27:09.821433 unknown[775]: fetched user config from "qemu"
Jul 6 23:27:09.822066 ignition[775]: fetch-offline: fetch-offline passed
Jul 6 23:27:09.822195 ignition[775]: Ignition finished successfully
Jul 6 23:27:09.826059 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:27:09.829598 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:27:09.840351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:27:09.862793 ignition[786]: Ignition 2.20.0
Jul 6 23:27:09.862806 ignition[786]: Stage: kargs
Jul 6 23:27:09.862996 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.863008 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:27:09.863898 ignition[786]: kargs: kargs passed
Jul 6 23:27:09.863952 ignition[786]: Ignition finished successfully
Jul 6 23:27:09.870967 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:27:09.894633 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:27:09.923147 ignition[794]: Ignition 2.20.0
Jul 6 23:27:09.923161 ignition[794]: Stage: disks
Jul 6 23:27:09.923363 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:09.923376 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:27:09.924469 ignition[794]: disks: disks passed
Jul 6 23:27:09.924539 ignition[794]: Ignition finished successfully
Jul 6 23:27:09.931478 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:27:09.931771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:27:09.935556 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:27:09.935791 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:27:09.936192 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:27:09.942420 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:27:09.953370 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:27:09.970167 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:27:09.978036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:27:09.987317 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:27:10.091214 kernel: EXT4-fs (vda9): mounted filesystem daab0c95-3783-44c0-bef8-9d61a5c53c14 r/w with ordered data mode. Quota mode: none.
Jul 6 23:27:10.091962 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:27:10.092912 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:27:10.111401 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:27:10.112960 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:27:10.114686 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:27:10.114741 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:27:10.114775 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:27:10.124879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:27:10.127578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:27:10.134202 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (812)
Jul 6 23:27:10.136891 kernel: BTRFS info (device vda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:27:10.136918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:27:10.136933 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:27:10.140199 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:27:10.142087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:27:10.174786 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:27:10.180695 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:27:10.186384 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:27:10.191812 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:27:10.293639 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:27:10.302307 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:27:10.304571 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:27:10.311963 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:27:10.313592 kernel: BTRFS info (device vda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:27:10.342226 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:27:10.345086 ignition[925]: INFO : Ignition 2.20.0
Jul 6 23:27:10.345086 ignition[925]: INFO : Stage: mount
Jul 6 23:27:10.347214 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.347214 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:27:10.347214 ignition[925]: INFO : mount: mount passed
Jul 6 23:27:10.347214 ignition[925]: INFO : Ignition finished successfully
Jul 6 23:27:10.353586 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:27:10.366320 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:27:10.375877 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:27:10.392208 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (937)
Jul 6 23:27:10.394247 kernel: BTRFS info (device vda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:27:10.394271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:27:10.394294 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:27:10.397227 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:27:10.399536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:27:10.430832 ignition[954]: INFO : Ignition 2.20.0
Jul 6 23:27:10.430832 ignition[954]: INFO : Stage: files
Jul 6 23:27:10.432985 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:10.432985 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:27:10.432985 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:27:10.437279 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:27:10.437279 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:27:10.443080 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:27:10.444829 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:27:10.446441 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:27:10.445440 unknown[954]: wrote ssh authorized keys file for user: core
Jul 6 23:27:10.449169 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 6 23:27:10.449169 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 6 23:27:10.488406 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:27:10.622005 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 6 23:27:10.622005 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:27:10.626665 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 6 23:27:10.638542 systemd-networkd[770]: eth0: Gained IPv6LL
Jul 6 23:27:10.977031 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:27:11.270091 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:27:11.270091 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:27:11.273770 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 6 23:27:11.764681 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:27:12.855350 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:27:12.855350 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:27:12.859125 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 6 23:27:12.861160 ignition[954]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:27:12.927538 ignition[954]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:27:12.939285 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:27:12.956826 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:27:12.956826 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:27:12.956826 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:27:12.956826 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:27:12.956826 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:27:12.956826 ignition[954]: INFO : files: files passed
Jul 6 23:27:12.956826 ignition[954]: INFO : Ignition finished successfully
Jul 6 23:27:13.001832 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:27:13.012376 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:27:13.014710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:27:13.018508 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:27:13.018658 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:27:13.034427 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:27:13.044695 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:13.044695 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:13.048027 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:27:13.048262 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:27:13.052123 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:27:13.067587 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:27:13.098414 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:27:13.098766 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:27:13.102074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:27:13.103601 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:27:13.103750 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:27:13.117516 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:27:13.133496 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:27:13.150460 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:27:13.165138 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:13.165372 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:13.169006 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:27:13.171666 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:27:13.171891 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:27:13.174599 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:27:13.177232 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:27:13.180036 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:27:13.182265 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:27:13.183250 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:27:13.183702 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:27:13.184079 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:27:13.184594 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:27:13.184985 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:27:13.186234 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:27:13.186700 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:27:13.186892 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:27:13.187750 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:27:13.188140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:13.188402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:27:13.188522 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:13.188907 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:27:13.189059 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:27:13.189910 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:27:13.251652 ignition[1009]: INFO : Ignition 2.20.0
Jul 6 23:27:13.251652 ignition[1009]: INFO : Stage: umount
Jul 6 23:27:13.251652 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:27:13.251652 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:27:13.251652 ignition[1009]: INFO : umount: umount passed
Jul 6 23:27:13.251652 ignition[1009]: INFO : Ignition finished successfully
Jul 6 23:27:13.190066 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:27:13.190539 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:27:13.190956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:27:13.194298 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:27:13.194753 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:27:13.195030 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:27:13.195536 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:27:13.195643 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:27:13.196052 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:27:13.196142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:27:13.196554 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:27:13.196749 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:27:13.197103 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:27:13.197234 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:27:13.217473 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:27:13.220061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:27:13.221013 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:27:13.221150 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:13.223856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:27:13.223985 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:27:13.278884 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:27:13.280032 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:27:13.285609 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:27:13.302619 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:27:13.305945 systemd[1]: Stopped target network.target - Network.
Jul 6 23:27:13.307985 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:27:13.308983 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:27:13.310941 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:27:13.311825 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:27:13.313738 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:27:13.314645 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:27:13.316588 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:27:13.316647 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:27:13.320158 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:27:13.322372 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:27:13.329805 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:27:13.329984 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:27:13.334916 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:27:13.335361 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:27:13.335420 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:27:13.339826 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:27:13.342064 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:27:13.342230 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:27:13.346392 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:27:13.346625 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:27:13.346681 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:27:13.357425 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:27:13.359417 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:27:13.359537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:27:13.360699 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:27:13.360774 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:27:13.365215 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:27:13.365290 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:27:13.366316 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:13.367957 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:27:13.376116 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:27:13.377157 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:27:13.394266 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:27:13.395374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:13.398133 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:27:13.399091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:27:13.401280 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:27:13.402246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:27:13.404356 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:27:13.405290 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:27:13.407354 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:27:13.408515 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:27:13.410590 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:27:13.411571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:27:13.420336 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:27:13.421414 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:27:13.421473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:27:13.423945 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:27:13.423998 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:27:13.426337 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:27:13.426390 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:13.430053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:27:13.430124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:27:13.437491 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:27:13.437564 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:27:13.437932 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:27:13.438038 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:27:13.788497 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:27:13.788637 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:27:13.791122 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:27:13.791986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:27:13.792048 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:27:13.811500 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:27:13.819399 systemd[1]: Switching root.
Jul 6 23:27:13.850012 systemd-journald[193]: Journal stopped
Jul 6 23:27:15.339287 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:27:15.339362 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:27:15.339376 kernel: SELinux: policy capability open_perms=1
Jul 6 23:27:15.339393 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:27:15.339405 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:27:15.339417 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:27:15.339434 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:27:15.339446 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:27:15.339458 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:27:15.339473 kernel: audit: type=1403 audit(1751844434.431:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:27:15.339486 systemd[1]: Successfully loaded SELinux policy in 46.918ms.
Jul 6 23:27:15.339507 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.392ms.
Jul 6 23:27:15.339522 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:27:15.339535 systemd[1]: Detected virtualization kvm.
Jul 6 23:27:15.339548 systemd[1]: Detected architecture x86-64.
Jul 6 23:27:15.339560 systemd[1]: Detected first boot.
Jul 6 23:27:15.339573 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:27:15.339586 zram_generator::config[1055]: No configuration found.
Jul 6 23:27:15.339602 kernel: Guest personality initialized and is inactive
Jul 6 23:27:15.339614 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 6 23:27:15.339626 kernel: Initialized host personality
Jul 6 23:27:15.339638 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:27:15.339825 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:27:15.339839 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:27:15.339852 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:27:15.339865 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:27:15.339881 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:27:15.339894 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:27:15.339907 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:27:15.339920 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:27:15.339939 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:27:15.339952 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:27:15.339970 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:27:15.339984 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:27:15.339997 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:27:15.340013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:27:15.340026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:27:15.340039 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:27:15.340051 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:27:15.340065 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:27:15.340078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:27:15.340091 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:27:15.340106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:27:15.340128 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:27:15.340145 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:27:15.340158 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:27:15.340186 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:27:15.340199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:27:15.340212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:27:15.340225 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:27:15.340237 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:27:15.340250 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:27:15.340266 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:27:15.340279 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:27:15.340291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:27:15.340304 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:27:15.340316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:27:15.340329 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:27:15.340342 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:27:15.340355 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:27:15.340367 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:27:15.340383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:27:15.340396 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:27:15.340409 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:27:15.340429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:27:15.340444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:27:15.340456 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:27:15.340469 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:27:15.340482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:15.340498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:27:15.340511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:27:15.340523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:27:15.340536 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:27:15.340549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:27:15.340562 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:27:15.340575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:27:15.340588 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:27:15.340604 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:27:15.340617 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:27:15.340629 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:27:15.340642 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:27:15.340655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:15.340670 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:27:15.340685 kernel: fuse: init (API version 7.39)
Jul 6 23:27:15.340700 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:27:15.340739 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:27:15.340759 kernel: loop: module loaded
Jul 6 23:27:15.340798 systemd-journald[1119]: Collecting audit messages is disabled.
Jul 6 23:27:15.340821 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:27:15.340834 systemd-journald[1119]: Journal started
Jul 6 23:27:15.340860 systemd-journald[1119]: Runtime Journal (/run/log/journal/5ccbc6736432420eade603a4ca725333) is 6M, max 48.2M, 42.2M free.
Jul 6 23:27:15.079644 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:27:15.093688 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:27:15.094215 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:27:15.343202 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:27:15.371024 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:27:15.372684 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:27:15.372727 systemd[1]: Stopped verity-setup.service.
Jul 6 23:27:15.376202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:27:15.387352 kernel: ACPI: bus type drm_connector registered
Jul 6 23:27:15.391193 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:27:15.393593 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:27:15.394829 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:27:15.396052 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:27:15.397190 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:27:15.398537 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:27:15.399774 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:27:15.401310 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:27:15.402939 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:27:15.403273 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:27:15.404861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:27:15.405217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:27:15.406737 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:27:15.406990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:27:15.409019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:27:15.409272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:27:15.411119 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:27:15.411361 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:27:15.412928 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:27:15.413171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:27:15.414716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:27:15.416226 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:27:15.418310 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:27:15.419924 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:27:15.437515 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:27:15.446304 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:27:15.448768 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:27:15.449876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:27:15.449920 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:27:15.451962 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:27:15.454363 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:27:15.457378 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:27:15.458577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:15.460936 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:27:15.465302 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:27:15.467319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:27:15.471428 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:27:15.473019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:27:15.478298 systemd-journald[1119]: Time spent on flushing to /var/log/journal/5ccbc6736432420eade603a4ca725333 is 17.535ms for 1052 entries.
Jul 6 23:27:15.478298 systemd-journald[1119]: System Journal (/var/log/journal/5ccbc6736432420eade603a4ca725333) is 8M, max 195.6M, 187.6M free.
Jul 6 23:27:16.004923 systemd-journald[1119]: Received client request to flush runtime journal.
Jul 6 23:27:16.005097 kernel: loop0: detected capacity change from 0 to 224512
Jul 6 23:27:16.005138 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:27:16.005163 kernel: loop1: detected capacity change from 0 to 147912
Jul 6 23:27:16.005216 kernel: loop2: detected capacity change from 0 to 138176
Jul 6 23:27:16.005330 kernel: loop3: detected capacity change from 0 to 224512
Jul 6 23:27:16.005425 kernel: loop4: detected capacity change from 0 to 147912
Jul 6 23:27:15.478504 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:27:15.485663 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:27:15.491241 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:27:15.495352 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:27:15.496790 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:27:15.560928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:27:15.562833 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:27:15.597681 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:27:15.599479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:27:15.609888 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 6 23:27:15.758074 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jul 6 23:27:15.758087 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jul 6 23:27:15.764896 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:27:15.888384 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:27:15.967829 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:27:15.979392 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:27:16.011389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:27:16.017141 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:27:16.025516 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:27:16.030419 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:27:16.036736 kernel: loop5: detected capacity change from 0 to 138176
Jul 6 23:27:16.065563 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:27:16.066529 (sd-merge)[1182]: Merged extensions into '/usr'.
Jul 6 23:27:16.068354 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:27:16.082384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:27:16.087523 systemd[1]: Reload requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:27:16.087540 systemd[1]: Reloading...
Jul 6 23:27:16.162635 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jul 6 23:27:16.162658 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jul 6 23:27:16.209212 zram_generator::config[1228]: No configuration found.
Jul 6 23:27:16.390878 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:27:16.439272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:27:16.508798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:27:16.508984 systemd[1]: Reloading finished in 420 ms.
Jul 6 23:27:16.534269 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:27:16.536395 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:27:16.538413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:27:16.566579 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:27:16.568893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:27:16.597010 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:27:16.597029 systemd[1]: Reloading...
Jul 6 23:27:16.699217 zram_generator::config[1305]: No configuration found.
Jul 6 23:27:16.701026 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:27:16.701278 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:27:16.702312 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:27:16.702682 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 6 23:27:16.702793 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 6 23:27:16.707901 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:27:16.708554 systemd-tmpfiles[1270]: Skipping /boot
Jul 6 23:27:16.731197 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:27:16.731217 systemd-tmpfiles[1270]: Skipping /boot
Jul 6 23:27:16.829714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:27:16.903404 systemd[1]: Reloading finished in 305 ms.
Jul 6 23:27:16.919305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:27:16.938752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:27:16.949080 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:27:16.952751 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:27:16.955954 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:27:16.960943 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:27:16.965891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:27:16.973152 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:27:16.979061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:27:16.979716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:16.981861 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:27:16.985621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:27:16.992273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:27:16.993561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:16.993690 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:16.996123 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:27:16.997457 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:27:16.998722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:27:16.998966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:27:17.000819 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:27:17.002998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:27:17.004276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:27:17.006531 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:27:17.007118 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:27:17.018391 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Jul 6 23:27:17.025680 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:27:17.035945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:27:17.036725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:27:17.045211 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:27:17.046517 augenrules[1373]: No rules
Jul 6 23:27:17.049047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:27:17.054168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:27:17.057727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:27:17.059258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:27:17.059399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:27:17.061875 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:27:17.063228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:27:17.065243 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:27:17.065539 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:27:17.069793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:27:17.070106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:27:17.072787 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:27:17.073081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:27:17.075108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:27:17.075398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:27:17.082647 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:27:17.086038 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:27:17.087486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:27:17.087776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:27:17.097495 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:27:17.102146 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:27:17.112549 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:27:17.114010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:27:17.114076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:27:17.120250 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:27:17.126248 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:27:17.132511 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:27:17.167268 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:27:17.252842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386)
Jul 6 23:27:17.275916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:27:17.286298 systemd-resolved[1341]: Positive Trust Anchors:
Jul 6 23:27:17.286320 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:27:17.286361 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:27:17.290571 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:27:17.294994 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Jul 6 23:27:17.300054 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:27:17.301703 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:27:17.317214 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 6 23:27:17.318948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:27:17.320953 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:27:17.324235 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:27:17.326249 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:27:17.329675 systemd-networkd[1404]: lo: Link UP
Jul 6 23:27:17.329897 systemd-networkd[1404]: lo: Gained carrier
Jul 6 23:27:17.333052 systemd-networkd[1404]: Enumeration completed
Jul 6 23:27:17.333212 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:27:17.333629 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:17.333635 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:27:17.334815 systemd-networkd[1404]: eth0: Link UP
Jul 6 23:27:17.334822 systemd-networkd[1404]: eth0: Gained carrier
Jul 6 23:27:17.334840 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:27:17.335821 systemd[1]: Reached target network.target - Network.
Jul 6 23:27:17.339471 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 6 23:27:17.344509 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:27:17.346237 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:27:17.349801 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jul 6 23:27:18.721027 systemd-resolved[1341]: Clock change detected. Flushing caches.
Jul 6 23:27:18.723526 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:27:18.723602 systemd-timesyncd[1407]: Initial clock synchronization to Sun 2025-07-06 23:27:18.720979 UTC.
Jul 6 23:27:18.728748 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:27:18.737738 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 6 23:27:18.738069 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 6 23:27:18.740976 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 6 23:27:18.741261 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 6 23:27:18.993447 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:27:19.059808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:19.062657 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:27:19.108723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:27:19.109312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:19.115214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:27:19.116067 kernel: kvm_amd: TSC scaling supported Jul 6 23:27:19.116146 kernel: kvm_amd: Nested Virtualization enabled Jul 6 23:27:19.116167 kernel: kvm_amd: Nested Paging enabled Jul 6 23:27:19.116184 kernel: kvm_amd: LBR virtualization supported Jul 6 23:27:19.117681 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 6 23:27:19.117720 kernel: kvm_amd: Virtual GIF supported Jul 6 23:27:19.138656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:27:19.144521 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:27:19.183551 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:27:19.191798 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:27:19.211115 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:27:19.214915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:27:19.262781 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:27:19.264593 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:27:19.265787 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:27:19.267181 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:27:19.268621 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:27:19.270645 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:27:19.271990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:27:19.273338 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:27:19.274636 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:27:19.274716 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:27:19.275731 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:27:19.279298 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:27:19.283067 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:27:19.288775 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
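The lvm warning just above is the expected path on images without the lvmetad metadata daemon: activation simply falls back to scanning block devices. Done by hand, that fallback amounts to something like this sketch:

    # Roughly what lvm2-activation-early does when lvmetad is unreachable:
    # scan devices for PV signatures, then activate any volume groups found
    # (this image has none, so the scan comes back empty).
    pvscan          # enumerate physical volumes directly from the devices
    vgchange -ay    # activate all logical volumes in all discovered VGs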
Jul 6 23:27:19.290652 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:27:19.292152 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:27:19.297197 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:27:19.299373 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:27:19.302951 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:27:19.305411 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:27:19.307009 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:27:19.308140 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:27:19.309304 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:19.309386 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:27:19.311820 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:27:19.315779 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:27:19.322436 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:27:19.323455 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:27:19.327198 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:27:19.328913 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:27:19.334695 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:27:19.341647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:27:19.346439 jq[1456]: false Jul 6 23:27:19.348352 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:27:19.356821 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:27:19.366777 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:27:19.369954 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:27:19.371072 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:27:19.380820 systemd[1]: Starting update-engine.service - Update Engine... 
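ssh-key-proc-cmdline.service, started a few entries above and finishing just below, installs a key passed on the kernel command line. A hedged sketch of the idea only; the sshkey= parameter name and the core user paths are assumptions here, and `systemctl cat ssh-key-proc-cmdline.service` shows the real logic:

    # Sketch: lift an authorized key out of /proc/cmdline; the parameter name
    # and target user are illustrative assumptions, not the unit's actual script.
    if grep -q 'sshkey="' /proc/cmdline; then
        key=$(sed -n 's/.*sshkey="\([^"]*\)".*/\1/p' /proc/cmdline)
        install -d -m 700 -o core -g core /home/core/.ssh
        echo "$key" >>/home/core/.ssh/authorized_keys
        chown core:core /home/core/.ssh/authorized_keys
    fi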
Jul 6 23:27:19.384789 extend-filesystems[1457]: Found loop3 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found loop4 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found loop5 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found sr0 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda1 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda2 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda3 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found usr Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda4 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda6 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda7 Jul 6 23:27:19.387502 extend-filesystems[1457]: Found vda9 Jul 6 23:27:19.387502 extend-filesystems[1457]: Checking size of /dev/vda9 Jul 6 23:27:19.384938 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:27:19.391140 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:27:19.392327 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:27:19.396566 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:27:19.397263 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:27:19.407145 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:27:19.411887 update_engine[1467]: I20250706 23:27:19.411775 1467 main.cc:92] Flatcar Update Engine starting Jul 6 23:27:19.419245 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:27:19.419500 dbus-daemon[1455]: [system] SELinux support is enabled Jul 6 23:27:19.419732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:27:19.422459 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:27:19.424989 jq[1472]: true Jul 6 23:27:19.430711 update_engine[1467]: I20250706 23:27:19.430639 1467 update_check_scheduler.cc:74] Next update check in 5m41s Jul 6 23:27:19.434772 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:27:19.439125 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:27:19.439178 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:27:19.440848 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:27:19.440891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:27:19.442942 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:27:19.444375 tar[1476]: linux-amd64/LICENSE Jul 6 23:27:19.444937 tar[1476]: linux-amd64/helm Jul 6 23:27:19.446913 jq[1485]: true Jul 6 23:27:19.456887 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:27:19.483679 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:27:19.484112 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:27:19.493598 systemd-logind[1464]: New seat seat0. 
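The extend-filesystems device walk above ends, in the lines that follow, with an online grow of /dev/vda9 from 553472 to 1864699 blocks. Reduced to a sketch (growpart from cloud-utils is an assumption; the service has its own implementation), the operation is:

    # Online root grow, as the resize2fs output below reports: extend the
    # partition, then grow ext4 in place; ext4 supports resizing while mounted.
    growpart /dev/vda 9     # push partition 9 to the end of the disk (assumed tool)
    resize2fs /dev/vda9     # the "on-line resizing required" path in the log below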
Jul 6 23:27:19.506913 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:27:19.513369 extend-filesystems[1457]: Resized partition /dev/vda9 Jul 6 23:27:19.539545 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:27:19.550439 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:27:19.566437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1412) Jul 6 23:27:19.637132 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:27:19.646393 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:27:19.654437 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:27:19.685166 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:27:19.686831 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:27:19.686831 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:27:19.686831 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:27:19.700488 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Jul 6 23:27:19.690497 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:27:19.704948 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:27:19.690854 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:27:19.693876 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:27:19.744251 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:27:19.745994 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:27:19.755729 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:27:19.756136 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:27:19.773991 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:27:19.826815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:27:19.836885 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:27:19.841023 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:27:19.842621 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:27:19.962162 containerd[1481]: time="2025-07-06T23:27:19.961964925Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:27:20.001166 containerd[1481]: time="2025-07-06T23:27:20.001089263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.003804 containerd[1481]: time="2025-07-06T23:27:20.003544667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:27:20.003804 containerd[1481]: time="2025-07-06T23:27:20.003572580Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:27:20.003804 containerd[1481]: time="2025-07-06T23:27:20.003589762Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 6 23:27:20.003804 containerd[1481]: time="2025-07-06T23:27:20.003810856Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:27:20.004013 containerd[1481]: time="2025-07-06T23:27:20.003831495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004013 containerd[1481]: time="2025-07-06T23:27:20.003923187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004013 containerd[1481]: time="2025-07-06T23:27:20.003935761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004285 containerd[1481]: time="2025-07-06T23:27:20.004257283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004285 containerd[1481]: time="2025-07-06T23:27:20.004277672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004328 containerd[1481]: time="2025-07-06T23:27:20.004290396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004328 containerd[1481]: time="2025-07-06T23:27:20.004300294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004454 containerd[1481]: time="2025-07-06T23:27:20.004431340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004739 containerd[1481]: time="2025-07-06T23:27:20.004706506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004948 containerd[1481]: time="2025-07-06T23:27:20.004914015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:27:20.004948 containerd[1481]: time="2025-07-06T23:27:20.004935345Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:27:20.005077 containerd[1481]: time="2025-07-06T23:27:20.005054949Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:27:20.005148 containerd[1481]: time="2025-07-06T23:27:20.005128307Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:27:20.081477 containerd[1481]: time="2025-07-06T23:27:20.081368958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:27:20.081635 containerd[1481]: time="2025-07-06T23:27:20.081519680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:27:20.081635 containerd[1481]: time="2025-07-06T23:27:20.081550689Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 6 23:27:20.081635 containerd[1481]: time="2025-07-06T23:27:20.081576216Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:27:20.081635 containerd[1481]: time="2025-07-06T23:27:20.081629777Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:27:20.082076 containerd[1481]: time="2025-07-06T23:27:20.081931994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:27:20.082349 containerd[1481]: time="2025-07-06T23:27:20.082314831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:27:20.082676 containerd[1481]: time="2025-07-06T23:27:20.082639450Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:27:20.082676 containerd[1481]: time="2025-07-06T23:27:20.082659878Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:27:20.082676 containerd[1481]: time="2025-07-06T23:27:20.082674506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:27:20.082802 containerd[1481]: time="2025-07-06T23:27:20.082690456Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082802 containerd[1481]: time="2025-07-06T23:27:20.082705824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082802 containerd[1481]: time="2025-07-06T23:27:20.082739508Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082802 containerd[1481]: time="2025-07-06T23:27:20.082759776Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082802 containerd[1481]: time="2025-07-06T23:27:20.082774333Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082802 containerd[1481]: time="2025-07-06T23:27:20.082789702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082940 containerd[1481]: time="2025-07-06T23:27:20.082806153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082940 containerd[1481]: time="2025-07-06T23:27:20.082819788Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:27:20.082940 containerd[1481]: time="2025-07-06T23:27:20.082877506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.082940 containerd[1481]: time="2025-07-06T23:27:20.082892515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.082940 containerd[1481]: time="2025-07-06T23:27:20.082918613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.082940 containerd[1481]: time="2025-07-06T23:27:20.082932489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.082946786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.082960031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.082973366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.082987733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.083001258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.083020024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.083035653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.083047665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.083058716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083091 containerd[1481]: time="2025-07-06T23:27:20.083095245Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083116645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083129359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083142543Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083198839Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083216121Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083227412Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083268139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083279851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083336257Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083367645Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:27:20.083388 containerd[1481]: time="2025-07-06T23:27:20.083385419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:27:20.083848 containerd[1481]: time="2025-07-06T23:27:20.083773376Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:27:20.083848 containerd[1481]: time="2025-07-06T23:27:20.083844520Z" level=info msg="Connect containerd service" Jul 6 23:27:20.083848 containerd[1481]: time="2025-07-06T23:27:20.083884344Z" level=info msg="using legacy CRI server" Jul 6 23:27:20.083848 containerd[1481]: time="2025-07-06T23:27:20.083893111Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:27:20.084216 containerd[1481]: time="2025-07-06T23:27:20.084201258Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:27:20.085152 containerd[1481]: 
time="2025-07-06T23:27:20.085105384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:27:20.085415 containerd[1481]: time="2025-07-06T23:27:20.085314105Z" level=info msg="Start subscribing containerd event" Jul 6 23:27:20.086348 containerd[1481]: time="2025-07-06T23:27:20.086164510Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:27:20.086348 containerd[1481]: time="2025-07-06T23:27:20.086247425Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:27:20.086573 containerd[1481]: time="2025-07-06T23:27:20.086540274Z" level=info msg="Start recovering state" Jul 6 23:27:20.086774 containerd[1481]: time="2025-07-06T23:27:20.086753424Z" level=info msg="Start event monitor" Jul 6 23:27:20.087688 containerd[1481]: time="2025-07-06T23:27:20.086853081Z" level=info msg="Start snapshots syncer" Jul 6 23:27:20.087688 containerd[1481]: time="2025-07-06T23:27:20.086914255Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:27:20.087688 containerd[1481]: time="2025-07-06T23:27:20.086930776Z" level=info msg="Start streaming server" Jul 6 23:27:20.087688 containerd[1481]: time="2025-07-06T23:27:20.087053256Z" level=info msg="containerd successfully booted in 0.131257s" Jul 6 23:27:20.087553 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:27:20.294565 tar[1476]: linux-amd64/README.md Jul 6 23:27:20.312786 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:27:20.584693 systemd-networkd[1404]: eth0: Gained IPv6LL Jul 6 23:27:20.588622 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:27:20.590749 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:27:20.605853 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:27:20.608932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:20.611586 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:27:20.631047 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:27:20.631367 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:27:20.633620 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:27:20.639070 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:27:21.184177 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:27:21.236073 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:46856.service - OpenSSH per-connection server daemon (10.0.0.1:46856). Jul 6 23:27:21.338667 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 46856 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:21.341390 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:21.349575 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:27:21.362634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:27:21.372162 systemd-logind[1464]: New session 1 of user core. Jul 6 23:27:21.398765 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
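The containerd CNI error above ("no network config found in /etc/cni/net.d") is benign at this stage and clears once a network config exists; on a kubeadm-style cluster the network add-on installs one later. For reference, the standard bridge example from the containerd/CNI documentation looks like this (file name and subnet illustrative):

    # Minimal CNI bridge config of the kind containerd is looking for above;
    # Kubernetes normally gets this from a network add-on, not by hand.
    mkdir -p /etc/cni/net.d
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF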
Jul 6 23:27:21.409793 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:27:21.418241 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:27:21.421454 systemd-logind[1464]: New session c1 of user core. Jul 6 23:27:21.606757 systemd[1568]: Queued start job for default target default.target. Jul 6 23:27:21.649322 systemd[1568]: Created slice app.slice - User Application Slice. Jul 6 23:27:21.649369 systemd[1568]: Reached target paths.target - Paths. Jul 6 23:27:21.649480 systemd[1568]: Reached target timers.target - Timers. Jul 6 23:27:21.652972 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:27:21.669884 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:27:21.670093 systemd[1568]: Reached target sockets.target - Sockets. Jul 6 23:27:21.670138 systemd[1568]: Reached target basic.target - Basic System. Jul 6 23:27:21.670184 systemd[1568]: Reached target default.target - Main User Target. Jul 6 23:27:21.670217 systemd[1568]: Startup finished in 238ms. Jul 6 23:27:21.671035 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:27:21.687626 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:27:21.759722 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:46858.service - OpenSSH per-connection server daemon (10.0.0.1:46858). Jul 6 23:27:21.864881 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 46858 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:21.867272 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:21.872252 systemd-logind[1464]: New session 2 of user core. Jul 6 23:27:21.885704 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:27:21.958877 sshd[1581]: Connection closed by 10.0.0.1 port 46858 Jul 6 23:27:21.959306 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:21.997079 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:46858.service: Deactivated successfully. Jul 6 23:27:21.999086 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:27:21.999908 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:27:22.012856 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:46870.service - OpenSSH per-connection server daemon (10.0.0.1:46870). Jul 6 23:27:22.015940 systemd-logind[1464]: Removed session 2. Jul 6 23:27:22.027990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:22.029759 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:27:22.031542 systemd[1]: Startup finished in 1.500s (kernel) + 7.607s (initrd) + 6.274s (userspace) = 15.383s. Jul 6 23:27:22.035069 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:22.113846 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 46870 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:22.116639 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:22.122878 systemd-logind[1464]: New session 3 of user core. Jul 6 23:27:22.131821 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 6 23:27:22.196665 sshd[1599]: Connection closed by 10.0.0.1 port 46870 Jul 6 23:27:22.197802 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:22.203499 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:46870.service: Deactivated successfully. Jul 6 23:27:22.205969 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:27:22.207688 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:27:22.209346 systemd-logind[1464]: Removed session 3. Jul 6 23:27:22.912850 kubelet[1593]: E0706 23:27:22.912743 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:22.917116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:22.917351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:22.917889 systemd[1]: kubelet.service: Consumed 2.124s CPU time, 268.5M memory peak. Jul 6 23:27:32.210110 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:50850.service - OpenSSH per-connection server daemon (10.0.0.1:50850). Jul 6 23:27:32.251583 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 50850 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:32.253171 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:32.257927 systemd-logind[1464]: New session 4 of user core. Jul 6 23:27:32.268550 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:27:32.322102 sshd[1615]: Connection closed by 10.0.0.1 port 50850 Jul 6 23:27:32.322524 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:32.335043 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:50850.service: Deactivated successfully. Jul 6 23:27:32.336786 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:27:32.338534 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:27:32.347680 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:50858.service - OpenSSH per-connection server daemon (10.0.0.1:50858). Jul 6 23:27:32.348790 systemd-logind[1464]: Removed session 4. Jul 6 23:27:32.383505 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 50858 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:32.385081 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:32.390272 systemd-logind[1464]: New session 5 of user core. Jul 6 23:27:32.399643 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:27:32.451375 sshd[1623]: Connection closed by 10.0.0.1 port 50858 Jul 6 23:27:32.451816 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:32.469083 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:50858.service: Deactivated successfully. Jul 6 23:27:32.470962 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:27:32.472418 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:27:32.473742 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:50874.service - OpenSSH per-connection server daemon (10.0.0.1:50874). Jul 6 23:27:32.474465 systemd-logind[1464]: Removed session 5. 
Jul 6 23:27:32.513677 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 50874 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:32.515139 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:32.519509 systemd-logind[1464]: New session 6 of user core. Jul 6 23:27:32.529553 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:27:32.583159 sshd[1631]: Connection closed by 10.0.0.1 port 50874 Jul 6 23:27:32.583911 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:32.597417 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:50874.service: Deactivated successfully. Jul 6 23:27:32.599687 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:27:32.601470 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:27:32.608760 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:50890.service - OpenSSH per-connection server daemon (10.0.0.1:50890). Jul 6 23:27:32.609869 systemd-logind[1464]: Removed session 6. Jul 6 23:27:32.646038 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 50890 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:32.647639 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:32.652256 systemd-logind[1464]: New session 7 of user core. Jul 6 23:27:32.662609 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:27:32.722428 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:27:32.722796 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:32.740437 sudo[1640]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:32.742100 sshd[1639]: Connection closed by 10.0.0.1 port 50890 Jul 6 23:27:32.742558 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:32.764659 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:50890.service: Deactivated successfully. Jul 6 23:27:32.767026 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:27:32.768617 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:27:32.777696 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:50904.service - OpenSSH per-connection server daemon (10.0.0.1:50904). Jul 6 23:27:32.778642 systemd-logind[1464]: Removed session 7. Jul 6 23:27:32.814764 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:32.816370 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:32.821293 systemd-logind[1464]: New session 8 of user core. Jul 6 23:27:32.830589 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:27:32.885199 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:27:32.885587 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:32.889920 sudo[1650]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:32.896796 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:27:32.897133 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:32.918694 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 6 23:27:32.919663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:27:32.921172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:32.950535 augenrules[1675]: No rules Jul 6 23:27:32.952508 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:27:32.952787 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:27:32.954286 sudo[1649]: pam_unix(sudo:session): session closed for user root Jul 6 23:27:32.955871 sshd[1648]: Connection closed by 10.0.0.1 port 50904 Jul 6 23:27:32.956177 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:32.974126 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:50904.service: Deactivated successfully. Jul 6 23:27:32.975953 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:27:32.977367 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:27:32.987645 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:50906.service - OpenSSH per-connection server daemon (10.0.0.1:50906). Jul 6 23:27:32.988907 systemd-logind[1464]: Removed session 8. Jul 6 23:27:33.023460 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 50906 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:27:33.025284 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:33.030078 systemd-logind[1464]: New session 9 of user core. Jul 6 23:27:33.039555 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:27:33.094085 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:27:33.094460 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:27:33.160554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:27:33.165480 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:33.246916 kubelet[1702]: E0706 23:27:33.246765 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:33.253574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:33.253810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:33.254202 systemd[1]: kubelet.service: Consumed 322ms CPU time, 113.2M memory peak. Jul 6 23:27:33.551687 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:27:33.551863 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:27:34.095627 dockerd[1720]: time="2025-07-06T23:27:34.095532391Z" level=info msg="Starting up" Jul 6 23:27:35.270209 dockerd[1720]: time="2025-07-06T23:27:35.269758604Z" level=info msg="Loading containers: start." Jul 6 23:27:35.777444 kernel: Initializing XFRM netlink socket Jul 6 23:27:35.874054 systemd-networkd[1404]: docker0: Link UP Jul 6 23:27:35.927088 dockerd[1720]: time="2025-07-06T23:27:35.927038514Z" level=info msg="Loading containers: done." 
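The kubelet failure above is the classic pre-join state: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, so the unit exits and systemd keeps rescheduling it (hence the restart counter climbing through this log). Purely as a sketch of the file's shape, not a working node setup, a minimal KubeletConfiguration looks like:

    # Stub of the file kubelet reports missing above; kubeadm writes the real
    # one during init/join. The fields shown are valid but illustrative.
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF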
Jul 6 23:27:35.945294 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3967082210-merged.mount: Deactivated successfully. Jul 6 23:27:36.098096 dockerd[1720]: time="2025-07-06T23:27:36.097947719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:27:36.098096 dockerd[1720]: time="2025-07-06T23:27:36.098090747Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:27:36.098276 dockerd[1720]: time="2025-07-06T23:27:36.098240297Z" level=info msg="Daemon has completed initialization" Jul 6 23:27:36.261431 dockerd[1720]: time="2025-07-06T23:27:36.261329890Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:27:36.261702 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:27:37.302872 containerd[1481]: time="2025-07-06T23:27:37.302816187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:27:37.890292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796038260.mount: Deactivated successfully. Jul 6 23:27:39.214460 containerd[1481]: time="2025-07-06T23:27:39.214340555Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 6 23:27:39.214460 containerd[1481]: time="2025-07-06T23:27:39.214386661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:39.215749 containerd[1481]: time="2025-07-06T23:27:39.215706656Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:39.219475 containerd[1481]: time="2025-07-06T23:27:39.219423305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:39.220748 containerd[1481]: time="2025-07-06T23:27:39.220707613Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.917826304s" Jul 6 23:27:39.220748 containerd[1481]: time="2025-07-06T23:27:39.220737739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:27:39.221693 containerd[1481]: time="2025-07-06T23:27:39.221637977Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:27:43.390641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:27:43.399556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:43.588246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
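The PullImage/ImageCreate lines above come from containerd's CRI plugin. The same pull can be reproduced by hand through the CRI socket the daemon is serving on, for example with crictl:

    # Re-run the kube-apiserver pull above through the CRI endpoint from the log.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.6
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        images | grep kube-apiserver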
Jul 6 23:27:43.596021 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:43.676164 kubelet[1980]: E0706 23:27:43.675989 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:43.680314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:43.680626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:43.681118 systemd[1]: kubelet.service: Consumed 276ms CPU time, 110.8M memory peak. Jul 6 23:27:44.817035 containerd[1481]: time="2025-07-06T23:27:44.816962880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:44.817735 containerd[1481]: time="2025-07-06T23:27:44.817668603Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 6 23:27:44.818897 containerd[1481]: time="2025-07-06T23:27:44.818864095Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:44.822211 containerd[1481]: time="2025-07-06T23:27:44.822145577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:44.823257 containerd[1481]: time="2025-07-06T23:27:44.823222907Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 5.601551407s" Jul 6 23:27:44.823333 containerd[1481]: time="2025-07-06T23:27:44.823262391Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:27:44.823951 containerd[1481]: time="2025-07-06T23:27:44.823921608Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:27:47.344354 containerd[1481]: time="2025-07-06T23:27:47.344259861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:47.441419 containerd[1481]: time="2025-07-06T23:27:47.441322844Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 6 23:27:47.502116 containerd[1481]: time="2025-07-06T23:27:47.502016244Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:47.601628 containerd[1481]: time="2025-07-06T23:27:47.601380953Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:47.602755 containerd[1481]: time="2025-07-06T23:27:47.602673697Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.778695553s" Jul 6 23:27:47.602755 containerd[1481]: time="2025-07-06T23:27:47.602743518Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:27:47.603492 containerd[1481]: time="2025-07-06T23:27:47.603387686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:27:50.245526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432199234.mount: Deactivated successfully. Jul 6 23:27:53.738527 containerd[1481]: time="2025-07-06T23:27:53.738445539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:53.755190 containerd[1481]: time="2025-07-06T23:27:53.755113982Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 6 23:27:53.771206 containerd[1481]: time="2025-07-06T23:27:53.771132824Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:53.851179 containerd[1481]: time="2025-07-06T23:27:53.851085981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:53.851846 containerd[1481]: time="2025-07-06T23:27:53.851787931Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 6.248353246s" Jul 6 23:27:53.851886 containerd[1481]: time="2025-07-06T23:27:53.851844110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:27:53.852514 containerd[1481]: time="2025-07-06T23:27:53.852481446Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:27:53.890659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:27:53.899656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:27:54.079771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
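"Scheduled restart job, restart counter is at N" is systemd's Restart= machinery re-queuing kubelet after each config-file failure. The cadence is set in the unit (kubeadm's packaging ships its own); a drop-in of this shape, with illustrative values, is how it would be tuned:

    # Sketch of a restart-policy drop-in; kubelet's unit already defines its
    # own Restart=/RestartSec, these values are only illustrative.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload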
Jul 6 23:27:54.084280 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:27:54.647804 kubelet[2012]: E0706 23:27:54.647615 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:27:54.652381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:27:54.652604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:27:54.652982 systemd[1]: kubelet.service: Consumed 275ms CPU time, 113M memory peak. Jul 6 23:27:56.765576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700272237.mount: Deactivated successfully. Jul 6 23:27:59.758102 containerd[1481]: time="2025-07-06T23:27:59.757900553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:59.780204 containerd[1481]: time="2025-07-06T23:27:59.780082825Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:27:59.810987 containerd[1481]: time="2025-07-06T23:27:59.810900405Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:59.846719 containerd[1481]: time="2025-07-06T23:27:59.846622843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:27:59.848202 containerd[1481]: time="2025-07-06T23:27:59.848146711Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 5.995632411s" Jul 6 23:27:59.848292 containerd[1481]: time="2025-07-06T23:27:59.848204280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:27:59.848994 containerd[1481]: time="2025-07-06T23:27:59.848946356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:28:00.796207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013884583.mount: Deactivated successfully. 
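Worth noting against the pause:3.10 pull requested just above: the CRI config dumped earlier in this log still reports SandboxImage:registry.k8s.io/pause:3.8. Aligning the two is a one-line containerd setting; shown here only as a sketch, with the TOML kept in comments:

    # Inspect and pin the CRI sandbox image (containerd 1.7, config version 2).
    containerd config default | grep sandbox_image   # what the default would be
    # To change it, merge into the CRI plugin table in /etc/containerd/config.toml:
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.10"
    # then restart the daemon:
    systemctl restart containerd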
Jul 6 23:28:00.819338 containerd[1481]: time="2025-07-06T23:28:00.818002854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:00.821351 containerd[1481]: time="2025-07-06T23:28:00.821291332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:28:00.824232 containerd[1481]: time="2025-07-06T23:28:00.824142646Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:00.829992 containerd[1481]: time="2025-07-06T23:28:00.829867817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:00.831383 containerd[1481]: time="2025-07-06T23:28:00.831313222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 982.322071ms" Jul 6 23:28:00.831383 containerd[1481]: time="2025-07-06T23:28:00.831352867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:28:00.831979 containerd[1481]: time="2025-07-06T23:28:00.831946600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:28:01.554174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766592806.mount: Deactivated successfully. Jul 6 23:28:04.678635 update_engine[1467]: I20250706 23:28:04.678484 1467 update_attempter.cc:509] Updating boot flags... Jul 6 23:28:04.731065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 6 23:28:04.739704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:04.755437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2097) Jul 6 23:28:04.944536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:04.950424 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2102) Jul 6 23:28:04.957833 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:28:05.877170 kubelet[2111]: E0706 23:28:05.877095 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:28:05.881815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:28:05.882090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:28:05.882508 systemd[1]: kubelet.service: Consumed 265ms CPU time, 115M memory peak. 
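
The restart cadence is visible in the timestamps: the previous run failed at 23:27:54.652 and systemd scheduled the next attempt at 23:28:04.731, roughly ten seconds later, consistent with a RestartSec of about 10s on kubelet.service (an inference from the timestamps; the unit file itself is not shown in this log).

    # Gap between the failure above and the scheduled restart, from the log.
    from datetime import datetime

    failed    = datetime.fromisoformat("2025-07-06 23:27:54.652604")
    restarted = datetime.fromisoformat("2025-07-06 23:28:04.731065")
    print(f"{(restarted - failed).total_seconds():.1f}s")  # ~10.1s
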
Jul 6 23:28:06.008428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2102) Jul 6 23:28:10.104022 containerd[1481]: time="2025-07-06T23:28:10.103931972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:10.111172 containerd[1481]: time="2025-07-06T23:28:10.111053931Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 6 23:28:10.127210 containerd[1481]: time="2025-07-06T23:28:10.126581687Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:10.147092 containerd[1481]: time="2025-07-06T23:28:10.145030547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:10.147092 containerd[1481]: time="2025-07-06T23:28:10.146675779Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.314694243s" Jul 6 23:28:10.147092 containerd[1481]: time="2025-07-06T23:28:10.146714523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:28:12.863088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:12.863359 systemd[1]: kubelet.service: Consumed 265ms CPU time, 115M memory peak. Jul 6 23:28:12.877722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:12.912371 systemd[1]: Reload requested from client PID 2193 ('systemctl') (unit session-9.scope)... Jul 6 23:28:12.912392 systemd[1]: Reloading... Jul 6 23:28:13.019726 zram_generator::config[2240]: No configuration found. Jul 6 23:28:14.134268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:14.257189 systemd[1]: Reloading finished in 1344 ms. Jul 6 23:28:14.313947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:14.319806 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:14.320770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:14.321187 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:28:14.321671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:14.321725 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.2M memory peak. Jul 6 23:28:14.324896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:14.517453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
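
The kubelet that comes up next (the [2288] instance below) immediately warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated in favor of the file passed via --config. A minimal sketch of the equivalent KubeletConfiguration keys (field names per the kubelet.config.k8s.io/v1beta1 schema; the endpoint value is a common containerd default assumed here, while the plugin dir is the one this log prints later):

    # Hypothetical config-file equivalents of the deprecated flags warned
    # about in the kubelet startup lines below.
    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",  # assumed
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",  # from this log
    }
    print(json.dumps(kubelet_config, indent=2))
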
Jul 6 23:28:14.523648 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:14.580162 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:14.580162 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:28:14.580162 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:14.580684 kubelet[2288]: I0706 23:28:14.580234 2288 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:28:14.807144 kubelet[2288]: I0706 23:28:14.806975 2288 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:28:14.807144 kubelet[2288]: I0706 23:28:14.807020 2288 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:28:14.809168 kubelet[2288]: I0706 23:28:14.808700 2288 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:28:14.840653 kubelet[2288]: E0706 23:28:14.840606 2288 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:14.845991 kubelet[2288]: I0706 23:28:14.845944 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:14.873107 kubelet[2288]: E0706 23:28:14.873059 2288 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:28:14.873107 kubelet[2288]: I0706 23:28:14.873097 2288 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:28:14.880553 kubelet[2288]: I0706 23:28:14.880509 2288 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:28:14.880896 kubelet[2288]: I0706 23:28:14.880833 2288 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:28:14.881191 kubelet[2288]: I0706 23:28:14.880888 2288 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:28:14.882213 kubelet[2288]: I0706 23:28:14.882104 2288 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:28:14.882213 kubelet[2288]: I0706 23:28:14.882133 2288 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:28:14.882456 kubelet[2288]: I0706 23:28:14.882336 2288 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:14.889844 kubelet[2288]: I0706 23:28:14.889768 2288 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:28:14.889844 kubelet[2288]: I0706 23:28:14.889844 2288 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:28:14.889938 kubelet[2288]: I0706 23:28:14.889876 2288 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:28:14.889938 kubelet[2288]: I0706 23:28:14.889896 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:28:14.894134 kubelet[2288]: W0706 23:28:14.894052 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:14.894134 kubelet[2288]: E0706 23:28:14.894124 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:14.894134 kubelet[2288]: W0706 23:28:14.894051 2288 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:14.894370 kubelet[2288]: E0706 23:28:14.894159 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:14.900741 kubelet[2288]: I0706 23:28:14.900698 2288 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:28:14.901339 kubelet[2288]: I0706 23:28:14.901322 2288 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:28:14.901461 kubelet[2288]: W0706 23:28:14.901439 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:28:14.905776 kubelet[2288]: I0706 23:28:14.905729 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:28:14.905943 kubelet[2288]: I0706 23:28:14.905801 2288 server.go:1287] "Started kubelet" Jul 6 23:28:14.907441 kubelet[2288]: I0706 23:28:14.907381 2288 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:28:14.908203 kubelet[2288]: I0706 23:28:14.908135 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:28:14.908540 kubelet[2288]: I0706 23:28:14.908515 2288 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:28:14.908856 kubelet[2288]: I0706 23:28:14.908831 2288 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:28:14.908896 kubelet[2288]: I0706 23:28:14.908872 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:28:14.910071 kubelet[2288]: I0706 23:28:14.910021 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:28:14.914556 kubelet[2288]: E0706 23:28:14.914525 2288 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:28:14.914712 kubelet[2288]: E0706 23:28:14.914661 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:14.914837 kubelet[2288]: I0706 23:28:14.914809 2288 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:28:14.915054 kubelet[2288]: I0706 23:28:14.915032 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:28:14.915114 kubelet[2288]: I0706 23:28:14.915099 2288 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:28:14.915552 kubelet[2288]: W0706 23:28:14.915506 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:14.915608 kubelet[2288]: E0706 23:28:14.915553 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:14.916776 kubelet[2288]: E0706 23:28:14.916718 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Jul 6 23:28:14.916936 kubelet[2288]: I0706 23:28:14.916855 2288 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:28:14.916936 kubelet[2288]: I0706 23:28:14.916866 2288 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:28:14.916997 kubelet[2288]: I0706 23:28:14.916936 2288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:28:15.026740 kubelet[2288]: E0706 23:28:15.026159 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.029162 kubelet[2288]: E0706 23:28:15.027373 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcd495494bbff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:28:14.905752575 +0000 UTC m=+0.374036034,LastTimestamp:2025-07-06 23:28:14.905752575 +0000 UTC m=+0.374036034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:28:15.034695 kubelet[2288]: I0706 23:28:15.034645 2288 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 6 23:28:15.036080 kubelet[2288]: I0706 23:28:15.036045 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:28:15.036080 kubelet[2288]: I0706 23:28:15.036072 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:28:15.036172 kubelet[2288]: I0706 23:28:15.036096 2288 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:15.038157 kubelet[2288]: I0706 23:28:15.038122 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:28:15.038211 kubelet[2288]: I0706 23:28:15.038165 2288 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:28:15.038211 kubelet[2288]: I0706 23:28:15.038192 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:28:15.038211 kubelet[2288]: I0706 23:28:15.038201 2288 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:28:15.038336 kubelet[2288]: E0706 23:28:15.038260 2288 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:15.038839 kubelet[2288]: W0706 23:28:15.038787 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:15.040476 kubelet[2288]: E0706 23:28:15.038836 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:15.118624 kubelet[2288]: E0706 23:28:15.118436 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Jul 6 23:28:15.126538 kubelet[2288]: E0706 23:28:15.126471 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.138731 kubelet[2288]: E0706 23:28:15.138645 2288 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:28:15.227440 kubelet[2288]: E0706 23:28:15.227315 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.328087 kubelet[2288]: E0706 23:28:15.327991 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.339350 kubelet[2288]: E0706 23:28:15.339270 2288 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:28:15.428685 kubelet[2288]: E0706 23:28:15.428594 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.519972 kubelet[2288]: E0706 23:28:15.519897 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" 
Jul 6 23:28:15.529079 kubelet[2288]: E0706 23:28:15.529028 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.629811 kubelet[2288]: E0706 23:28:15.629722 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.730799 kubelet[2288]: E0706 23:28:15.730551 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.739926 kubelet[2288]: E0706 23:28:15.739835 2288 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:28:15.831569 kubelet[2288]: E0706 23:28:15.831477 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:15.932523 kubelet[2288]: E0706 23:28:15.932449 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:16.028643 kubelet[2288]: W0706 23:28:16.028491 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:16.028787 kubelet[2288]: E0706 23:28:16.028545 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:16.033098 kubelet[2288]: E0706 23:28:16.033042 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:16.134015 kubelet[2288]: E0706 23:28:16.133934 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:16.150707 kubelet[2288]: W0706 23:28:16.150673 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:16.150826 kubelet[2288]: E0706 23:28:16.150716 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:16.156437 kubelet[2288]: W0706 23:28:16.156383 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:16.156538 kubelet[2288]: E0706 23:28:16.156442 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:16.188150 kubelet[2288]: W0706 23:28:16.188114 2288 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:16.188150 kubelet[2288]: E0706 23:28:16.188149 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:16.235145 kubelet[2288]: E0706 23:28:16.235042 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:16.321255 kubelet[2288]: E0706 23:28:16.321102 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Jul 6 23:28:16.335247 kubelet[2288]: E0706 23:28:16.335186 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:16.374274 kubelet[2288]: I0706 23:28:16.374196 2288 policy_none.go:49] "None policy: Start" Jul 6 23:28:16.374274 kubelet[2288]: I0706 23:28:16.374249 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:28:16.374274 kubelet[2288]: I0706 23:28:16.374273 2288 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:28:16.411432 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:28:16.429261 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:28:16.433680 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:28:16.435597 kubelet[2288]: E0706 23:28:16.435566 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:16.445993 kubelet[2288]: I0706 23:28:16.445913 2288 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:28:16.446547 kubelet[2288]: I0706 23:28:16.446214 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:28:16.446547 kubelet[2288]: I0706 23:28:16.446230 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:28:16.446547 kubelet[2288]: I0706 23:28:16.446507 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:28:16.447750 kubelet[2288]: E0706 23:28:16.447721 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:28:16.447832 kubelet[2288]: E0706 23:28:16.447776 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:28:16.547929 kubelet[2288]: I0706 23:28:16.547856 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:16.549448 kubelet[2288]: E0706 23:28:16.548604 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 6 23:28:16.551323 systemd[1]: Created slice kubepods-burstable-podb2a3a5fcb966b445c700edd58e67e246.slice - libcontainer container kubepods-burstable-podb2a3a5fcb966b445c700edd58e67e246.slice. Jul 6 23:28:16.567953 kubelet[2288]: E0706 23:28:16.567872 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:16.572149 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 6 23:28:16.575791 kubelet[2288]: E0706 23:28:16.575761 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:16.579703 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 6 23:28:16.583061 kubelet[2288]: E0706 23:28:16.582265 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:16.637433 kubelet[2288]: I0706 23:28:16.637313 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2a3a5fcb966b445c700edd58e67e246-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b2a3a5fcb966b445c700edd58e67e246\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:16.637433 kubelet[2288]: I0706 23:28:16.637371 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:16.637433 kubelet[2288]: I0706 23:28:16.637440 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:16.637433 kubelet[2288]: I0706 23:28:16.637462 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:16.638136 kubelet[2288]: I0706 23:28:16.637484 2288 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:16.638136 kubelet[2288]: I0706 23:28:16.637552 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2a3a5fcb966b445c700edd58e67e246-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b2a3a5fcb966b445c700edd58e67e246\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:16.638136 kubelet[2288]: I0706 23:28:16.637574 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2a3a5fcb966b445c700edd58e67e246-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b2a3a5fcb966b445c700edd58e67e246\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:16.638136 kubelet[2288]: I0706 23:28:16.637666 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:16.638136 kubelet[2288]: I0706 23:28:16.637717 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:16.750648 kubelet[2288]: I0706 23:28:16.750593 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:16.751113 kubelet[2288]: E0706 23:28:16.751054 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 6 23:28:16.868704 kubelet[2288]: E0706 23:28:16.868551 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:16.869573 containerd[1481]: time="2025-07-06T23:28:16.869504577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b2a3a5fcb966b445c700edd58e67e246,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:16.876747 kubelet[2288]: E0706 23:28:16.876720 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:16.877414 containerd[1481]: time="2025-07-06T23:28:16.877343621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:16.883643 kubelet[2288]: E0706 23:28:16.883607 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:16.884146 containerd[1481]: 
time="2025-07-06T23:28:16.884075497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:16.886041 kubelet[2288]: E0706 23:28:16.886006 2288 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:17.153200 kubelet[2288]: I0706 23:28:17.153159 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:17.153691 kubelet[2288]: E0706 23:28:17.153637 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 6 23:28:17.922203 kubelet[2288]: E0706 23:28:17.922147 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="3.2s" Jul 6 23:28:17.956314 kubelet[2288]: I0706 23:28:17.956256 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:17.956852 kubelet[2288]: E0706 23:28:17.956800 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 6 23:28:18.408837 kubelet[2288]: W0706 23:28:18.408757 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:18.409114 kubelet[2288]: E0706 23:28:18.408842 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:18.601475 kubelet[2288]: W0706 23:28:18.601355 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:18.601475 kubelet[2288]: E0706 23:28:18.601477 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:18.714265 kubelet[2288]: W0706 23:28:18.714059 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:18.714265 kubelet[2288]: E0706 23:28:18.714147 2288 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:19.389740 kubelet[2288]: W0706 23:28:19.389637 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Jul 6 23:28:19.389740 kubelet[2288]: E0706 23:28:19.389740 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:28:19.559313 kubelet[2288]: I0706 23:28:19.559187 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:19.559691 kubelet[2288]: E0706 23:28:19.559643 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Jul 6 23:28:19.645823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598266840.mount: Deactivated successfully. Jul 6 23:28:19.998633 containerd[1481]: time="2025-07-06T23:28:19.998352636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:20.097093 containerd[1481]: time="2025-07-06T23:28:20.096975454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:28:20.101975 containerd[1481]: time="2025-07-06T23:28:20.101863712Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:20.133751 containerd[1481]: time="2025-07-06T23:28:20.133661213Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:20.152907 containerd[1481]: time="2025-07-06T23:28:20.152808717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:28:20.208455 containerd[1481]: time="2025-07-06T23:28:20.208365448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:20.236598 containerd[1481]: time="2025-07-06T23:28:20.236533403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:28:20.278519 containerd[1481]: time="2025-07-06T23:28:20.276680921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:28:20.278519 containerd[1481]: 
time="2025-07-06T23:28:20.278145790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.408504935s" Jul 6 23:28:20.343759 containerd[1481]: time="2025-07-06T23:28:20.343700071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.45950586s" Jul 6 23:28:20.344677 containerd[1481]: time="2025-07-06T23:28:20.344630123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.467127922s" Jul 6 23:28:20.739566 containerd[1481]: time="2025-07-06T23:28:20.739388342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:28:20.739566 containerd[1481]: time="2025-07-06T23:28:20.739521312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:28:20.741168 containerd[1481]: time="2025-07-06T23:28:20.739328359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:28:20.741168 containerd[1481]: time="2025-07-06T23:28:20.741038019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:28:20.741168 containerd[1481]: time="2025-07-06T23:28:20.741051915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:20.741168 containerd[1481]: time="2025-07-06T23:28:20.740783760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:20.741168 containerd[1481]: time="2025-07-06T23:28:20.740936137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:20.741504 containerd[1481]: time="2025-07-06T23:28:20.741156593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:20.755204 containerd[1481]: time="2025-07-06T23:28:20.754806121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:28:20.755204 containerd[1481]: time="2025-07-06T23:28:20.754871044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:28:20.755204 containerd[1481]: time="2025-07-06T23:28:20.754897283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:20.755204 containerd[1481]: time="2025-07-06T23:28:20.755054309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:20.777659 systemd[1]: Started cri-containerd-51dca5f5c12e069bbe048689846c64b97376b103c5b0dc6895fdc1c2baae0bde.scope - libcontainer container 51dca5f5c12e069bbe048689846c64b97376b103c5b0dc6895fdc1c2baae0bde. Jul 6 23:28:20.845375 systemd[1]: Started cri-containerd-f106bf066f404ea878b0455e1cea76042f4dfc19da63170b0d0ce540691aa42a.scope - libcontainer container f106bf066f404ea878b0455e1cea76042f4dfc19da63170b0d0ce540691aa42a. Jul 6 23:28:20.870647 systemd[1]: Started cri-containerd-4ab4dfd1bcd099553860ce4358f08b01a041ab65c3772c08dbc3926a9d16a2ad.scope - libcontainer container 4ab4dfd1bcd099553860ce4358f08b01a041ab65c3772c08dbc3926a9d16a2ad. Jul 6 23:28:20.898238 containerd[1481]: time="2025-07-06T23:28:20.898187791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"51dca5f5c12e069bbe048689846c64b97376b103c5b0dc6895fdc1c2baae0bde\"" Jul 6 23:28:20.899800 kubelet[2288]: E0706 23:28:20.899768 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:20.906892 containerd[1481]: time="2025-07-06T23:28:20.905026362Z" level=info msg="CreateContainer within sandbox \"51dca5f5c12e069bbe048689846c64b97376b103c5b0dc6895fdc1c2baae0bde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:28:20.913497 containerd[1481]: time="2025-07-06T23:28:20.913438176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b2a3a5fcb966b445c700edd58e67e246,Namespace:kube-system,Attempt:0,} returns sandbox id \"f106bf066f404ea878b0455e1cea76042f4dfc19da63170b0d0ce540691aa42a\"" Jul 6 23:28:20.914382 kubelet[2288]: E0706 23:28:20.914357 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:20.918160 containerd[1481]: time="2025-07-06T23:28:20.918112811Z" level=info msg="CreateContainer within sandbox \"f106bf066f404ea878b0455e1cea76042f4dfc19da63170b0d0ce540691aa42a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:28:20.940939 containerd[1481]: time="2025-07-06T23:28:20.940873881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab4dfd1bcd099553860ce4358f08b01a041ab65c3772c08dbc3926a9d16a2ad\"" Jul 6 23:28:20.941624 kubelet[2288]: E0706 23:28:20.941591 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:20.943131 containerd[1481]: time="2025-07-06T23:28:20.943089184Z" level=info msg="CreateContainer within sandbox \"4ab4dfd1bcd099553860ce4358f08b01a041ab65c3772c08dbc3926a9d16a2ad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:28:20.951135 containerd[1481]: time="2025-07-06T23:28:20.951096285Z" level=info msg="CreateContainer within sandbox 
\"51dca5f5c12e069bbe048689846c64b97376b103c5b0dc6895fdc1c2baae0bde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b47420ca127dfb7b9960a6daf349accb70975a5923fd17adf4532c123f95fd7f\"" Jul 6 23:28:20.952868 containerd[1481]: time="2025-07-06T23:28:20.951695273Z" level=info msg="StartContainer for \"b47420ca127dfb7b9960a6daf349accb70975a5923fd17adf4532c123f95fd7f\"" Jul 6 23:28:20.977757 containerd[1481]: time="2025-07-06T23:28:20.977708328Z" level=info msg="CreateContainer within sandbox \"f106bf066f404ea878b0455e1cea76042f4dfc19da63170b0d0ce540691aa42a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4007e0918ebe9bf979e6a342dc1ce71bbdc57688750a470af4350c6a48b9217f\"" Jul 6 23:28:20.978629 containerd[1481]: time="2025-07-06T23:28:20.978586563Z" level=info msg="StartContainer for \"4007e0918ebe9bf979e6a342dc1ce71bbdc57688750a470af4350c6a48b9217f\"" Jul 6 23:28:20.987876 systemd[1]: Started cri-containerd-b47420ca127dfb7b9960a6daf349accb70975a5923fd17adf4532c123f95fd7f.scope - libcontainer container b47420ca127dfb7b9960a6daf349accb70975a5923fd17adf4532c123f95fd7f. Jul 6 23:28:21.002992 containerd[1481]: time="2025-07-06T23:28:21.002797003Z" level=info msg="CreateContainer within sandbox \"4ab4dfd1bcd099553860ce4358f08b01a041ab65c3772c08dbc3926a9d16a2ad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"49a5562650f11d68bbd114ca315670bb2c0d3b697890c1c09aea6a6f122cb6d8\"" Jul 6 23:28:21.004212 containerd[1481]: time="2025-07-06T23:28:21.003468116Z" level=info msg="StartContainer for \"49a5562650f11d68bbd114ca315670bb2c0d3b697890c1c09aea6a6f122cb6d8\"" Jul 6 23:28:21.044726 systemd[1]: Started cri-containerd-49a5562650f11d68bbd114ca315670bb2c0d3b697890c1c09aea6a6f122cb6d8.scope - libcontainer container 49a5562650f11d68bbd114ca315670bb2c0d3b697890c1c09aea6a6f122cb6d8. Jul 6 23:28:21.050238 systemd[1]: Started cri-containerd-4007e0918ebe9bf979e6a342dc1ce71bbdc57688750a470af4350c6a48b9217f.scope - libcontainer container 4007e0918ebe9bf979e6a342dc1ce71bbdc57688750a470af4350c6a48b9217f. 
Jul 6 23:28:21.079959 containerd[1481]: time="2025-07-06T23:28:21.079894566Z" level=info msg="StartContainer for \"b47420ca127dfb7b9960a6daf349accb70975a5923fd17adf4532c123f95fd7f\" returns successfully" Jul 6 23:28:21.122959 kubelet[2288]: E0706 23:28:21.122883 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="6.4s" Jul 6 23:28:21.162536 containerd[1481]: time="2025-07-06T23:28:21.162458332Z" level=info msg="StartContainer for \"49a5562650f11d68bbd114ca315670bb2c0d3b697890c1c09aea6a6f122cb6d8\" returns successfully" Jul 6 23:28:21.163033 containerd[1481]: time="2025-07-06T23:28:21.162586553Z" level=info msg="StartContainer for \"4007e0918ebe9bf979e6a342dc1ce71bbdc57688750a470af4350c6a48b9217f\" returns successfully" Jul 6 23:28:22.065763 kubelet[2288]: E0706 23:28:22.065717 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:22.066222 kubelet[2288]: E0706 23:28:22.065862 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:22.067677 kubelet[2288]: E0706 23:28:22.067639 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:22.067817 kubelet[2288]: E0706 23:28:22.067796 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:22.069252 kubelet[2288]: E0706 23:28:22.069226 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:22.069424 kubelet[2288]: E0706 23:28:22.069387 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:22.761982 kubelet[2288]: I0706 23:28:22.761685 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:22.983081 kubelet[2288]: I0706 23:28:22.983023 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:28:22.983081 kubelet[2288]: E0706 23:28:22.983067 2288 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:28:23.071784 kubelet[2288]: E0706 23:28:23.071644 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:23.072265 kubelet[2288]: E0706 23:28:23.071791 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:23.072265 kubelet[2288]: E0706 23:28:23.071794 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:23.072265 kubelet[2288]: E0706 23:28:23.071892 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:28:23.072265 kubelet[2288]: E0706 23:28:23.071934 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:23.072265 kubelet[2288]: E0706 23:28:23.071983 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:23.117089 kubelet[2288]: I0706 23:28:23.117008 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:23.228645 kubelet[2288]: E0706 23:28:23.228558 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:23.228645 kubelet[2288]: I0706 23:28:23.228603 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:23.230350 kubelet[2288]: E0706 23:28:23.230306 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:23.230350 kubelet[2288]: I0706 23:28:23.230331 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:23.231748 kubelet[2288]: E0706 23:28:23.231720 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:23.896782 kubelet[2288]: I0706 23:28:23.896713 2288 apiserver.go:52] "Watching apiserver" Jul 6 23:28:23.915713 kubelet[2288]: I0706 23:28:23.915672 2288 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:24.072093 kubelet[2288]: I0706 23:28:24.072036 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:24.072560 kubelet[2288]: I0706 23:28:24.072278 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:24.248120 kubelet[2288]: E0706 23:28:24.247478 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:24.248120 kubelet[2288]: E0706 23:28:24.247914 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:25.073926 kubelet[2288]: E0706 23:28:25.073887 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:25.074357 kubelet[2288]: E0706 23:28:25.073963 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:25.582194 kubelet[2288]: I0706 23:28:25.581953 2288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.581924007 podStartE2EDuration="1.581924007s" podCreationTimestamp="2025-07-06 23:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:25.402502426 +0000 UTC m=+10.870785885" watchObservedRunningTime="2025-07-06 23:28:25.581924007 +0000 UTC m=+11.050207466" Jul 6 23:28:29.748626 kubelet[2288]: I0706 23:28:29.748566 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:29.907103 kubelet[2288]: I0706 23:28:29.907017 2288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.90696673 podStartE2EDuration="5.90696673s" podCreationTimestamp="2025-07-06 23:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:25.585377393 +0000 UTC m=+11.053660852" watchObservedRunningTime="2025-07-06 23:28:29.90696673 +0000 UTC m=+15.375250189" Jul 6 23:28:29.907477 kubelet[2288]: E0706 23:28:29.907447 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:30.074466 kubelet[2288]: E0706 23:28:30.074316 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:30.081815 kubelet[2288]: E0706 23:28:30.081779 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:30.210653 kubelet[2288]: E0706 23:28:30.210577 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:30.392890 systemd[1]: Reload requested from client PID 2571 ('systemctl') (unit session-9.scope)... Jul 6 23:28:30.392908 systemd[1]: Reloading... Jul 6 23:28:30.513559 zram_generator::config[2621]: No configuration found. Jul 6 23:28:30.715388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:28:30.855206 systemd[1]: Reloading finished in 461 ms. Jul 6 23:28:30.880171 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:30.904034 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:28:30.904442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:30.904529 systemd[1]: kubelet.service: Consumed 1.524s CPU time, 136.3M memory peak. Jul 6 23:28:30.910612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:28:31.105815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:28:31.110498 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:28:31.157210 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:31.157210 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:28:31.157210 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:28:31.157210 kubelet[2660]: I0706 23:28:31.156776 2660 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:28:31.166638 kubelet[2660]: I0706 23:28:31.166578 2660 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:28:31.166638 kubelet[2660]: I0706 23:28:31.166615 2660 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:28:31.167036 kubelet[2660]: I0706 23:28:31.167011 2660 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:28:31.168480 kubelet[2660]: I0706 23:28:31.168447 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:28:31.173789 kubelet[2660]: I0706 23:28:31.173718 2660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:28:31.177951 kubelet[2660]: E0706 23:28:31.177913 2660 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:28:31.177951 kubelet[2660]: I0706 23:28:31.177946 2660 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:28:31.183644 kubelet[2660]: I0706 23:28:31.183598 2660 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:28:31.183926 kubelet[2660]: I0706 23:28:31.183878 2660 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:28:31.184105 kubelet[2660]: I0706 23:28:31.183914 2660 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:28:31.184183 kubelet[2660]: I0706 23:28:31.184107 2660 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:28:31.184183 kubelet[2660]: I0706 23:28:31.184116 2660 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:28:31.184183 kubelet[2660]: I0706 23:28:31.184179 2660 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:31.184405 kubelet[2660]: I0706 23:28:31.184370 2660 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:28:31.184456 kubelet[2660]: I0706 23:28:31.184437 2660 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:28:31.184495 kubelet[2660]: I0706 23:28:31.184462 2660 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:28:31.184517 kubelet[2660]: I0706 23:28:31.184496 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:28:31.186618 kubelet[2660]: I0706 23:28:31.186559 2660 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:28:31.187218 kubelet[2660]: I0706 23:28:31.186998 2660 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:28:31.188077 kubelet[2660]: I0706 23:28:31.188048 2660 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:28:31.188147 kubelet[2660]: I0706 23:28:31.188096 2660 server.go:1287] "Started kubelet" Jul 6 23:28:31.189051 kubelet[2660]: I0706 23:28:31.189001 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:28:31.189330 kubelet[2660]: I0706 23:28:31.189316 2660 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:28:31.189385 kubelet[2660]: I0706 23:28:31.189365 2660 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:28:31.190266 kubelet[2660]: I0706 23:28:31.190243 2660 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:28:31.194571 kubelet[2660]: I0706 23:28:31.194536 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:28:31.196929 kubelet[2660]: I0706 23:28:31.196894 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:28:31.200311 kubelet[2660]: E0706 23:28:31.200131 2660 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:28:31.200311 kubelet[2660]: I0706 23:28:31.200286 2660 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:28:31.201949 kubelet[2660]: I0706 23:28:31.201892 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:28:31.204241 kubelet[2660]: I0706 23:28:31.204212 2660 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:28:31.204376 kubelet[2660]: I0706 23:28:31.204328 2660 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:28:31.204615 kubelet[2660]: I0706 23:28:31.204598 2660 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:28:31.204615 kubelet[2660]: I0706 23:28:31.204612 2660 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:28:31.213672 kubelet[2660]: I0706 23:28:31.213623 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:28:31.214923 kubelet[2660]: I0706 23:28:31.214897 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:28:31.214981 kubelet[2660]: I0706 23:28:31.214928 2660 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:28:31.214981 kubelet[2660]: I0706 23:28:31.214952 2660 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:28:31.214981 kubelet[2660]: I0706 23:28:31.214959 2660 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:28:31.215056 kubelet[2660]: E0706 23:28:31.215011 2660 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:31.315465 kubelet[2660]: E0706 23:28:31.315360 2660 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:31.516342 kubelet[2660]: E0706 23:28:31.516283 2660 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:28:31.887605 kubelet[2660]: E0706 23:28:31.887350 2660 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:28:31.916736 kubelet[2660]: E0706 23:28:31.916502 2660 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:28:31.929917 kubelet[2660]: I0706 23:28:31.929881 2660 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:28:31.929917 kubelet[2660]: I0706 23:28:31.929905 2660 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:28:31.929917 kubelet[2660]: I0706 23:28:31.929929 2660 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:28:31.930213 kubelet[2660]: I0706 23:28:31.930192 2660 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:28:31.930242 kubelet[2660]: I0706 23:28:31.930210 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:28:31.930242 kubelet[2660]: I0706 23:28:31.930233 2660 policy_none.go:49] "None policy: Start" Jul 6 23:28:31.930300 kubelet[2660]: I0706 23:28:31.930249 2660 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:28:31.930300 kubelet[2660]: I0706 23:28:31.930262 2660 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:28:31.930435 kubelet[2660]: I0706 23:28:31.930390 2660 state_mem.go:75] "Updated machine memory state" Jul 6 23:28:31.935879 kubelet[2660]: I0706 23:28:31.935834 2660 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:28:31.936130 kubelet[2660]: I0706 23:28:31.936085 2660 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:28:31.936343 kubelet[2660]: I0706 23:28:31.936110 2660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:28:31.938761 kubelet[2660]: I0706 23:28:31.938732 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:28:31.939048 kubelet[2660]: E0706 23:28:31.939014 2660 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:28:32.006040 sudo[2695]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:28:32.006433 sudo[2695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:28:32.047845 kubelet[2660]: I0706 23:28:32.047783 2660 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:28:32.185981 kubelet[2660]: I0706 23:28:32.185930 2660 apiserver.go:52] "Watching apiserver" Jul 6 23:28:32.206625 kubelet[2660]: I0706 23:28:32.206577 2660 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:28:32.206895 kubelet[2660]: I0706 23:28:32.206704 2660 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:28:32.530677 sudo[2695]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:32.717544 kubelet[2660]: I0706 23:28:32.717293 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:32.717725 kubelet[2660]: I0706 23:28:32.717679 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:32.804991 kubelet[2660]: I0706 23:28:32.804849 2660 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:28:32.814012 kubelet[2660]: I0706 23:28:32.813944 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:32.814012 kubelet[2660]: I0706 23:28:32.814000 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:32.814131 kubelet[2660]: I0706 23:28:32.814025 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:32.814131 kubelet[2660]: I0706 23:28:32.814047 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:32.814131 kubelet[2660]: I0706 23:28:32.814083 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:32.814131 kubelet[2660]: I0706 23:28:32.814113 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2a3a5fcb966b445c700edd58e67e246-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b2a3a5fcb966b445c700edd58e67e246\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:32.814131 kubelet[2660]: I0706 23:28:32.814131 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2a3a5fcb966b445c700edd58e67e246-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b2a3a5fcb966b445c700edd58e67e246\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:32.814269 kubelet[2660]: I0706 23:28:32.814148 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:28:32.814269 kubelet[2660]: I0706 23:28:32.814167 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2a3a5fcb966b445c700edd58e67e246-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b2a3a5fcb966b445c700edd58e67e246\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:32.820806 kubelet[2660]: E0706 23:28:32.820381 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:28:32.820806 kubelet[2660]: E0706 23:28:32.820769 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:28:32.949210 kubelet[2660]: I0706 23:28:32.949097 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.949072996 podStartE2EDuration="3.949072996s" podCreationTimestamp="2025-07-06 23:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:32.832990957 +0000 UTC m=+1.718262610" watchObservedRunningTime="2025-07-06 23:28:32.949072996 +0000 UTC m=+1.834344429" Jul 6 23:28:33.018911 kubelet[2660]: E0706 23:28:33.018858 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:33.121830 kubelet[2660]: E0706 23:28:33.121200 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:33.121830 kubelet[2660]: E0706 23:28:33.121224 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:33.908054 kubelet[2660]: E0706 23:28:33.907631 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:33.908054 kubelet[2660]: E0706 23:28:33.907745 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:34.120523 sudo[1687]: pam_unix(sudo:session): session closed for user root Jul 6 23:28:34.123983 sshd[1686]: Connection closed by 10.0.0.1 port 50906 Jul 6 23:28:34.125172 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Jul 6 23:28:34.133278 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:50906.service: Deactivated successfully. Jul 6 23:28:34.138685 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:28:34.139311 systemd[1]: session-9.scope: Consumed 5.557s CPU time, 249.9M memory peak. Jul 6 23:28:34.142935 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:28:34.145184 systemd-logind[1464]: Removed session 9. Jul 6 23:28:34.735777 kubelet[2660]: I0706 23:28:34.735712 2660 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:28:34.736474 containerd[1481]: time="2025-07-06T23:28:34.736377673Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:28:34.736969 kubelet[2660]: I0706 23:28:34.736653 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:28:34.909507 kubelet[2660]: E0706 23:28:34.909453 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:34.910169 kubelet[2660]: E0706 23:28:34.909545 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:35.390195 systemd[1]: Created slice kubepods-besteffort-pod0e38b7fd_f3dd_4737_8f2b_961a9dbe0db4.slice - libcontainer container kubepods-besteffort-pod0e38b7fd_f3dd_4737_8f2b_961a9dbe0db4.slice. Jul 6 23:28:35.408977 systemd[1]: Created slice kubepods-burstable-pod01e4dbbb_86de_4e8f_ad9f_178fa607eaf0.slice - libcontainer container kubepods-burstable-pod01e4dbbb_86de_4e8f_ad9f_178fa607eaf0.slice. 
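[annotation] The recurring dns.go:153 "Nameserver limits exceeded" error above comes from kubelet's resolv.conf handling: glibc-style resolvers only honor the first three nameserver entries, so kubelet trims the list and logs the applied remainder (here 1.1.1.1, 1.0.0.1, 8.8.8.8). Below is a minimal Go sketch of that check; the file and function names are our own, and only the 3-entry limit mirrors kubelet.

```go
// nameserver_limit.go — a sketch of the check behind kubelet's
// "Nameserver limits exceeded" warning. Only the 3-nameserver cap is
// taken from kubelet (which inherits it from the glibc resolver).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolvers honor only the first three "nameserver" lines

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// kubelet logs the trimmed list, e.g. "1.1.1.1 1.0.0.1 8.8.8.8"
		fmt.Printf("nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```

The warning is harmless but repeats on every pod DNS setup until the host's resolv.conf is reduced to three nameservers.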
Jul 6 23:28:35.432006 kubelet[2660]: I0706 23:28:35.431942 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkd6\" (UniqueName: \"kubernetes.io/projected/0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4-kube-api-access-skkd6\") pod \"kube-proxy-77p9j\" (UID: \"0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4\") " pod="kube-system/kube-proxy-77p9j" Jul 6 23:28:35.432006 kubelet[2660]: I0706 23:28:35.431986 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-lib-modules\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432006 kubelet[2660]: I0706 23:28:35.432006 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-kernel\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432006 kubelet[2660]: I0706 23:28:35.432022 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-net\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432275 kubelet[2660]: I0706 23:28:35.432038 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4-lib-modules\") pod \"kube-proxy-77p9j\" (UID: \"0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4\") " pod="kube-system/kube-proxy-77p9j" Jul 6 23:28:35.432275 kubelet[2660]: I0706 23:28:35.432057 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-cgroup\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432275 kubelet[2660]: I0706 23:28:35.432073 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hostproc\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432275 kubelet[2660]: I0706 23:28:35.432109 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-etc-cni-netd\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432275 kubelet[2660]: I0706 23:28:35.432125 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qfr\" (UniqueName: \"kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-kube-api-access-x8qfr\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432275 kubelet[2660]: I0706 23:28:35.432184 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cni-path\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432482 kubelet[2660]: I0706 23:28:35.432270 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4-kube-proxy\") pod \"kube-proxy-77p9j\" (UID: \"0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4\") " pod="kube-system/kube-proxy-77p9j" Jul 6 23:28:35.432482 kubelet[2660]: I0706 23:28:35.432309 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-bpf-maps\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432482 kubelet[2660]: I0706 23:28:35.432327 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hubble-tls\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432482 kubelet[2660]: I0706 23:28:35.432344 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-run\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432482 kubelet[2660]: I0706 23:28:35.432362 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-xtables-lock\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432482 kubelet[2660]: I0706 23:28:35.432379 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-config-path\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432696 kubelet[2660]: I0706 23:28:35.432415 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-clustermesh-secrets\") pod \"cilium-qhdqn\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " pod="kube-system/cilium-qhdqn" Jul 6 23:28:35.432696 kubelet[2660]: I0706 23:28:35.432431 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4-xtables-lock\") pod \"kube-proxy-77p9j\" (UID: \"0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4\") " pod="kube-system/kube-proxy-77p9j" Jul 6 23:28:35.910668 kubelet[2660]: E0706 23:28:35.910626 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:35.929042 systemd[1]: Created slice kubepods-besteffort-pod1295b88d_b598_4d9a_acc3_5414fd8a7322.slice - libcontainer 
container kubepods-besteffort-pod1295b88d_b598_4d9a_acc3_5414fd8a7322.slice. Jul 6 23:28:35.935834 kubelet[2660]: I0706 23:28:35.935767 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v42v\" (UniqueName: \"kubernetes.io/projected/1295b88d-b598-4d9a-acc3-5414fd8a7322-kube-api-access-2v42v\") pod \"cilium-operator-6c4d7847fc-rpv8q\" (UID: \"1295b88d-b598-4d9a-acc3-5414fd8a7322\") " pod="kube-system/cilium-operator-6c4d7847fc-rpv8q" Jul 6 23:28:35.935834 kubelet[2660]: I0706 23:28:35.935850 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1295b88d-b598-4d9a-acc3-5414fd8a7322-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rpv8q\" (UID: \"1295b88d-b598-4d9a-acc3-5414fd8a7322\") " pod="kube-system/cilium-operator-6c4d7847fc-rpv8q" Jul 6 23:28:36.002562 kubelet[2660]: E0706 23:28:36.002472 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.004802 containerd[1481]: time="2025-07-06T23:28:36.004530644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77p9j,Uid:0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:36.014151 kubelet[2660]: E0706 23:28:36.014114 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.014742 containerd[1481]: time="2025-07-06T23:28:36.014692788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhdqn,Uid:01e4dbbb-86de-4e8f-ad9f-178fa607eaf0,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:36.404889 containerd[1481]: time="2025-07-06T23:28:36.404602319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:28:36.404889 containerd[1481]: time="2025-07-06T23:28:36.404667732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:28:36.404889 containerd[1481]: time="2025-07-06T23:28:36.404679394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:36.404889 containerd[1481]: time="2025-07-06T23:28:36.404775834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:36.414582 containerd[1481]: time="2025-07-06T23:28:36.414424826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:28:36.414582 containerd[1481]: time="2025-07-06T23:28:36.414485290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:28:36.414582 containerd[1481]: time="2025-07-06T23:28:36.414496060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:36.414777 containerd[1481]: time="2025-07-06T23:28:36.414595586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:36.429622 systemd[1]: Started cri-containerd-ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf.scope - libcontainer container ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf. Jul 6 23:28:36.436841 systemd[1]: Started cri-containerd-e71d6041e4ded26772298436af741d3e1a161fab87b1d18ec41ffbbc60cec367.scope - libcontainer container e71d6041e4ded26772298436af741d3e1a161fab87b1d18ec41ffbbc60cec367. Jul 6 23:28:36.464262 containerd[1481]: time="2025-07-06T23:28:36.462765621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qhdqn,Uid:01e4dbbb-86de-4e8f-ad9f-178fa607eaf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\"" Jul 6 23:28:36.464815 kubelet[2660]: E0706 23:28:36.464782 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.468012 containerd[1481]: time="2025-07-06T23:28:36.467938215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:28:36.472954 containerd[1481]: time="2025-07-06T23:28:36.472905755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77p9j,Uid:0e38b7fd-f3dd-4737-8f2b-961a9dbe0db4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e71d6041e4ded26772298436af741d3e1a161fab87b1d18ec41ffbbc60cec367\"" Jul 6 23:28:36.473699 kubelet[2660]: E0706 23:28:36.473677 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.476694 containerd[1481]: time="2025-07-06T23:28:36.476636010Z" level=info msg="CreateContainer within sandbox \"e71d6041e4ded26772298436af741d3e1a161fab87b1d18ec41ffbbc60cec367\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:28:36.532426 kubelet[2660]: E0706 23:28:36.532364 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.533008 containerd[1481]: time="2025-07-06T23:28:36.532965017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rpv8q,Uid:1295b88d-b598-4d9a-acc3-5414fd8a7322,Namespace:kube-system,Attempt:0,}" Jul 6 23:28:36.584921 containerd[1481]: time="2025-07-06T23:28:36.584856201Z" level=info msg="CreateContainer within sandbox \"e71d6041e4ded26772298436af741d3e1a161fab87b1d18ec41ffbbc60cec367\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fefc8d7dba3c58eed7bfbe2a1109e03e8a210f6a2ee6e839fc394f9b79225ee8\"" Jul 6 23:28:36.587360 containerd[1481]: time="2025-07-06T23:28:36.585682803Z" level=info msg="StartContainer for \"fefc8d7dba3c58eed7bfbe2a1109e03e8a210f6a2ee6e839fc394f9b79225ee8\"" Jul 6 23:28:36.618682 systemd[1]: Started cri-containerd-fefc8d7dba3c58eed7bfbe2a1109e03e8a210f6a2ee6e839fc394f9b79225ee8.scope - libcontainer container fefc8d7dba3c58eed7bfbe2a1109e03e8a210f6a2ee6e839fc394f9b79225ee8. Jul 6 23:28:36.639207 containerd[1481]: time="2025-07-06T23:28:36.638853950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:28:36.639207 containerd[1481]: time="2025-07-06T23:28:36.639000094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:28:36.639207 containerd[1481]: time="2025-07-06T23:28:36.639030411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:36.640204 containerd[1481]: time="2025-07-06T23:28:36.640067829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:28:36.661149 containerd[1481]: time="2025-07-06T23:28:36.661006493Z" level=info msg="StartContainer for \"fefc8d7dba3c58eed7bfbe2a1109e03e8a210f6a2ee6e839fc394f9b79225ee8\" returns successfully" Jul 6 23:28:36.674713 systemd[1]: Started cri-containerd-750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745.scope - libcontainer container 750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745. Jul 6 23:28:36.726865 containerd[1481]: time="2025-07-06T23:28:36.726591965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rpv8q,Uid:1295b88d-b598-4d9a-acc3-5414fd8a7322,Namespace:kube-system,Attempt:0,} returns sandbox id \"750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745\"" Jul 6 23:28:36.727537 kubelet[2660]: E0706 23:28:36.727284 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.915241 kubelet[2660]: E0706 23:28:36.915068 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:36.918487 kubelet[2660]: E0706 23:28:36.918454 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:37.220561 kubelet[2660]: E0706 23:28:37.219202 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:37.360370 kubelet[2660]: I0706 23:28:37.360288 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-77p9j" podStartSLOduration=2.360252809 podStartE2EDuration="2.360252809s" podCreationTimestamp="2025-07-06 23:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:28:36.9383171 +0000 UTC m=+5.823588513" watchObservedRunningTime="2025-07-06 23:28:37.360252809 +0000 UTC m=+6.245524222" Jul 6 23:28:37.503930 kubelet[2660]: E0706 23:28:37.503686 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:37.919482 kubelet[2660]: E0706 23:28:37.919437 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:46.917159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973585063.mount: Deactivated successfully. 
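[annotation] The RunPodSandbox → CreateContainer → StartContainer sequence logged above for kube-proxy-77p9j is the CRI contract kubelet drives over gRPC against containerd. A hedged sketch of the same three calls, made by hand, follows; the socket path, metadata, and image reference are assumptions for illustration, not values taken from this host.

```go
// cri_flow.go — a sketch of the CRI call sequence visible in the log,
// issued directly instead of by kubelet. Socket path, pod metadata, and
// the kube-proxy image tag are assumptions.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-77p9j", Namespace: "kube-system", Uid: "demo-uid",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("StartContainer for %q returned successfully", cc.ContainerId)
}
```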
Jul 6 23:28:49.653989 kubelet[2660]: E0706 23:28:49.653884 2660 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.439s" Jul 6 23:28:49.655638 kubelet[2660]: E0706 23:28:49.654830 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:55.011612 containerd[1481]: time="2025-07-06T23:28:55.011532575Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:55.014136 containerd[1481]: time="2025-07-06T23:28:55.014095322Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:28:55.015814 containerd[1481]: time="2025-07-06T23:28:55.015786826Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:55.017606 containerd[1481]: time="2025-07-06T23:28:55.017548972Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.549342762s" Jul 6 23:28:55.017606 containerd[1481]: time="2025-07-06T23:28:55.017604656Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:28:55.027322 containerd[1481]: time="2025-07-06T23:28:55.027242676Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:28:55.030112 containerd[1481]: time="2025-07-06T23:28:55.030063317Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:28:55.046484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217642383.mount: Deactivated successfully. Jul 6 23:28:55.050566 containerd[1481]: time="2025-07-06T23:28:55.050521414Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\"" Jul 6 23:28:55.051980 containerd[1481]: time="2025-07-06T23:28:55.050991075Z" level=info msg="StartContainer for \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\"" Jul 6 23:28:55.092608 systemd[1]: Started cri-containerd-d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb.scope - libcontainer container d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb. 
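[annotation] The ~18.5s pull of quay.io/cilium/cilium recorded above fetches by digest, which pins exact content regardless of tag moves. A sketch of the same pull via the containerd Go client follows; the socket path and the "k8s.io" namespace match typical kubelet/containerd setups but are assumptions here.

```go
// pull_by_digest.go — a sketch of the image pull the log records,
// performed with the containerd client directly.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI stores Kubernetes images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}
```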
Jul 6 23:28:55.120623 containerd[1481]: time="2025-07-06T23:28:55.120578483Z" level=info msg="StartContainer for \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\" returns successfully" Jul 6 23:28:55.132743 systemd[1]: cri-containerd-d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb.scope: Deactivated successfully. Jul 6 23:28:55.153009 kubelet[2660]: E0706 23:28:55.152421 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:55.706476 containerd[1481]: time="2025-07-06T23:28:55.706248282Z" level=info msg="shim disconnected" id=d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb namespace=k8s.io Jul 6 23:28:55.706476 containerd[1481]: time="2025-07-06T23:28:55.706334594Z" level=warning msg="cleaning up after shim disconnected" id=d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb namespace=k8s.io Jul 6 23:28:55.706476 containerd[1481]: time="2025-07-06T23:28:55.706346016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:28:56.044197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb-rootfs.mount: Deactivated successfully. Jul 6 23:28:56.155788 kubelet[2660]: E0706 23:28:56.155744 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:56.157845 containerd[1481]: time="2025-07-06T23:28:56.157797709Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:28:56.184348 containerd[1481]: time="2025-07-06T23:28:56.184250605Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\"" Jul 6 23:28:56.185140 containerd[1481]: time="2025-07-06T23:28:56.185086565Z" level=info msg="StartContainer for \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\"" Jul 6 23:28:56.219595 systemd[1]: Started cri-containerd-92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494.scope - libcontainer container 92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494. Jul 6 23:28:56.249906 containerd[1481]: time="2025-07-06T23:28:56.249846382Z" level=info msg="StartContainer for \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\" returns successfully" Jul 6 23:28:56.270597 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:28:56.271258 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:28:56.271503 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:28:56.279998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:28:56.280354 systemd[1]: cri-containerd-92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494.scope: Deactivated successfully. 
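[annotation] The mount-cgroup container above, and the apply-sysctl-overwrites one that follows, are cilium's init containers: each runs once, exits, and its scope is deactivated and shim cleaned up, which is what the "shim disconnected" lines record. apply-sysctl-overwrites adjusts kernel parameters by writing /proc/sys (hence the systemd-sysctl restart seen later). A minimal sketch of that mechanism; the specific key/value is an illustrative assumption, not a dump of cilium's list.

```go
// sysctl_overwrite.go — a minimal sketch of what an init container like
// cilium's apply-sysctl-overwrites does: write kernel parameters via
// /proc/sys. Disabling reverse-path filtering here is an illustrative
// assumption.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func setSysctl(key, value string) error {
	// "net.ipv4.conf.all.rp_filter" -> /proc/sys/net/ipv4/conf/all/rp_filter
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err) // needs root and a writable /proc/sys
	}
}
```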
Jul 6 23:28:56.307787 containerd[1481]: time="2025-07-06T23:28:56.307715978Z" level=info msg="shim disconnected" id=92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494 namespace=k8s.io Jul 6 23:28:56.307787 containerd[1481]: time="2025-07-06T23:28:56.307775654Z" level=warning msg="cleaning up after shim disconnected" id=92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494 namespace=k8s.io Jul 6 23:28:56.307787 containerd[1481]: time="2025-07-06T23:28:56.307787826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:28:56.309284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:28:57.045032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494-rootfs.mount: Deactivated successfully. Jul 6 23:28:57.158663 kubelet[2660]: E0706 23:28:57.158616 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:57.161083 containerd[1481]: time="2025-07-06T23:28:57.161030068Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:28:57.618076 containerd[1481]: time="2025-07-06T23:28:57.618004865Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\"" Jul 6 23:28:57.618560 containerd[1481]: time="2025-07-06T23:28:57.618527185Z" level=info msg="StartContainer for \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\"" Jul 6 23:28:57.620093 containerd[1481]: time="2025-07-06T23:28:57.620047967Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:57.621214 containerd[1481]: time="2025-07-06T23:28:57.621140752Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:28:57.622827 containerd[1481]: time="2025-07-06T23:28:57.622730748Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:28:57.624213 containerd[1481]: time="2025-07-06T23:28:57.624166837Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.596884857s" Jul 6 23:28:57.624213 containerd[1481]: time="2025-07-06T23:28:57.624202566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:28:57.627021 containerd[1481]: time="2025-07-06T23:28:57.626982795Z" level=info 
msg="CreateContainer within sandbox \"750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:28:57.644930 containerd[1481]: time="2025-07-06T23:28:57.644774148Z" level=info msg="CreateContainer within sandbox \"750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\"" Jul 6 23:28:57.645601 containerd[1481]: time="2025-07-06T23:28:57.645539168Z" level=info msg="StartContainer for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\"" Jul 6 23:28:57.652641 systemd[1]: Started cri-containerd-e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403.scope - libcontainer container e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403. Jul 6 23:28:57.684592 systemd[1]: Started cri-containerd-cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0.scope - libcontainer container cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0. Jul 6 23:28:57.704825 systemd[1]: cri-containerd-e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403.scope: Deactivated successfully. Jul 6 23:28:58.049207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403-rootfs.mount: Deactivated successfully. Jul 6 23:28:58.414497 containerd[1481]: time="2025-07-06T23:28:58.414426361Z" level=info msg="StartContainer for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" returns successfully" Jul 6 23:28:58.415098 containerd[1481]: time="2025-07-06T23:28:58.414492198Z" level=info msg="StartContainer for \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\" returns successfully" Jul 6 23:28:58.426462 containerd[1481]: time="2025-07-06T23:28:58.423759643Z" level=info msg="shim disconnected" id=e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403 namespace=k8s.io Jul 6 23:28:58.426462 containerd[1481]: time="2025-07-06T23:28:58.423810290Z" level=warning msg="cleaning up after shim disconnected" id=e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403 namespace=k8s.io Jul 6 23:28:58.426462 containerd[1481]: time="2025-07-06T23:28:58.423819589Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:28:58.430558 kubelet[2660]: E0706 23:28:58.430521 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:59.433977 kubelet[2660]: E0706 23:28:59.433931 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:59.435992 containerd[1481]: time="2025-07-06T23:28:59.435831831Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:28:59.436327 kubelet[2660]: E0706 23:28:59.435892 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:28:59.579472 kubelet[2660]: I0706 23:28:59.577360 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-6c4d7847fc-rpv8q" podStartSLOduration=3.6814387650000002 podStartE2EDuration="24.577326996s" podCreationTimestamp="2025-07-06 23:28:35 +0000 UTC" firstStartedPulling="2025-07-06 23:28:36.729258312 +0000 UTC m=+5.614529725" lastFinishedPulling="2025-07-06 23:28:57.625146542 +0000 UTC m=+26.510417956" observedRunningTime="2025-07-06 23:28:59.577017037 +0000 UTC m=+28.462288450" watchObservedRunningTime="2025-07-06 23:28:59.577326996 +0000 UTC m=+28.462598409" Jul 6 23:28:59.583501 containerd[1481]: time="2025-07-06T23:28:59.583439682Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\"" Jul 6 23:28:59.584483 containerd[1481]: time="2025-07-06T23:28:59.584332317Z" level=info msg="StartContainer for \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\"" Jul 6 23:28:59.647679 systemd[1]: Started cri-containerd-948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0.scope - libcontainer container 948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0. Jul 6 23:28:59.674733 systemd[1]: cri-containerd-948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0.scope: Deactivated successfully. Jul 6 23:28:59.706463 containerd[1481]: time="2025-07-06T23:28:59.706258402Z" level=info msg="StartContainer for \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\" returns successfully" Jul 6 23:28:59.730725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0-rootfs.mount: Deactivated successfully. 
Jul 6 23:28:59.735318 containerd[1481]: time="2025-07-06T23:28:59.735224712Z" level=info msg="shim disconnected" id=948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0 namespace=k8s.io Jul 6 23:28:59.735318 containerd[1481]: time="2025-07-06T23:28:59.735301640Z" level=warning msg="cleaning up after shim disconnected" id=948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0 namespace=k8s.io Jul 6 23:28:59.735318 containerd[1481]: time="2025-07-06T23:28:59.735315277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:29:00.437866 kubelet[2660]: E0706 23:29:00.437827 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:00.439222 kubelet[2660]: E0706 23:29:00.439164 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:00.443499 containerd[1481]: time="2025-07-06T23:29:00.443451426Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:29:01.273066 containerd[1481]: time="2025-07-06T23:29:01.272906648Z" level=info msg="CreateContainer within sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\"" Jul 6 23:29:01.273549 containerd[1481]: time="2025-07-06T23:29:01.273520472Z" level=info msg="StartContainer for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\"" Jul 6 23:29:01.303537 systemd[1]: Started cri-containerd-43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a.scope - libcontainer container 43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a. Jul 6 23:29:01.594699 containerd[1481]: time="2025-07-06T23:29:01.594462696Z" level=info msg="StartContainer for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" returns successfully" Jul 6 23:29:01.756005 kubelet[2660]: I0706 23:29:01.755959 2660 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:29:02.276879 systemd[1]: Created slice kubepods-burstable-podb1e718d0_fce3_4bbb_b9a7_1249c4dd5d16.slice - libcontainer container kubepods-burstable-podb1e718d0_fce3_4bbb_b9a7_1249c4dd5d16.slice. Jul 6 23:29:02.298506 systemd[1]: Created slice kubepods-burstable-pod4edc9f5c_a8f1_4a7a_84aa_5b26fecda32e.slice - libcontainer container kubepods-burstable-pod4edc9f5c_a8f1_4a7a_84aa_5b26fecda32e.slice. 
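[annotation] Once the cilium-agent container starts, kubelet logs "Fast updating node status as it just became ready" and flips the node's Ready condition, which is what lets the two coredns pods below finally schedule. A hedged client-go sketch of inspecting that condition; the kubeconfig path is an assumption for a lab host like this one.

```go
// node_ready.go — a sketch of checking the NodeReady condition kubelet
// just flipped, via client-go. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
		}
	}
}
```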
Jul 6 23:29:02.301676 kubelet[2660]: I0706 23:29:02.301644 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1e718d0-fce3-4bbb-b9a7-1249c4dd5d16-config-volume\") pod \"coredns-668d6bf9bc-xrv7n\" (UID: \"b1e718d0-fce3-4bbb-b9a7-1249c4dd5d16\") " pod="kube-system/coredns-668d6bf9bc-xrv7n" Jul 6 23:29:02.301832 kubelet[2660]: I0706 23:29:02.301685 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2lc\" (UniqueName: \"kubernetes.io/projected/b1e718d0-fce3-4bbb-b9a7-1249c4dd5d16-kube-api-access-5f2lc\") pod \"coredns-668d6bf9bc-xrv7n\" (UID: \"b1e718d0-fce3-4bbb-b9a7-1249c4dd5d16\") " pod="kube-system/coredns-668d6bf9bc-xrv7n" Jul 6 23:29:02.403218 kubelet[2660]: I0706 23:29:02.402793 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4edc9f5c-a8f1-4a7a-84aa-5b26fecda32e-config-volume\") pod \"coredns-668d6bf9bc-hgkcq\" (UID: \"4edc9f5c-a8f1-4a7a-84aa-5b26fecda32e\") " pod="kube-system/coredns-668d6bf9bc-hgkcq" Jul 6 23:29:02.403218 kubelet[2660]: I0706 23:29:02.402840 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnh2\" (UniqueName: \"kubernetes.io/projected/4edc9f5c-a8f1-4a7a-84aa-5b26fecda32e-kube-api-access-dxnh2\") pod \"coredns-668d6bf9bc-hgkcq\" (UID: \"4edc9f5c-a8f1-4a7a-84aa-5b26fecda32e\") " pod="kube-system/coredns-668d6bf9bc-hgkcq" Jul 6 23:29:02.580425 kubelet[2660]: E0706 23:29:02.580193 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:02.581380 containerd[1481]: time="2025-07-06T23:29:02.581329668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrv7n,Uid:b1e718d0-fce3-4bbb-b9a7-1249c4dd5d16,Namespace:kube-system,Attempt:0,}" Jul 6 23:29:02.601691 kubelet[2660]: E0706 23:29:02.601626 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:02.602193 containerd[1481]: time="2025-07-06T23:29:02.602135892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hgkcq,Uid:4edc9f5c-a8f1-4a7a-84aa-5b26fecda32e,Namespace:kube-system,Attempt:0,}" Jul 6 23:29:02.603236 kubelet[2660]: E0706 23:29:02.602921 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:02.654316 kubelet[2660]: I0706 23:29:02.654223 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qhdqn" podStartSLOduration=9.094586 podStartE2EDuration="27.65420417s" podCreationTimestamp="2025-07-06 23:28:35 +0000 UTC" firstStartedPulling="2025-07-06 23:28:36.46746657 +0000 UTC m=+5.352737983" lastFinishedPulling="2025-07-06 23:28:55.02708474 +0000 UTC m=+23.912356153" observedRunningTime="2025-07-06 23:29:02.652921718 +0000 UTC m=+31.538193132" watchObservedRunningTime="2025-07-06 23:29:02.65420417 +0000 UTC m=+31.539475583" Jul 6 23:29:03.828442 systemd-networkd[1404]: cilium_host: Link UP Jul 6 23:29:03.828694 systemd-networkd[1404]: cilium_net: Link UP Jul 6 23:29:03.828989 
systemd-networkd[1404]: cilium_net: Gained carrier Jul 6 23:29:03.829282 systemd-networkd[1404]: cilium_host: Gained carrier Jul 6 23:29:03.939962 systemd-networkd[1404]: cilium_vxlan: Link UP Jul 6 23:29:03.939972 systemd-networkd[1404]: cilium_vxlan: Gained carrier Jul 6 23:29:04.017436 kubelet[2660]: E0706 23:29:04.015783 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:04.173430 kernel: NET: Registered PF_ALG protocol family Jul 6 23:29:04.328626 systemd-networkd[1404]: cilium_host: Gained IPv6LL Jul 6 23:29:04.584733 systemd-networkd[1404]: cilium_net: Gained IPv6LL Jul 6 23:29:04.971922 systemd-networkd[1404]: lxc_health: Link UP Jul 6 23:29:04.972251 systemd-networkd[1404]: lxc_health: Gained carrier Jul 6 23:29:05.234258 systemd-networkd[1404]: lxc97100e37cfcc: Link UP Jul 6 23:29:05.235442 kernel: eth0: renamed from tmpdd6f8 Jul 6 23:29:05.242896 systemd-networkd[1404]: lxc97100e37cfcc: Gained carrier Jul 6 23:29:05.269153 systemd-networkd[1404]: lxc7624ea1ae8f6: Link UP Jul 6 23:29:05.272462 kernel: eth0: renamed from tmpc483f Jul 6 23:29:05.286301 systemd-networkd[1404]: lxc7624ea1ae8f6: Gained carrier Jul 6 23:29:05.673588 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL Jul 6 23:29:06.016133 kubelet[2660]: E0706 23:29:06.015987 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:06.611122 kubelet[2660]: E0706 23:29:06.611025 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:06.952797 systemd-networkd[1404]: lxc_health: Gained IPv6LL Jul 6 23:29:07.016641 systemd-networkd[1404]: lxc7624ea1ae8f6: Gained IPv6LL Jul 6 23:29:07.144709 systemd-networkd[1404]: lxc97100e37cfcc: Gained IPv6LL Jul 6 23:29:07.613531 kubelet[2660]: E0706 23:29:07.613437 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:09.272484 containerd[1481]: time="2025-07-06T23:29:09.272359954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:29:09.272484 containerd[1481]: time="2025-07-06T23:29:09.272460386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:29:09.272484 containerd[1481]: time="2025-07-06T23:29:09.272474764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:29:09.272958 containerd[1481]: time="2025-07-06T23:29:09.272555328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:29:09.294548 systemd[1]: Started cri-containerd-c483f869a9aca57621ae36b7b97191fd03e5ba07d96719b097e24a205980a35d.scope - libcontainer container c483f869a9aca57621ae36b7b97191fd03e5ba07d96719b097e24a205980a35d. 
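
The systemd-networkd and kernel lines above trace Cilium's datapath coming up in its usual order: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, the lxc_health interface used for endpoint health checks, and then one host-side lxc* veth per pod, the kernel's "eth0: renamed from tmp..." messages being the container-side ends of those veths receiving their final name inside each pod's network namespace. The temporary names (tmpdd6f8, tmpc483f) appear to echo the leading characters of the two coredns sandbox IDs created just below; that reading follows Cilium's conventional interface naming, as the log itself records only the link events.
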
Jul 6 23:29:09.307028 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:29:09.332864 containerd[1481]: time="2025-07-06T23:29:09.332809358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hgkcq,Uid:4edc9f5c-a8f1-4a7a-84aa-5b26fecda32e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c483f869a9aca57621ae36b7b97191fd03e5ba07d96719b097e24a205980a35d\"" Jul 6 23:29:09.333552 kubelet[2660]: E0706 23:29:09.333523 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:09.335474 containerd[1481]: time="2025-07-06T23:29:09.335441245Z" level=info msg="CreateContainer within sandbox \"c483f869a9aca57621ae36b7b97191fd03e5ba07d96719b097e24a205980a35d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:09.356029 containerd[1481]: time="2025-07-06T23:29:09.355884656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:29:09.356653 containerd[1481]: time="2025-07-06T23:29:09.356609366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:29:09.356653 containerd[1481]: time="2025-07-06T23:29:09.356636028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:29:09.356753 containerd[1481]: time="2025-07-06T23:29:09.356719738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:29:09.383675 systemd[1]: Started cri-containerd-dd6f829a553aa5affd2a9d4c1827e82975fcf28c0d3ad794975469f8fbbaa190.scope - libcontainer container dd6f829a553aa5affd2a9d4c1827e82975fcf28c0d3ad794975469f8fbbaa190. Jul 6 23:29:09.399228 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:29:09.427765 containerd[1481]: time="2025-07-06T23:29:09.427706823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrv7n,Uid:b1e718d0-fce3-4bbb-b9a7-1249c4dd5d16,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd6f829a553aa5affd2a9d4c1827e82975fcf28c0d3ad794975469f8fbbaa190\"" Jul 6 23:29:09.428559 kubelet[2660]: E0706 23:29:09.428529 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:09.430200 containerd[1481]: time="2025-07-06T23:29:09.430167903Z" level=info msg="CreateContainer within sandbox \"dd6f829a553aa5affd2a9d4c1827e82975fcf28c0d3ad794975469f8fbbaa190\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:29:10.635935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154523430.mount: Deactivated successfully. Jul 6 23:29:10.640087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749796960.mount: Deactivated successfully. Jul 6 23:29:10.697760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176758496.mount: Deactivated successfully. 
Jul 6 23:29:11.272360 containerd[1481]: time="2025-07-06T23:29:11.272289435Z" level=info msg="CreateContainer within sandbox \"c483f869a9aca57621ae36b7b97191fd03e5ba07d96719b097e24a205980a35d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37b61e5e06e654b6a08236d741dcd22ecfd2acf4040f89cee34bb9d2d2954b4b\"" Jul 6 23:29:11.273110 containerd[1481]: time="2025-07-06T23:29:11.273072054Z" level=info msg="StartContainer for \"37b61e5e06e654b6a08236d741dcd22ecfd2acf4040f89cee34bb9d2d2954b4b\"" Jul 6 23:29:11.306571 systemd[1]: Started cri-containerd-37b61e5e06e654b6a08236d741dcd22ecfd2acf4040f89cee34bb9d2d2954b4b.scope - libcontainer container 37b61e5e06e654b6a08236d741dcd22ecfd2acf4040f89cee34bb9d2d2954b4b. Jul 6 23:29:11.431979 containerd[1481]: time="2025-07-06T23:29:11.431919444Z" level=info msg="CreateContainer within sandbox \"dd6f829a553aa5affd2a9d4c1827e82975fcf28c0d3ad794975469f8fbbaa190\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9db35c760c3629f5f1c36aadf2e46a26ec418a0cd44c45db77a26d617c32cb97\"" Jul 6 23:29:11.432670 containerd[1481]: time="2025-07-06T23:29:11.432628332Z" level=info msg="StartContainer for \"9db35c760c3629f5f1c36aadf2e46a26ec418a0cd44c45db77a26d617c32cb97\"" Jul 6 23:29:11.464554 systemd[1]: Started cri-containerd-9db35c760c3629f5f1c36aadf2e46a26ec418a0cd44c45db77a26d617c32cb97.scope - libcontainer container 9db35c760c3629f5f1c36aadf2e46a26ec418a0cd44c45db77a26d617c32cb97. Jul 6 23:29:11.747208 containerd[1481]: time="2025-07-06T23:29:11.747099113Z" level=info msg="StartContainer for \"37b61e5e06e654b6a08236d741dcd22ecfd2acf4040f89cee34bb9d2d2954b4b\" returns successfully" Jul 6 23:29:11.747208 containerd[1481]: time="2025-07-06T23:29:11.747137297Z" level=info msg="StartContainer for \"9db35c760c3629f5f1c36aadf2e46a26ec418a0cd44c45db77a26d617c32cb97\" returns successfully" Jul 6 23:29:11.751989 kubelet[2660]: E0706 23:29:11.751932 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:11.917897 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:52812.service - OpenSSH per-connection server daemon (10.0.0.1:52812). Jul 6 23:29:12.030932 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 52812 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:12.032854 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:12.039097 systemd-logind[1464]: New session 10 of user core. Jul 6 23:29:12.047638 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:29:12.474338 kubelet[2660]: I0706 23:29:12.474262 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xrv7n" podStartSLOduration=37.474243794 podStartE2EDuration="37.474243794s" podCreationTimestamp="2025-07-06 23:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:12.473968256 +0000 UTC m=+41.359239669" watchObservedRunningTime="2025-07-06 23:29:12.474243794 +0000 UTC m=+41.359515217" Jul 6 23:29:12.502029 sshd[4021]: Connection closed by 10.0.0.1 port 52812 Jul 6 23:29:12.502482 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:12.508860 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:52812.service: Deactivated successfully. 
Jul 6 23:29:12.512829 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:29:12.514301 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:29:12.515892 systemd-logind[1464]: Removed session 10. Jul 6 23:29:12.754776 kubelet[2660]: E0706 23:29:12.754033 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:12.754776 kubelet[2660]: E0706 23:29:12.754118 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:12.772711 kubelet[2660]: I0706 23:29:12.772612 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hgkcq" podStartSLOduration=37.772588976 podStartE2EDuration="37.772588976s" podCreationTimestamp="2025-07-06 23:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:29:12.771999157 +0000 UTC m=+41.657270570" watchObservedRunningTime="2025-07-06 23:29:12.772588976 +0000 UTC m=+41.657860389" Jul 6 23:29:13.755570 kubelet[2660]: E0706 23:29:13.755526 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:13.756334 kubelet[2660]: E0706 23:29:13.755609 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:14.758004 kubelet[2660]: E0706 23:29:14.757965 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:15.759777 kubelet[2660]: E0706 23:29:15.759735 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:17.519889 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:52828.service - OpenSSH per-connection server daemon (10.0.0.1:52828). Jul 6 23:29:17.797453 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 52828 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:17.799380 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:17.804798 systemd-logind[1464]: New session 11 of user core. Jul 6 23:29:17.815554 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:29:18.236821 sshd[4070]: Connection closed by 10.0.0.1 port 52828 Jul 6 23:29:18.237235 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:18.241876 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:52828.service: Deactivated successfully. Jul 6 23:29:18.244247 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:29:18.244999 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:29:18.246028 systemd-logind[1464]: Removed session 11. Jul 6 23:29:23.249699 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:45906.service - OpenSSH per-connection server daemon (10.0.0.1:45906). 
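
Unlike the cilium pods, the two coredns startup entries above report firstStartedPulling and lastFinishedPulling as Go's zero time (0001-01-01 00:00:00 +0000 UTC), i.e. no image pull was recorded, and accordingly podStartSLOduration equals podStartE2EDuration exactly: 37.474243794s for coredns-668d6bf9bc-xrv7n and 37.772588976s for coredns-668d6bf9bc-hgkcq, each being the watch-observed running time minus the shared 23:28:35 creation timestamp, consistent with the decomposition sketched earlier.
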
Jul 6 23:29:23.293041 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 45906 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:23.295070 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:23.300493 systemd-logind[1464]: New session 12 of user core. Jul 6 23:29:23.310726 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:29:23.530774 sshd[4087]: Connection closed by 10.0.0.1 port 45906 Jul 6 23:29:23.531098 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:23.535644 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:45906.service: Deactivated successfully. Jul 6 23:29:23.538178 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:29:23.538966 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:29:23.539939 systemd-logind[1464]: Removed session 12. Jul 6 23:29:28.591374 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:43030.service - OpenSSH per-connection server daemon (10.0.0.1:43030). Jul 6 23:29:28.758570 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 43030 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:28.767470 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:28.800671 systemd-logind[1464]: New session 13 of user core. Jul 6 23:29:28.818598 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:29:29.298558 sshd[4103]: Connection closed by 10.0.0.1 port 43030 Jul 6 23:29:29.296867 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:29.323702 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:43030.service: Deactivated successfully. Jul 6 23:29:29.343288 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:29:29.349326 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:29:29.358261 systemd-logind[1464]: Removed session 13. Jul 6 23:29:34.314903 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:43044.service - OpenSSH per-connection server daemon (10.0.0.1:43044). Jul 6 23:29:34.382295 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 43044 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:34.384031 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:34.388687 systemd-logind[1464]: New session 14 of user core. Jul 6 23:29:34.395558 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:29:34.740225 sshd[4121]: Connection closed by 10.0.0.1 port 43044 Jul 6 23:29:34.740729 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:34.745536 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:43044.service: Deactivated successfully. Jul 6 23:29:34.748581 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:29:34.749494 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:29:34.750493 systemd-logind[1464]: Removed session 14. Jul 6 23:29:39.754103 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:46446.service - OpenSSH per-connection server daemon (10.0.0.1:46446). 
Jul 6 23:29:39.801417 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 46446 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:39.801624 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:39.806240 systemd-logind[1464]: New session 15 of user core. Jul 6 23:29:39.815540 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:29:40.114906 sshd[4140]: Connection closed by 10.0.0.1 port 46446 Jul 6 23:29:40.115254 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:40.120389 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:46446.service: Deactivated successfully. Jul 6 23:29:40.122982 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:29:40.123795 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:29:40.125012 systemd-logind[1464]: Removed session 15. Jul 6 23:29:41.216807 kubelet[2660]: E0706 23:29:41.216680 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:45.128299 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:46448.service - OpenSSH per-connection server daemon (10.0.0.1:46448). Jul 6 23:29:45.171787 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 46448 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:45.173611 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:45.177920 systemd-logind[1464]: New session 16 of user core. Jul 6 23:29:45.186616 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:29:45.306298 sshd[4156]: Connection closed by 10.0.0.1 port 46448 Jul 6 23:29:45.306696 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:45.311467 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:46448.service: Deactivated successfully. Jul 6 23:29:45.313968 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:29:45.314763 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:29:45.315792 systemd-logind[1464]: Removed session 16. Jul 6 23:29:50.328072 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:35140.service - OpenSSH per-connection server daemon (10.0.0.1:35140). Jul 6 23:29:50.372553 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 35140 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:50.374587 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:50.383515 systemd-logind[1464]: New session 17 of user core. Jul 6 23:29:50.388709 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:29:50.542650 sshd[4172]: Connection closed by 10.0.0.1 port 35140 Jul 6 23:29:50.543064 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:50.558187 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:35140.service: Deactivated successfully. Jul 6 23:29:50.560713 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:29:50.562930 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:29:50.568718 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:35154.service - OpenSSH per-connection server daemon (10.0.0.1:35154). Jul 6 23:29:50.569709 systemd-logind[1464]: Removed session 17. 
Jul 6 23:29:50.607845 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 35154 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:50.609632 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:50.615138 systemd-logind[1464]: New session 18 of user core. Jul 6 23:29:50.622616 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:29:50.833694 sshd[4189]: Connection closed by 10.0.0.1 port 35154 Jul 6 23:29:50.834212 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:50.848125 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:35154.service: Deactivated successfully. Jul 6 23:29:50.850669 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:29:50.852768 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:29:50.861188 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:35162.service - OpenSSH per-connection server daemon (10.0.0.1:35162). Jul 6 23:29:50.862797 systemd-logind[1464]: Removed session 18. Jul 6 23:29:50.898780 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 35162 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:50.900506 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:50.906174 systemd-logind[1464]: New session 19 of user core. Jul 6 23:29:50.913625 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:29:51.060419 sshd[4202]: Connection closed by 10.0.0.1 port 35162 Jul 6 23:29:51.061022 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:51.066995 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:35162.service: Deactivated successfully. Jul 6 23:29:51.069768 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:29:51.070874 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:29:51.071899 systemd-logind[1464]: Removed session 19. Jul 6 23:29:52.215848 kubelet[2660]: E0706 23:29:52.215794 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:56.078184 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:35174.service - OpenSSH per-connection server daemon (10.0.0.1:35174). Jul 6 23:29:56.136807 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 35174 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:29:56.138510 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:29:56.143222 systemd-logind[1464]: New session 20 of user core. Jul 6 23:29:56.150554 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:29:56.216674 kubelet[2660]: E0706 23:29:56.216617 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:29:56.412747 sshd[4218]: Connection closed by 10.0.0.1 port 35174 Jul 6 23:29:56.413259 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Jul 6 23:29:56.418144 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:35174.service: Deactivated successfully. Jul 6 23:29:56.420826 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:29:56.421684 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit. 
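
From session 10 onward the journal settles into a steady rhythm of short SSH sessions from 10.0.0.1: a publickey accept, a pam_unix session open, and a close seconds later. Lines in this shape are easy to mine for session length; below is a throwaway Go parser, illustrative only, which assumes the single-space timestamp prefix used in this journal and at most one open session at a time:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the pam_unix session open/close lines emitted above,
// capturing the journal timestamp and the event kind.
var re = regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+\.\d+) .*pam_unix\(sshd:session\): session (opened|closed) for user`)

func main() {
	var opened time.Time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		t, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		if m[2] == "opened" {
			opened = t
		} else if !opened.IsZero() {
			fmt.Printf("session closed after %s\n", t.Sub(opened))
			opened = time.Time{}
		}
	}
}

Fed the entries above, it would report, for example, session 18 lasting roughly 0.22s (the 23:29:50.834212 pam close minus the 23:29:50.609632 open).
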
Jul 6 23:29:56.422756 systemd-logind[1464]: Removed session 20. Jul 6 23:30:01.430745 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:46658.service - OpenSSH per-connection server daemon (10.0.0.1:46658). Jul 6 23:30:01.499476 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 46658 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:01.501534 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:01.507503 systemd-logind[1464]: New session 21 of user core. Jul 6 23:30:01.524801 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:30:01.663737 sshd[4233]: Connection closed by 10.0.0.1 port 46658 Jul 6 23:30:01.664294 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:01.670173 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:46658.service: Deactivated successfully. Jul 6 23:30:01.672863 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:30:01.673829 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:30:01.675155 systemd-logind[1464]: Removed session 21. Jul 6 23:30:03.216640 kubelet[2660]: E0706 23:30:03.216572 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:06.685279 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:46674.service - OpenSSH per-connection server daemon (10.0.0.1:46674). Jul 6 23:30:06.727099 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 46674 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:06.728744 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:06.733012 systemd-logind[1464]: New session 22 of user core. Jul 6 23:30:06.749759 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:30:06.869132 sshd[4249]: Connection closed by 10.0.0.1 port 46674 Jul 6 23:30:06.869654 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:06.881428 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:46674.service: Deactivated successfully. Jul 6 23:30:06.883491 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:30:06.885017 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:30:06.889729 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686). Jul 6 23:30:06.890611 systemd-logind[1464]: Removed session 22. Jul 6 23:30:06.929608 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:06.931429 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:06.936282 systemd-logind[1464]: New session 23 of user core. Jul 6 23:30:06.944625 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:30:08.262204 sshd[4266]: Connection closed by 10.0.0.1 port 46686 Jul 6 23:30:08.262918 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:08.276183 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:46686.service: Deactivated successfully. Jul 6 23:30:08.279363 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:30:08.281452 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit. 
Jul 6 23:30:08.291760 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:58432.service - OpenSSH per-connection server daemon (10.0.0.1:58432). Jul 6 23:30:08.293038 systemd-logind[1464]: Removed session 23. Jul 6 23:30:08.334334 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 58432 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:08.336037 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:08.341444 systemd-logind[1464]: New session 24 of user core. Jul 6 23:30:08.350575 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:30:09.151352 sshd[4280]: Connection closed by 10.0.0.1 port 58432 Jul 6 23:30:09.153546 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:09.167613 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:58432.service: Deactivated successfully. Jul 6 23:30:09.171415 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:30:09.172860 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:30:09.182955 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:58448.service - OpenSSH per-connection server daemon (10.0.0.1:58448). Jul 6 23:30:09.186209 systemd-logind[1464]: Removed session 24. Jul 6 23:30:09.228648 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 58448 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:09.230903 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:09.236634 systemd-logind[1464]: New session 25 of user core. Jul 6 23:30:09.243581 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:30:09.738184 sshd[4304]: Connection closed by 10.0.0.1 port 58448 Jul 6 23:30:09.738723 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:09.747646 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:58448.service: Deactivated successfully. Jul 6 23:30:09.749803 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:30:09.751585 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:30:09.761946 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:58460.service - OpenSSH per-connection server daemon (10.0.0.1:58460). Jul 6 23:30:09.763644 systemd-logind[1464]: Removed session 25. Jul 6 23:30:09.805391 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 58460 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:09.807127 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:09.812087 systemd-logind[1464]: New session 26 of user core. Jul 6 23:30:09.823542 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:30:09.950154 sshd[4318]: Connection closed by 10.0.0.1 port 58460 Jul 6 23:30:09.950587 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:09.954573 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:58460.service: Deactivated successfully. Jul 6 23:30:09.956842 systemd[1]: session-26.scope: Deactivated successfully. Jul 6 23:30:09.957796 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:30:09.958780 systemd-logind[1464]: Removed session 26. Jul 6 23:30:14.966895 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:58468.service - OpenSSH per-connection server daemon (10.0.0.1:58468). 
Jul 6 23:30:15.009566 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 58468 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:15.011390 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:15.016593 systemd-logind[1464]: New session 27 of user core. Jul 6 23:30:15.027809 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:30:15.154237 sshd[4333]: Connection closed by 10.0.0.1 port 58468 Jul 6 23:30:15.154721 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:15.160359 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:58468.service: Deactivated successfully. Jul 6 23:30:15.163192 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:30:15.163975 systemd-logind[1464]: Session 27 logged out. Waiting for processes to exit. Jul 6 23:30:15.165099 systemd-logind[1464]: Removed session 27. Jul 6 23:30:20.169714 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052). Jul 6 23:30:20.214727 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:20.216825 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:20.222763 systemd-logind[1464]: New session 28 of user core. Jul 6 23:30:20.232743 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:30:20.361477 sshd[4352]: Connection closed by 10.0.0.1 port 35052 Jul 6 23:30:20.361980 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:20.367697 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:35052.service: Deactivated successfully. Jul 6 23:30:20.370841 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:30:20.371900 systemd-logind[1464]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:30:20.372901 systemd-logind[1464]: Removed session 28. Jul 6 23:30:25.376115 systemd[1]: Started sshd@28-10.0.0.54:22-10.0.0.1:35054.service - OpenSSH per-connection server daemon (10.0.0.1:35054). Jul 6 23:30:25.426246 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 35054 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:25.428044 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:25.433770 systemd-logind[1464]: New session 29 of user core. Jul 6 23:30:25.445560 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 6 23:30:25.562580 sshd[4368]: Connection closed by 10.0.0.1 port 35054 Jul 6 23:30:25.563017 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:25.567656 systemd[1]: sshd@28-10.0.0.54:22-10.0.0.1:35054.service: Deactivated successfully. Jul 6 23:30:25.570303 systemd[1]: session-29.scope: Deactivated successfully. Jul 6 23:30:25.571226 systemd-logind[1464]: Session 29 logged out. Waiting for processes to exit. Jul 6 23:30:25.572294 systemd-logind[1464]: Removed session 29. 
Jul 6 23:30:27.216777 kubelet[2660]: E0706 23:30:27.216308 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:27.216777 kubelet[2660]: E0706 23:30:27.216471 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:27.216777 kubelet[2660]: E0706 23:30:27.216531 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:30.595763 systemd[1]: Started sshd@29-10.0.0.54:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). Jul 6 23:30:30.636833 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:30.638680 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:30.643843 systemd-logind[1464]: New session 30 of user core. Jul 6 23:30:30.653556 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 6 23:30:30.774624 sshd[4383]: Connection closed by 10.0.0.1 port 35750 Jul 6 23:30:30.775062 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:30.793667 systemd[1]: sshd@29-10.0.0.54:22-10.0.0.1:35750.service: Deactivated successfully. Jul 6 23:30:30.795988 systemd[1]: session-30.scope: Deactivated successfully. Jul 6 23:30:30.798211 systemd-logind[1464]: Session 30 logged out. Waiting for processes to exit. Jul 6 23:30:30.806686 systemd[1]: Started sshd@30-10.0.0.54:22-10.0.0.1:35760.service - OpenSSH per-connection server daemon (10.0.0.1:35760). Jul 6 23:30:30.807745 systemd-logind[1464]: Removed session 30. Jul 6 23:30:30.846554 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 35760 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:30.848041 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:30.852854 systemd-logind[1464]: New session 31 of user core. Jul 6 23:30:30.862544 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 6 23:30:32.555531 containerd[1481]: time="2025-07-06T23:30:32.555465647Z" level=info msg="StopContainer for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" with timeout 30 (s)" Jul 6 23:30:32.556193 containerd[1481]: time="2025-07-06T23:30:32.555986939Z" level=info msg="Stop container \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" with signal terminated" Jul 6 23:30:32.581642 systemd[1]: cri-containerd-cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0.scope: Deactivated successfully. 
Jul 6 23:30:32.592842 containerd[1481]: time="2025-07-06T23:30:32.592712877Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:30:32.596077 containerd[1481]: time="2025-07-06T23:30:32.595950749Z" level=info msg="StopContainer for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" with timeout 2 (s)" Jul 6 23:30:32.597087 containerd[1481]: time="2025-07-06T23:30:32.596318862Z" level=info msg="Stop container \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" with signal terminated" Jul 6 23:30:32.606853 systemd-networkd[1404]: lxc_health: Link DOWN Jul 6 23:30:32.606869 systemd-networkd[1404]: lxc_health: Lost carrier Jul 6 23:30:32.613684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0-rootfs.mount: Deactivated successfully. Jul 6 23:30:32.629048 systemd[1]: cri-containerd-43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a.scope: Deactivated successfully. Jul 6 23:30:32.629741 systemd[1]: cri-containerd-43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a.scope: Consumed 7.901s CPU time, 124.9M memory peak, 132K read from disk, 13.3M written to disk. Jul 6 23:30:32.630625 containerd[1481]: time="2025-07-06T23:30:32.630547634Z" level=info msg="shim disconnected" id=cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0 namespace=k8s.io Jul 6 23:30:32.630788 containerd[1481]: time="2025-07-06T23:30:32.630624048Z" level=warning msg="cleaning up after shim disconnected" id=cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0 namespace=k8s.io Jul 6 23:30:32.630788 containerd[1481]: time="2025-07-06T23:30:32.630642583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:32.652961 containerd[1481]: time="2025-07-06T23:30:32.652909944Z" level=info msg="StopContainer for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" returns successfully" Jul 6 23:30:32.657032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a-rootfs.mount: Deactivated successfully. Jul 6 23:30:32.657905 containerd[1481]: time="2025-07-06T23:30:32.657570767Z" level=info msg="StopPodSandbox for \"750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745\"" Jul 6 23:30:32.662206 containerd[1481]: time="2025-07-06T23:30:32.657644246Z" level=info msg="Container to stop \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:32.662605 containerd[1481]: time="2025-07-06T23:30:32.662527889Z" level=info msg="shim disconnected" id=43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a namespace=k8s.io Jul 6 23:30:32.662605 containerd[1481]: time="2025-07-06T23:30:32.662605525Z" level=warning msg="cleaning up after shim disconnected" id=43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a namespace=k8s.io Jul 6 23:30:32.662681 containerd[1481]: time="2025-07-06T23:30:32.662615003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:32.664305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745-shm.mount: Deactivated successfully. 
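
The containerd error at 23:30:32.592842 above is a side effect of the shutdown rather than an independent fault: removing /etc/cni/net.d/05-cilium.conf (Cilium's CNI configuration file) leaves the directory without any network config, and containerd's fs-change watcher reports exactly that when it tries to reload.
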
Jul 6 23:30:32.673948 systemd[1]: cri-containerd-750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745.scope: Deactivated successfully. Jul 6 23:30:32.690210 containerd[1481]: time="2025-07-06T23:30:32.690163095Z" level=info msg="StopContainer for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" returns successfully" Jul 6 23:30:32.690681 containerd[1481]: time="2025-07-06T23:30:32.690656806Z" level=info msg="StopPodSandbox for \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\"" Jul 6 23:30:32.690772 containerd[1481]: time="2025-07-06T23:30:32.690698133Z" level=info msg="Container to stop \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:32.690809 containerd[1481]: time="2025-07-06T23:30:32.690770771Z" level=info msg="Container to stop \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:32.690809 containerd[1481]: time="2025-07-06T23:30:32.690781541Z" level=info msg="Container to stop \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:32.690809 containerd[1481]: time="2025-07-06T23:30:32.690792061Z" level=info msg="Container to stop \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:32.690809 containerd[1481]: time="2025-07-06T23:30:32.690801248Z" level=info msg="Container to stop \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:30:32.693529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf-shm.mount: Deactivated successfully. Jul 6 23:30:32.698248 systemd[1]: cri-containerd-ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf.scope: Deactivated successfully. 
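
Note that the "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" messages above are logged at info level, not error: when StopPodSandbox tears down the cilium pod it enumerates every container the sandbox ever held, including long-finished init containers such as clean-cilium-state (948adc40...) and the already-stopped cilium-agent (43e1d6f5...), and these lines simply record that those containers needed no further stopping.
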
Jul 6 23:30:32.703178 containerd[1481]: time="2025-07-06T23:30:32.703116104Z" level=info msg="shim disconnected" id=750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745 namespace=k8s.io Jul 6 23:30:32.703595 containerd[1481]: time="2025-07-06T23:30:32.703561474Z" level=warning msg="cleaning up after shim disconnected" id=750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745 namespace=k8s.io Jul 6 23:30:32.703595 containerd[1481]: time="2025-07-06T23:30:32.703578105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:32.721170 containerd[1481]: time="2025-07-06T23:30:32.721111765Z" level=info msg="TearDown network for sandbox \"750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745\" successfully" Jul 6 23:30:32.721170 containerd[1481]: time="2025-07-06T23:30:32.721156430Z" level=info msg="StopPodSandbox for \"750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745\" returns successfully" Jul 6 23:30:32.722460 containerd[1481]: time="2025-07-06T23:30:32.722354257Z" level=info msg="shim disconnected" id=ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf namespace=k8s.io Jul 6 23:30:32.722865 containerd[1481]: time="2025-07-06T23:30:32.722583068Z" level=warning msg="cleaning up after shim disconnected" id=ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf namespace=k8s.io Jul 6 23:30:32.722865 containerd[1481]: time="2025-07-06T23:30:32.722649724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:32.741294 containerd[1481]: time="2025-07-06T23:30:32.741232813Z" level=info msg="TearDown network for sandbox \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" successfully" Jul 6 23:30:32.741294 containerd[1481]: time="2025-07-06T23:30:32.741282546Z" level=info msg="StopPodSandbox for \"ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf\" returns successfully" Jul 6 23:30:32.841577 kubelet[2660]: I0706 23:30:32.841419 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1295b88d-b598-4d9a-acc3-5414fd8a7322-cilium-config-path\") pod \"1295b88d-b598-4d9a-acc3-5414fd8a7322\" (UID: \"1295b88d-b598-4d9a-acc3-5414fd8a7322\") " Jul 6 23:30:32.841577 kubelet[2660]: I0706 23:30:32.841480 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v42v\" (UniqueName: \"kubernetes.io/projected/1295b88d-b598-4d9a-acc3-5414fd8a7322-kube-api-access-2v42v\") pod \"1295b88d-b598-4d9a-acc3-5414fd8a7322\" (UID: \"1295b88d-b598-4d9a-acc3-5414fd8a7322\") " Jul 6 23:30:32.845874 kubelet[2660]: I0706 23:30:32.845832 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1295b88d-b598-4d9a-acc3-5414fd8a7322-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1295b88d-b598-4d9a-acc3-5414fd8a7322" (UID: "1295b88d-b598-4d9a-acc3-5414fd8a7322"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:30:32.846376 kubelet[2660]: I0706 23:30:32.846335 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1295b88d-b598-4d9a-acc3-5414fd8a7322-kube-api-access-2v42v" (OuterVolumeSpecName: "kube-api-access-2v42v") pod "1295b88d-b598-4d9a-acc3-5414fd8a7322" (UID: "1295b88d-b598-4d9a-acc3-5414fd8a7322"). InnerVolumeSpecName "kube-api-access-2v42v". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:30:32.933336 kubelet[2660]: I0706 23:30:32.933297 2660 scope.go:117] "RemoveContainer" containerID="cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0" Jul 6 23:30:32.940773 systemd[1]: Removed slice kubepods-besteffort-pod1295b88d_b598_4d9a_acc3_5414fd8a7322.slice - libcontainer container kubepods-besteffort-pod1295b88d_b598_4d9a_acc3_5414fd8a7322.slice. Jul 6 23:30:32.941710 kubelet[2660]: I0706 23:30:32.941633 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-net\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.941771 kubelet[2660]: I0706 23:30:32.941735 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.942170 kubelet[2660]: I0706 23:30:32.941796 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-config-path\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942170 kubelet[2660]: I0706 23:30:32.941857 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hostproc\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942170 kubelet[2660]: I0706 23:30:32.941886 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8qfr\" (UniqueName: \"kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-kube-api-access-x8qfr\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942170 kubelet[2660]: I0706 23:30:32.941926 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hostproc" (OuterVolumeSpecName: "hostproc") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.942170 kubelet[2660]: I0706 23:30:32.941961 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-run\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942170 kubelet[2660]: I0706 23:30:32.941983 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-kernel\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942586 kubelet[2660]: I0706 23:30:32.942487 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hubble-tls\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942586 kubelet[2660]: I0706 23:30:32.942518 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-clustermesh-secrets\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942586 kubelet[2660]: I0706 23:30:32.942548 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-etc-cni-netd\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942586 kubelet[2660]: I0706 23:30:32.942582 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-cgroup\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942723 kubelet[2660]: I0706 23:30:32.942603 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-bpf-maps\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942723 kubelet[2660]: I0706 23:30:32.942624 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-lib-modules\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942723 kubelet[2660]: I0706 23:30:32.942646 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-xtables-lock\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942723 kubelet[2660]: I0706 23:30:32.942672 2660 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cni-path\") pod \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\" (UID: \"01e4dbbb-86de-4e8f-ad9f-178fa607eaf0\") " Jul 6 23:30:32.942853 
kubelet[2660]: I0706 23:30:32.942731 2660 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:32.942853 kubelet[2660]: I0706 23:30:32.942748 2660 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:32.942853 kubelet[2660]: I0706 23:30:32.942760 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1295b88d-b598-4d9a-acc3-5414fd8a7322-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:32.942853 kubelet[2660]: I0706 23:30:32.942773 2660 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2v42v\" (UniqueName: \"kubernetes.io/projected/1295b88d-b598-4d9a-acc3-5414fd8a7322-kube-api-access-2v42v\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:32.942853 kubelet[2660]: I0706 23:30:32.942808 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cni-path" (OuterVolumeSpecName: "cni-path") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.942853 kubelet[2660]: I0706 23:30:32.942833 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.943065 kubelet[2660]: I0706 23:30:32.942856 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.943065 kubelet[2660]: I0706 23:30:32.942905 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.944322 kubelet[2660]: I0706 23:30:32.943803 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.944322 kubelet[2660]: I0706 23:30:32.943835 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.944322 kubelet[2660]: I0706 23:30:32.943857 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.944322 kubelet[2660]: I0706 23:30:32.944178 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:30:32.945169 containerd[1481]: time="2025-07-06T23:30:32.945113251Z" level=info msg="RemoveContainer for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\"" Jul 6 23:30:32.948886 kubelet[2660]: I0706 23:30:32.948820 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:30:32.953502 kubelet[2660]: I0706 23:30:32.953443 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:30:32.956199 containerd[1481]: time="2025-07-06T23:30:32.956152263Z" level=info msg="RemoveContainer for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" returns successfully" Jul 6 23:30:32.956742 kubelet[2660]: I0706 23:30:32.956652 2660 scope.go:117] "RemoveContainer" containerID="cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0" Jul 6 23:30:32.957429 containerd[1481]: time="2025-07-06T23:30:32.956913147Z" level=error msg="ContainerStatus for \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\": not found" Jul 6 23:30:32.965360 kubelet[2660]: I0706 23:30:32.965293 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-kube-api-access-x8qfr" (OuterVolumeSpecName: "kube-api-access-x8qfr") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). 
InnerVolumeSpecName "kube-api-access-x8qfr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:30:32.965782 kubelet[2660]: I0706 23:30:32.965740 2660 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" (UID: "01e4dbbb-86de-4e8f-ad9f-178fa607eaf0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:30:32.965887 kubelet[2660]: E0706 23:30:32.965781 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\": not found" containerID="cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0" Jul 6 23:30:32.965936 kubelet[2660]: I0706 23:30:32.965833 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0"} err="failed to get container status \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb2a982547dc91e3c56b0a800539bfd1f4664374dce21b06373dec4faf702ea0\": not found" Jul 6 23:30:32.965936 kubelet[2660]: I0706 23:30:32.965917 2660 scope.go:117] "RemoveContainer" containerID="43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a" Jul 6 23:30:32.972820 containerd[1481]: time="2025-07-06T23:30:32.971976954Z" level=info msg="RemoveContainer for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\"" Jul 6 23:30:32.977530 containerd[1481]: time="2025-07-06T23:30:32.977477268Z" level=info msg="RemoveContainer for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" returns successfully" Jul 6 23:30:32.977691 kubelet[2660]: I0706 23:30:32.977661 2660 scope.go:117] "RemoveContainer" containerID="948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0" Jul 6 23:30:32.979065 containerd[1481]: time="2025-07-06T23:30:32.979029704Z" level=info msg="RemoveContainer for \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\"" Jul 6 23:30:32.981191 systemd[1]: Removed slice kubepods-burstable-pod01e4dbbb_86de_4e8f_ad9f_178fa607eaf0.slice - libcontainer container kubepods-burstable-pod01e4dbbb_86de_4e8f_ad9f_178fa607eaf0.slice. Jul 6 23:30:32.981563 systemd[1]: kubepods-burstable-pod01e4dbbb_86de_4e8f_ad9f_178fa607eaf0.slice: Consumed 8.022s CPU time, 125.2M memory peak, 148K read from disk, 13.3M written to disk. 
Jul 6 23:30:32.984164 containerd[1481]: time="2025-07-06T23:30:32.984119405Z" level=info msg="RemoveContainer for \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\" returns successfully" Jul 6 23:30:32.984789 kubelet[2660]: I0706 23:30:32.984370 2660 scope.go:117] "RemoveContainer" containerID="e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403" Jul 6 23:30:32.987053 containerd[1481]: time="2025-07-06T23:30:32.986966090Z" level=info msg="RemoveContainer for \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\"" Jul 6 23:30:32.991997 containerd[1481]: time="2025-07-06T23:30:32.991963286Z" level=info msg="RemoveContainer for \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\" returns successfully" Jul 6 23:30:32.992245 kubelet[2660]: I0706 23:30:32.992206 2660 scope.go:117] "RemoveContainer" containerID="92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494" Jul 6 23:30:32.993466 containerd[1481]: time="2025-07-06T23:30:32.993439638Z" level=info msg="RemoveContainer for \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\"" Jul 6 23:30:32.997820 containerd[1481]: time="2025-07-06T23:30:32.997757996Z" level=info msg="RemoveContainer for \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\" returns successfully" Jul 6 23:30:32.999098 kubelet[2660]: I0706 23:30:32.999066 2660 scope.go:117] "RemoveContainer" containerID="d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb" Jul 6 23:30:33.004658 containerd[1481]: time="2025-07-06T23:30:33.004604076Z" level=info msg="RemoveContainer for \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\"" Jul 6 23:30:33.008915 containerd[1481]: time="2025-07-06T23:30:33.008879673Z" level=info msg="RemoveContainer for \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\" returns successfully" Jul 6 23:30:33.009129 kubelet[2660]: I0706 23:30:33.009093 2660 scope.go:117] "RemoveContainer" containerID="43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a" Jul 6 23:30:33.009420 containerd[1481]: time="2025-07-06T23:30:33.009350761Z" level=error msg="ContainerStatus for \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\": not found" Jul 6 23:30:33.009602 kubelet[2660]: E0706 23:30:33.009569 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\": not found" containerID="43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a" Jul 6 23:30:33.009653 kubelet[2660]: I0706 23:30:33.009602 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a"} err="failed to get container status \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"43e1d6f588e876ba9f5b801d1019aec6fdfc9f4096a5dda708effe5ae06d4d6a\": not found" Jul 6 23:30:33.009653 kubelet[2660]: I0706 23:30:33.009625 2660 scope.go:117] "RemoveContainer" containerID="948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0" Jul 6 23:30:33.009848 containerd[1481]: time="2025-07-06T23:30:33.009809195Z" 
level=error msg="ContainerStatus for \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\": not found" Jul 6 23:30:33.009958 kubelet[2660]: E0706 23:30:33.009933 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\": not found" containerID="948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0" Jul 6 23:30:33.009999 kubelet[2660]: I0706 23:30:33.009957 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0"} err="failed to get container status \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"948adc40e0b62c640bf0de4e8e2f05680717c4c4a0ca36869b984b3a2d3924a0\": not found" Jul 6 23:30:33.009999 kubelet[2660]: I0706 23:30:33.009975 2660 scope.go:117] "RemoveContainer" containerID="e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403" Jul 6 23:30:33.010168 containerd[1481]: time="2025-07-06T23:30:33.010129649Z" level=error msg="ContainerStatus for \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\": not found" Jul 6 23:30:33.010331 kubelet[2660]: E0706 23:30:33.010300 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\": not found" containerID="e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403" Jul 6 23:30:33.010416 kubelet[2660]: I0706 23:30:33.010339 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403"} err="failed to get container status \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3530664687f407427634f19535428bf444b4c6680bda8d1fd41e82d24182403\": not found" Jul 6 23:30:33.010416 kubelet[2660]: I0706 23:30:33.010368 2660 scope.go:117] "RemoveContainer" containerID="92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494" Jul 6 23:30:33.010606 containerd[1481]: time="2025-07-06T23:30:33.010572513Z" level=error msg="ContainerStatus for \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\": not found" Jul 6 23:30:33.010721 kubelet[2660]: E0706 23:30:33.010681 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\": not found" containerID="92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494" Jul 6 23:30:33.010779 kubelet[2660]: I0706 23:30:33.010720 2660 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494"} err="failed to get container status \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\": rpc error: code = NotFound desc = an error occurred when try to find container \"92f3106705d378a199e1940660767151b381287a6de5da7917af1a8771da2494\": not found" Jul 6 23:30:33.010779 kubelet[2660]: I0706 23:30:33.010742 2660 scope.go:117] "RemoveContainer" containerID="d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb" Jul 6 23:30:33.010925 containerd[1481]: time="2025-07-06T23:30:33.010893428Z" level=error msg="ContainerStatus for \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\": not found" Jul 6 23:30:33.011022 kubelet[2660]: E0706 23:30:33.011000 2660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\": not found" containerID="d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb" Jul 6 23:30:33.011073 kubelet[2660]: I0706 23:30:33.011020 2660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb"} err="failed to get container status \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0ce6be8d215677752b6770bf74a73564e46a35d690490b72b7b6925941489bb\": not found" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043242 2660 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043264 2660 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x8qfr\" (UniqueName: \"kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-kube-api-access-x8qfr\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043276 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043286 2660 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043294 2660 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043302 2660 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043291 kubelet[2660]: I0706 23:30:33.043309 2660 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043601 kubelet[2660]: I0706 23:30:33.043319 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043601 kubelet[2660]: I0706 23:30:33.043327 2660 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043601 kubelet[2660]: I0706 23:30:33.043335 2660 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043601 kubelet[2660]: I0706 23:30:33.043343 2660 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.043601 kubelet[2660]: I0706 23:30:33.043350 2660 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:30:33.218827 kubelet[2660]: I0706 23:30:33.218761 2660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" path="/var/lib/kubelet/pods/01e4dbbb-86de-4e8f-ad9f-178fa607eaf0/volumes" Jul 6 23:30:33.219883 kubelet[2660]: I0706 23:30:33.219847 2660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1295b88d-b598-4d9a-acc3-5414fd8a7322" path="/var/lib/kubelet/pods/1295b88d-b598-4d9a-acc3-5414fd8a7322/volumes" Jul 6 23:30:33.568294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-750d11976117da5a7a1a8ff7002ffaaf6951c6992bae3e285f50786a1961d745-rootfs.mount: Deactivated successfully. Jul 6 23:30:33.568465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec7c115394db638cffce015a68a741360e0828ec2eaf3237491bdb0993080aaf-rootfs.mount: Deactivated successfully. Jul 6 23:30:33.568555 systemd[1]: var-lib-kubelet-pods-1295b88d\x2db598\x2d4d9a\x2dacc3\x2d5414fd8a7322-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2v42v.mount: Deactivated successfully. Jul 6 23:30:33.568653 systemd[1]: var-lib-kubelet-pods-01e4dbbb\x2d86de\x2d4e8f\x2dad9f\x2d178fa607eaf0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx8qfr.mount: Deactivated successfully. Jul 6 23:30:33.568750 systemd[1]: var-lib-kubelet-pods-01e4dbbb\x2d86de\x2d4e8f\x2dad9f\x2d178fa607eaf0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:30:33.568845 systemd[1]: var-lib-kubelet-pods-01e4dbbb\x2d86de\x2d4e8f\x2dad9f\x2d178fa607eaf0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:30:34.524211 sshd[4398]: Connection closed by 10.0.0.1 port 35760 Jul 6 23:30:34.524661 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:34.537644 systemd[1]: sshd@30-10.0.0.54:22-10.0.0.1:35760.service: Deactivated successfully. Jul 6 23:30:34.539957 systemd[1]: session-31.scope: Deactivated successfully. Jul 6 23:30:34.541762 systemd-logind[1464]: Session 31 logged out. 
Waiting for processes to exit. Jul 6 23:30:34.550745 systemd[1]: Started sshd@31-10.0.0.54:22-10.0.0.1:35776.service - OpenSSH per-connection server daemon (10.0.0.1:35776). Jul 6 23:30:34.552273 systemd-logind[1464]: Removed session 31. Jul 6 23:30:34.591290 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 35776 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:34.592984 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:34.597756 systemd-logind[1464]: New session 32 of user core. Jul 6 23:30:34.612564 systemd[1]: Started session-32.scope - Session 32 of User core. Jul 6 23:30:35.206588 sshd[4561]: Connection closed by 10.0.0.1 port 35776 Jul 6 23:30:35.211630 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:35.223773 kubelet[2660]: I0706 23:30:35.223721 2660 memory_manager.go:355] "RemoveStaleState removing state" podUID="01e4dbbb-86de-4e8f-ad9f-178fa607eaf0" containerName="cilium-agent" Jul 6 23:30:35.223773 kubelet[2660]: I0706 23:30:35.223755 2660 memory_manager.go:355] "RemoveStaleState removing state" podUID="1295b88d-b598-4d9a-acc3-5414fd8a7322" containerName="cilium-operator" Jul 6 23:30:35.227333 systemd[1]: sshd@31-10.0.0.54:22-10.0.0.1:35776.service: Deactivated successfully. Jul 6 23:30:35.231614 systemd[1]: session-32.scope: Deactivated successfully. Jul 6 23:30:35.234128 systemd-logind[1464]: Session 32 logged out. Waiting for processes to exit. Jul 6 23:30:35.248928 systemd[1]: Started sshd@32-10.0.0.54:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790). Jul 6 23:30:35.250108 systemd-logind[1464]: Removed session 32. Jul 6 23:30:35.268353 systemd[1]: Created slice kubepods-burstable-pod49d40fc1_2059_41cf_8637_84b1f3bfb01e.slice - libcontainer container kubepods-burstable-pod49d40fc1_2059_41cf_8637_84b1f3bfb01e.slice. Jul 6 23:30:35.299329 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:35.301924 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:35.321756 systemd-logind[1464]: New session 33 of user core. Jul 6 23:30:35.328682 systemd[1]: Started session-33.scope - Session 33 of User core. 
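The slice names above encode the pod UID directly: systemd treats "-" as a hierarchy separator in slice names, so the kubelet writes the UID with underscores (compare pod UID 49d40fc1-2059-41cf-8637-84b1f3bfb01e with kubepods-burstable-pod49d40fc1_2059_41cf_8637_84b1f3bfb01e.slice above). A small illustration of that mapping:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceForPod reproduces the naming visible in the log: the pod UID is
// embedded with dashes mapped to underscores. The "burstable" QoS class is
// taken from the log entries themselves.
func sliceForPod(uid string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceForPod("49d40fc1-2059-41cf-8637-84b1f3bfb01e"))
	// kubepods-burstable-pod49d40fc1_2059_41cf_8637_84b1f3bfb01e.slice
}
```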
Jul 6 23:30:35.358021 kubelet[2660]: I0706 23:30:35.357950 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-cni-path\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358021 kubelet[2660]: I0706 23:30:35.358018 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-host-proc-sys-kernel\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358210 kubelet[2660]: I0706 23:30:35.358052 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49d40fc1-2059-41cf-8637-84b1f3bfb01e-hubble-tls\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358210 kubelet[2660]: I0706 23:30:35.358080 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2x69\" (UniqueName: \"kubernetes.io/projected/49d40fc1-2059-41cf-8637-84b1f3bfb01e-kube-api-access-b2x69\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358210 kubelet[2660]: I0706 23:30:35.358103 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49d40fc1-2059-41cf-8637-84b1f3bfb01e-clustermesh-secrets\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358210 kubelet[2660]: I0706 23:30:35.358149 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-etc-cni-netd\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358210 kubelet[2660]: I0706 23:30:35.358197 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-hostproc\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358365 kubelet[2660]: I0706 23:30:35.358219 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-lib-modules\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358365 kubelet[2660]: I0706 23:30:35.358237 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49d40fc1-2059-41cf-8637-84b1f3bfb01e-cilium-config-path\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358365 kubelet[2660]: I0706 23:30:35.358257 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/49d40fc1-2059-41cf-8637-84b1f3bfb01e-cilium-ipsec-secrets\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358365 kubelet[2660]: I0706 23:30:35.358276 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-cilium-run\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358365 kubelet[2660]: I0706 23:30:35.358294 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-bpf-maps\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358365 kubelet[2660]: I0706 23:30:35.358311 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-xtables-lock\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358615 kubelet[2660]: I0706 23:30:35.358333 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-cilium-cgroup\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.358615 kubelet[2660]: I0706 23:30:35.358350 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49d40fc1-2059-41cf-8637-84b1f3bfb01e-host-proc-sys-net\") pod \"cilium-7vfts\" (UID: \"49d40fc1-2059-41cf-8637-84b1f3bfb01e\") " pod="kube-system/cilium-7vfts" Jul 6 23:30:35.381588 sshd[4575]: Connection closed by 10.0.0.1 port 35790 Jul 6 23:30:35.382039 sshd-session[4572]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:35.397881 systemd[1]: sshd@32-10.0.0.54:22-10.0.0.1:35790.service: Deactivated successfully. Jul 6 23:30:35.401139 systemd[1]: session-33.scope: Deactivated successfully. Jul 6 23:30:35.403768 systemd-logind[1464]: Session 33 logged out. Waiting for processes to exit. Jul 6 23:30:35.412996 systemd[1]: Started sshd@33-10.0.0.54:22-10.0.0.1:35806.service - OpenSSH per-connection server daemon (10.0.0.1:35806). Jul 6 23:30:35.413995 systemd-logind[1464]: Removed session 33. Jul 6 23:30:35.452587 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 35806 ssh2: RSA SHA256:Lexr/gUUGeuk2kY95erYKAia8vn31CKMT/FHm4ycpOo Jul 6 23:30:35.454677 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:30:35.461822 systemd-logind[1464]: New session 34 of user core. Jul 6 23:30:35.469724 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jul 6 23:30:35.580153 kubelet[2660]: E0706 23:30:35.580109 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:35.580779 containerd[1481]: time="2025-07-06T23:30:35.580742022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vfts,Uid:49d40fc1-2059-41cf-8637-84b1f3bfb01e,Namespace:kube-system,Attempt:0,}" Jul 6 23:30:35.665075 containerd[1481]: time="2025-07-06T23:30:35.664946080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:30:35.665884 containerd[1481]: time="2025-07-06T23:30:35.665836387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:30:35.665943 containerd[1481]: time="2025-07-06T23:30:35.665910888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:35.666133 containerd[1481]: time="2025-07-06T23:30:35.666097409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:30:35.692615 systemd[1]: Started cri-containerd-1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c.scope - libcontainer container 1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c. Jul 6 23:30:35.724548 containerd[1481]: time="2025-07-06T23:30:35.724038831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vfts,Uid:49d40fc1-2059-41cf-8637-84b1f3bfb01e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\"" Jul 6 23:30:35.725324 kubelet[2660]: E0706 23:30:35.725294 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:35.728554 containerd[1481]: time="2025-07-06T23:30:35.728500228Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:30:35.746246 containerd[1481]: time="2025-07-06T23:30:35.746181192Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117\"" Jul 6 23:30:35.746910 containerd[1481]: time="2025-07-06T23:30:35.746864761Z" level=info msg="StartContainer for \"bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117\"" Jul 6 23:30:35.783687 systemd[1]: Started cri-containerd-bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117.scope - libcontainer container bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117. Jul 6 23:30:35.814433 containerd[1481]: time="2025-07-06T23:30:35.814354183Z" level=info msg="StartContainer for \"bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117\" returns successfully" Jul 6 23:30:35.824929 systemd[1]: cri-containerd-bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117.scope: Deactivated successfully. 
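The recurring "Nameserver limits exceeded" error is informational: the node's resolv.conf lists more nameservers than the classic glibc limit of three that the kubelet enforces for pod resolver configs, so it applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and warns. A sketch of that truncation; the fourth entry below is hypothetical, since the log only shows the applied line:

```go
package main

import "fmt"

// maxNameservers mirrors the classic resolv.conf limit of three nameservers
// that the kubelet enforces when composing a pod's resolver config; extra
// entries are dropped with the "Nameserver limits exceeded" warning.
const maxNameservers = 3

func capNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	return ns[:maxNameservers]
}

func main() {
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // 4th entry hypothetical
	fmt.Println(capNameservers(node))                            // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```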
Jul 6 23:30:35.919841 containerd[1481]: time="2025-07-06T23:30:35.919762172Z" level=info msg="shim disconnected" id=bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117 namespace=k8s.io Jul 6 23:30:35.919841 containerd[1481]: time="2025-07-06T23:30:35.919833126Z" level=warning msg="cleaning up after shim disconnected" id=bbd65bea3bb7860315481c52e16cce5ef154c7f3e24b06ad1a14615242ae2117 namespace=k8s.io Jul 6 23:30:35.919841 containerd[1481]: time="2025-07-06T23:30:35.919845630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:35.975234 kubelet[2660]: E0706 23:30:35.975063 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:35.979743 containerd[1481]: time="2025-07-06T23:30:35.979693855Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:30:36.004841 containerd[1481]: time="2025-07-06T23:30:36.004750096Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b\"" Jul 6 23:30:36.005606 containerd[1481]: time="2025-07-06T23:30:36.005542449Z" level=info msg="StartContainer for \"43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b\"" Jul 6 23:30:36.039625 systemd[1]: Started cri-containerd-43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b.scope - libcontainer container 43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b. Jul 6 23:30:36.070868 containerd[1481]: time="2025-07-06T23:30:36.070821473Z" level=info msg="StartContainer for \"43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b\" returns successfully" Jul 6 23:30:36.079177 systemd[1]: cri-containerd-43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b.scope: Deactivated successfully. 
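From here the log repeats one pattern per Cilium init step (mount-cgroup above, then apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state): CreateContainer, StartContainer, the scope deactivating as the step exits, and the shim being cleaned up. The "Deactivated successfully" lines are expected exits, not failures, because init containers run to completion. Schematically, with the step runner left abstract:

```go
package main

import "fmt"

// runInitChain sketches the per-step pattern in the log: start a container,
// block until it exits, clean up its shim, then move on. A scope
// deactivating right after a successful StartContainer is the expected
// outcome for an init container.
func runInitChain(steps []string, run func(step string) error) error {
	for _, s := range steps {
		fmt.Printf("starting %s\n", s)
		if err := run(s); err != nil { // run blocks until the step exits
			return fmt.Errorf("%s failed: %w", s, err)
		}
		fmt.Printf("%s exited; cleaning up shim\n", s)
	}
	return nil
}

func main() {
	// Step names as they appear in this log; cilium-agent follows as the
	// long-running container once the chain succeeds.
	steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	_ = runInitChain(steps, func(string) error { return nil })
}
```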
Jul 6 23:30:36.113468 containerd[1481]: time="2025-07-06T23:30:36.113367524Z" level=info msg="shim disconnected" id=43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b namespace=k8s.io Jul 6 23:30:36.113468 containerd[1481]: time="2025-07-06T23:30:36.113459537Z" level=warning msg="cleaning up after shim disconnected" id=43d9c1bf726a28d4e16a01e2842c239e6c7e3bad7eea1911551946ed9bb7f89b namespace=k8s.io Jul 6 23:30:36.113468 containerd[1481]: time="2025-07-06T23:30:36.113469546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:36.976594 kubelet[2660]: E0706 23:30:36.976533 2660 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:30:36.979181 kubelet[2660]: E0706 23:30:36.979150 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:36.981299 containerd[1481]: time="2025-07-06T23:30:36.981170399Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:30:37.012495 containerd[1481]: time="2025-07-06T23:30:37.012446567Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c\"" Jul 6 23:30:37.013190 containerd[1481]: time="2025-07-06T23:30:37.013143850Z" level=info msg="StartContainer for \"97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c\"" Jul 6 23:30:37.050614 systemd[1]: Started cri-containerd-97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c.scope - libcontainer container 97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c. Jul 6 23:30:37.091874 containerd[1481]: time="2025-07-06T23:30:37.091815121Z" level=info msg="StartContainer for \"97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c\" returns successfully" Jul 6 23:30:37.094436 systemd[1]: cri-containerd-97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c.scope: Deactivated successfully. Jul 6 23:30:37.126293 containerd[1481]: time="2025-07-06T23:30:37.125955563Z" level=info msg="shim disconnected" id=97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c namespace=k8s.io Jul 6 23:30:37.126293 containerd[1481]: time="2025-07-06T23:30:37.126031717Z" level=warning msg="cleaning up after shim disconnected" id=97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c namespace=k8s.io Jul 6 23:30:37.126293 containerd[1481]: time="2025-07-06T23:30:37.126043880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:37.472939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97d4cb1b40999ac5576219d62f130a3052c74f9709de13dfaa71213b297e4a5c-rootfs.mount: Deactivated successfully. 
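For context on the mount-bpf-fs step that runs next: Cilium's step of that name conventionally ensures a BPF filesystem is mounted at /sys/fs/bpf so pinned maps survive agent restarts, roughly `mount -t bpf bpffs /sys/fs/bpf`. A sketch of the equivalent syscall; it needs CAP_SYS_ADMIN, and the details are the conventional ones rather than anything taken from this log:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// Mount a BPF filesystem at /sys/fs/bpf, the usual home for pinned maps;
// equivalent to `mount -t bpf bpffs /sys/fs/bpf`.
func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Println("mount bpffs:", err)
	}
}
```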
Jul 6 23:30:37.985743 kubelet[2660]: E0706 23:30:37.985701 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:37.989805 containerd[1481]: time="2025-07-06T23:30:37.989745496Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:30:38.024021 containerd[1481]: time="2025-07-06T23:30:38.023944537Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3\"" Jul 6 23:30:38.025449 containerd[1481]: time="2025-07-06T23:30:38.024844482Z" level=info msg="StartContainer for \"36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3\"" Jul 6 23:30:38.075847 systemd[1]: Started cri-containerd-36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3.scope - libcontainer container 36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3. Jul 6 23:30:38.118759 systemd[1]: cri-containerd-36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3.scope: Deactivated successfully. Jul 6 23:30:38.123145 containerd[1481]: time="2025-07-06T23:30:38.122851546Z" level=info msg="StartContainer for \"36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3\" returns successfully" Jul 6 23:30:38.158176 containerd[1481]: time="2025-07-06T23:30:38.158082861Z" level=info msg="shim disconnected" id=36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3 namespace=k8s.io Jul 6 23:30:38.158176 containerd[1481]: time="2025-07-06T23:30:38.158166688Z" level=warning msg="cleaning up after shim disconnected" id=36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3 namespace=k8s.io Jul 6 23:30:38.158176 containerd[1481]: time="2025-07-06T23:30:38.158177219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:30:38.472310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36aa64da079ca56cba3c582dc0f002e676e097722d19c30b87f0f90621ae9dc3-rootfs.mount: Deactivated successfully. 
Jul 6 23:30:38.989359 kubelet[2660]: E0706 23:30:38.989306 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:38.991655 containerd[1481]: time="2025-07-06T23:30:38.991102062Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:30:39.044828 containerd[1481]: time="2025-07-06T23:30:39.044756362Z" level=info msg="CreateContainer within sandbox \"1b4c564a786c2d5414128b862e447985f6d3a3b51350b1b6b271b5084cf0a29c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915\"" Jul 6 23:30:39.045386 containerd[1481]: time="2025-07-06T23:30:39.045353918Z" level=info msg="StartContainer for \"c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915\"" Jul 6 23:30:39.078728 systemd[1]: Started cri-containerd-c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915.scope - libcontainer container c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915. Jul 6 23:30:39.113825 containerd[1481]: time="2025-07-06T23:30:39.113775999Z" level=info msg="StartContainer for \"c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915\" returns successfully" Jul 6 23:30:39.596447 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 6 23:30:39.993864 kubelet[2660]: E0706 23:30:39.993823 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:41.581982 kubelet[2660]: E0706 23:30:41.581930 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:42.929551 systemd-networkd[1404]: lxc_health: Link UP Jul 6 23:30:42.937814 systemd-networkd[1404]: lxc_health: Gained carrier Jul 6 23:30:43.582058 kubelet[2660]: E0706 23:30:43.582001 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:43.610736 kubelet[2660]: I0706 23:30:43.610640 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7vfts" podStartSLOduration=8.610618747 podStartE2EDuration="8.610618747s" podCreationTimestamp="2025-07-06 23:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:30:40.072966266 +0000 UTC m=+128.958237679" watchObservedRunningTime="2025-07-06 23:30:43.610618747 +0000 UTC m=+132.495890160" Jul 6 23:30:44.004606 kubelet[2660]: E0706 23:30:44.004559 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:44.043605 systemd-networkd[1404]: lxc_health: Gained IPv6LL Jul 6 23:30:44.100441 systemd[1]: run-containerd-runc-k8s.io-c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915-runc.UPbenk.mount: Deactivated successfully. 
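The pod_startup_latency_tracker entry above is straightforward arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps suggest no image pull was needed. Reproducing the figure from the two timestamps in the log:

```go
package main

import (
	"fmt"
	"time"
)

// podStartSLOduration = observedRunningTime - podCreationTimestamp; both
// timestamps below are copied verbatim from the log entry.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-07-06 23:30:35 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-06 23:30:43.610618747 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 8.610618747s
}
```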
Jul 6 23:30:45.006409 kubelet[2660]: E0706 23:30:45.006350 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:46.216871 kubelet[2660]: E0706 23:30:46.216459 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:30:48.334075 systemd[1]: run-containerd-runc-k8s.io-c67bf794ea0d9de1cd531f03e18a506756f7b3a83a8a251632c92558ed4b8915-runc.ZqIdw5.mount: Deactivated successfully. Jul 6 23:30:52.602081 sshd[4588]: Connection closed by 10.0.0.1 port 35806 Jul 6 23:30:52.603360 sshd-session[4581]: pam_unix(sshd:session): session closed for user core Jul 6 23:30:52.609115 systemd[1]: sshd@33-10.0.0.54:22-10.0.0.1:35806.service: Deactivated successfully. Jul 6 23:30:52.611712 systemd[1]: session-34.scope: Deactivated successfully. Jul 6 23:30:52.612502 systemd-logind[1464]: Session 34 logged out. Waiting for processes to exit. Jul 6 23:30:52.613721 systemd-logind[1464]: Removed session 34.
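One last decoding aid: the per-connection unit names used throughout (for example sshd@33-10.0.0.54:22-10.0.0.1:35806.service) come from socket-activated sshd, and the instance string appears to follow <counter>-<local addr:port>-<peer addr:port>; the split below assumes that layout:

```go
package main

import (
	"fmt"
	"strings"
)

// splitInstance pulls apart a per-connection sshd unit instance as seen in
// the log, under the assumed layout <counter>-<local>-<peer>.
func splitInstance(inst string) (counter, local, peer string) {
	parts := strings.SplitN(inst, "-", 3)
	if len(parts) != 3 {
		return "", "", ""
	}
	return parts[0], parts[1], parts[2]
}

func main() {
	c, l, p := splitInstance("33-10.0.0.54:22-10.0.0.1:35806")
	fmt.Println(c, l, p) // 33 10.0.0.54:22 10.0.0.1:35806
}
```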