Sep 8 23:54:21.965526 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:08:00 -00 2025 Sep 8 23:54:21.965548 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:54:21.965560 kernel: BIOS-provided physical RAM map: Sep 8 23:54:21.965567 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 8 23:54:21.965573 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 8 23:54:21.965580 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 8 23:54:21.965588 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 8 23:54:21.965595 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 8 23:54:21.965601 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 8 23:54:21.965608 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 8 23:54:21.965615 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 8 23:54:21.965624 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 8 23:54:21.965634 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 8 23:54:21.965641 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 8 23:54:21.965652 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 8 23:54:21.965660 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 8 23:54:21.965669 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 8 23:54:21.965677 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 8 23:54:21.965684 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 8 23:54:21.965691 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 8 23:54:21.965698 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 8 23:54:21.965705 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 8 23:54:21.965712 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 8 23:54:21.965720 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:54:21.965727 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 8 23:54:21.965734 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:54:21.965741 kernel: NX (Execute Disable) protection: active Sep 8 23:54:21.965751 kernel: APIC: Static calls initialized Sep 8 23:54:21.965758 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 8 23:54:21.965765 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 8 23:54:21.965772 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 8 23:54:21.965780 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 8 23:54:21.965786 kernel: extended physical RAM map: Sep 8 23:54:21.965794 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 8 23:54:21.965801 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Sep 8 23:54:21.965808 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 8 23:54:21.965815 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 8 23:54:21.965822 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 8 23:54:21.965830 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 8 23:54:21.965840 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 8 23:54:21.965851 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Sep 8 23:54:21.965858 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Sep 8 23:54:21.965866 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Sep 8 23:54:21.965873 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Sep 8 23:54:21.965881 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Sep 8 23:54:21.965893 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 8 23:54:21.965901 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 8 23:54:21.965908 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 8 23:54:21.965916 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 8 23:54:21.965923 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 8 23:54:21.965931 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 8 23:54:21.965938 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 8 23:54:21.965946 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 8 23:54:21.965953 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 8 23:54:21.965963 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 8 23:54:21.965970 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 8 23:54:21.965978 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 8 23:54:21.965985 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 8 23:54:21.966002 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 8 23:54:21.966010 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 8 23:54:21.966017 kernel: efi: EFI v2.7 by EDK II Sep 8 23:54:21.966025 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Sep 8 23:54:21.966033 kernel: random: crng init done Sep 8 23:54:21.966040 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 8 23:54:21.966048 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 8 23:54:21.966058 kernel: secureboot: Secure boot disabled Sep 8 23:54:21.966069 kernel: SMBIOS 2.8 present. 
Sep 8 23:54:21.966076 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 8 23:54:21.966084 kernel: Hypervisor detected: KVM Sep 8 23:54:21.966091 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 8 23:54:21.966099 kernel: kvm-clock: using sched offset of 3478040169 cycles Sep 8 23:54:21.966107 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 8 23:54:21.966115 kernel: tsc: Detected 2794.748 MHz processor Sep 8 23:54:21.966123 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 8 23:54:21.966131 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 8 23:54:21.966138 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 8 23:54:21.966148 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 8 23:54:21.966156 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 8 23:54:21.966164 kernel: Using GB pages for direct mapping Sep 8 23:54:21.966172 kernel: ACPI: Early table checksum verification disabled Sep 8 23:54:21.966179 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 8 23:54:21.966187 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 8 23:54:21.966208 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:54:21.966216 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:54:21.966223 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 8 23:54:21.966234 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:54:21.966242 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:54:21.966249 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:54:21.966257 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:54:21.966265 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 8 23:54:21.966273 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 8 23:54:21.966280 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 8 23:54:21.966288 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 8 23:54:21.966298 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 8 23:54:21.966306 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 8 23:54:21.966313 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 8 23:54:21.966321 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 8 23:54:21.966328 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 8 23:54:21.966336 kernel: No NUMA configuration found Sep 8 23:54:21.966343 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 8 23:54:21.966351 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Sep 8 23:54:21.966359 kernel: Zone ranges: Sep 8 23:54:21.966366 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 8 23:54:21.966377 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 8 23:54:21.966384 kernel: Normal empty Sep 8 23:54:21.966395 kernel: Movable zone start for each node Sep 8 23:54:21.966403 kernel: Early memory node ranges Sep 8 23:54:21.966410 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 8 23:54:21.966418 kernel: node 
0: [mem 0x0000000000100000-0x00000000007fffff] Sep 8 23:54:21.966425 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 8 23:54:21.966433 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 8 23:54:21.966441 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 8 23:54:21.966451 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 8 23:54:21.966458 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Sep 8 23:54:21.966466 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Sep 8 23:54:21.966474 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 8 23:54:21.966481 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:54:21.966489 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 8 23:54:21.966505 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 8 23:54:21.966515 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 8 23:54:21.966523 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 8 23:54:21.966531 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 8 23:54:21.966539 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 8 23:54:21.966549 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 8 23:54:21.966560 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 8 23:54:21.966568 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 8 23:54:21.966576 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 8 23:54:21.966584 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 8 23:54:21.966592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 8 23:54:21.966603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 8 23:54:21.966611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 8 23:54:21.966619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 8 23:54:21.966626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 8 23:54:21.966635 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 8 23:54:21.966643 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 8 23:54:21.966651 kernel: TSC deadline timer available Sep 8 23:54:21.966659 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 8 23:54:21.966667 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 8 23:54:21.966677 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 8 23:54:21.966685 kernel: kvm-guest: setup PV sched yield Sep 8 23:54:21.966693 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 8 23:54:21.966701 kernel: Booting paravirtualized kernel on KVM Sep 8 23:54:21.966709 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 8 23:54:21.966717 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 8 23:54:21.966725 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 8 23:54:21.966733 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 8 23:54:21.966741 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 8 23:54:21.966751 kernel: kvm-guest: PV spinlocks enabled Sep 8 23:54:21.966759 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 8 23:54:21.966768 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:54:21.966777 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 8 23:54:21.966785 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 8 23:54:21.966796 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 8 23:54:21.966804 kernel: Fallback order for Node 0: 0 Sep 8 23:54:21.966812 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Sep 8 23:54:21.966822 kernel: Policy zone: DMA32 Sep 8 23:54:21.966830 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 8 23:54:21.966838 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43504K init, 1572K bss, 177824K reserved, 0K cma-reserved) Sep 8 23:54:21.966847 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 8 23:54:21.966854 kernel: ftrace: allocating 37943 entries in 149 pages Sep 8 23:54:21.966863 kernel: ftrace: allocated 149 pages with 4 groups Sep 8 23:54:21.966871 kernel: Dynamic Preempt: voluntary Sep 8 23:54:21.966879 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 8 23:54:21.966891 kernel: rcu: RCU event tracing is enabled. Sep 8 23:54:21.966902 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 8 23:54:21.966910 kernel: Trampoline variant of Tasks RCU enabled. Sep 8 23:54:21.966918 kernel: Rude variant of Tasks RCU enabled. Sep 8 23:54:21.966926 kernel: Tracing variant of Tasks RCU enabled. Sep 8 23:54:21.966934 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 8 23:54:21.966942 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 8 23:54:21.966950 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 8 23:54:21.966959 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 8 23:54:21.966966 kernel: Console: colour dummy device 80x25 Sep 8 23:54:21.966977 kernel: printk: console [ttyS0] enabled Sep 8 23:54:21.966985 kernel: ACPI: Core revision 20230628 Sep 8 23:54:21.966993 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 8 23:54:21.967008 kernel: APIC: Switch to symmetric I/O mode setup Sep 8 23:54:21.967016 kernel: x2apic enabled Sep 8 23:54:21.967024 kernel: APIC: Switched APIC routing to: physical x2apic Sep 8 23:54:21.967035 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 8 23:54:21.967044 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 8 23:54:21.967052 kernel: kvm-guest: setup PV IPIs Sep 8 23:54:21.967062 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 8 23:54:21.967070 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 8 23:54:21.967078 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 8 23:54:21.967086 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 8 23:54:21.967094 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 8 23:54:21.967102 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 8 23:54:21.967110 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 8 23:54:21.967118 kernel: Spectre V2 : Mitigation: Retpolines Sep 8 23:54:21.967126 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 8 23:54:21.967137 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 8 23:54:21.967144 kernel: active return thunk: retbleed_return_thunk Sep 8 23:54:21.967152 kernel: RETBleed: Mitigation: untrained return thunk Sep 8 23:54:21.967160 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 8 23:54:21.967168 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 8 23:54:21.967177 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 8 23:54:21.967185 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 8 23:54:21.967205 kernel: active return thunk: srso_return_thunk Sep 8 23:54:21.967214 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 8 23:54:21.967224 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 8 23:54:21.967232 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 8 23:54:21.967240 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 8 23:54:21.967248 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 8 23:54:21.967256 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 8 23:54:21.967264 kernel: Freeing SMP alternatives memory: 32K Sep 8 23:54:21.967272 kernel: pid_max: default: 32768 minimum: 301 Sep 8 23:54:21.967280 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 8 23:54:21.967291 kernel: landlock: Up and running. Sep 8 23:54:21.967299 kernel: SELinux: Initializing. Sep 8 23:54:21.967307 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:54:21.967315 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:54:21.967323 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 8 23:54:21.967331 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:54:21.967339 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:54:21.967347 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:54:21.967355 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 8 23:54:21.967365 kernel: ... version: 0 Sep 8 23:54:21.967373 kernel: ... bit width: 48 Sep 8 23:54:21.967381 kernel: ... generic registers: 6 Sep 8 23:54:21.967389 kernel: ... value mask: 0000ffffffffffff Sep 8 23:54:21.967397 kernel: ... max period: 00007fffffffffff Sep 8 23:54:21.967405 kernel: ... fixed-purpose events: 0 Sep 8 23:54:21.967413 kernel: ... 
event mask: 000000000000003f Sep 8 23:54:21.967421 kernel: signal: max sigframe size: 1776 Sep 8 23:54:21.967429 kernel: rcu: Hierarchical SRCU implementation. Sep 8 23:54:21.967439 kernel: rcu: Max phase no-delay instances is 400. Sep 8 23:54:21.967447 kernel: smp: Bringing up secondary CPUs ... Sep 8 23:54:21.967455 kernel: smpboot: x86: Booting SMP configuration: Sep 8 23:54:21.967463 kernel: .... node #0, CPUs: #1 #2 #3 Sep 8 23:54:21.967471 kernel: smp: Brought up 1 node, 4 CPUs Sep 8 23:54:21.967479 kernel: smpboot: Max logical packages: 1 Sep 8 23:54:21.967487 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 8 23:54:21.967495 kernel: devtmpfs: initialized Sep 8 23:54:21.967502 kernel: x86/mm: Memory block size: 128MB Sep 8 23:54:21.967510 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 8 23:54:21.967521 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 8 23:54:21.967529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 8 23:54:21.967537 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 8 23:54:21.967545 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Sep 8 23:54:21.967553 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 8 23:54:21.967561 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 8 23:54:21.967569 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 8 23:54:21.967577 kernel: pinctrl core: initialized pinctrl subsystem Sep 8 23:54:21.967588 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 8 23:54:21.967596 kernel: audit: initializing netlink subsys (disabled) Sep 8 23:54:21.967604 kernel: audit: type=2000 audit(1757375660.932:1): state=initialized audit_enabled=0 res=1 Sep 8 23:54:21.967612 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 8 23:54:21.967620 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 8 23:54:21.967628 kernel: cpuidle: using governor menu Sep 8 23:54:21.967636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 8 23:54:21.967644 kernel: dca service started, version 1.12.1 Sep 8 23:54:21.967652 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 8 23:54:21.967662 kernel: PCI: Using configuration type 1 for base access Sep 8 23:54:21.967670 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 8 23:54:21.967678 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 8 23:54:21.967687 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 8 23:54:21.967695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 8 23:54:21.967703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 8 23:54:21.967711 kernel: ACPI: Added _OSI(Module Device) Sep 8 23:54:21.967718 kernel: ACPI: Added _OSI(Processor Device) Sep 8 23:54:21.967726 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 8 23:54:21.967737 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 8 23:54:21.967745 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 8 23:54:21.967753 kernel: ACPI: Interpreter enabled Sep 8 23:54:21.967761 kernel: ACPI: PM: (supports S0 S3 S5) Sep 8 23:54:21.967771 kernel: ACPI: Using IOAPIC for interrupt routing Sep 8 23:54:21.967780 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 8 23:54:21.967789 kernel: PCI: Using E820 reservations for host bridge windows Sep 8 23:54:21.967798 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 8 23:54:21.967806 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 8 23:54:21.968031 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 8 23:54:21.968184 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 8 23:54:21.968333 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 8 23:54:21.968344 kernel: PCI host bridge to bus 0000:00 Sep 8 23:54:21.968514 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 8 23:54:21.968638 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 8 23:54:21.968759 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 8 23:54:21.968895 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 8 23:54:21.969036 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 8 23:54:21.969162 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 8 23:54:21.969299 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 8 23:54:21.969465 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 8 23:54:21.969618 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 8 23:54:21.969758 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 8 23:54:21.969891 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 8 23:54:21.970033 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 8 23:54:21.970164 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 8 23:54:21.970317 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 8 23:54:21.970480 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 8 23:54:21.970617 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 8 23:54:21.970755 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 8 23:54:21.970887 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Sep 8 23:54:21.971043 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 8 23:54:21.971187 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 8 23:54:21.971341 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 8 23:54:21.971472 
kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Sep 8 23:54:21.971626 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 8 23:54:21.971770 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 8 23:54:21.971903 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 8 23:54:21.972045 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 8 23:54:21.972179 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 8 23:54:21.972343 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 8 23:54:21.972477 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 8 23:54:21.972627 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 8 23:54:21.972766 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 8 23:54:21.972913 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 8 23:54:21.973118 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 8 23:54:21.973266 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 8 23:54:21.973278 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 8 23:54:21.973286 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 8 23:54:21.973294 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 8 23:54:21.973307 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 8 23:54:21.973315 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 8 23:54:21.973323 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 8 23:54:21.973331 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 8 23:54:21.973340 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 8 23:54:21.973348 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 8 23:54:21.973356 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 8 23:54:21.973364 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 8 23:54:21.973372 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 8 23:54:21.973382 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 8 23:54:21.973390 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 8 23:54:21.973398 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 8 23:54:21.973406 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 8 23:54:21.973414 kernel: iommu: Default domain type: Translated Sep 8 23:54:21.973422 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 8 23:54:21.973429 kernel: efivars: Registered efivars operations Sep 8 23:54:21.973437 kernel: PCI: Using ACPI for IRQ routing Sep 8 23:54:21.973446 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 8 23:54:21.973456 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 8 23:54:21.973464 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 8 23:54:21.973472 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Sep 8 23:54:21.973480 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Sep 8 23:54:21.973488 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 8 23:54:21.973495 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 8 23:54:21.973503 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Sep 8 23:54:21.973511 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 8 23:54:21.973649 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 8 
23:54:21.973780 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 8 23:54:21.973910 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 8 23:54:21.973921 kernel: vgaarb: loaded Sep 8 23:54:21.973929 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 8 23:54:21.973937 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 8 23:54:21.973945 kernel: clocksource: Switched to clocksource kvm-clock Sep 8 23:54:21.973953 kernel: VFS: Disk quotas dquot_6.6.0 Sep 8 23:54:21.973962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 8 23:54:21.973973 kernel: pnp: PnP ACPI init Sep 8 23:54:21.974146 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 8 23:54:21.974159 kernel: pnp: PnP ACPI: found 6 devices Sep 8 23:54:21.974168 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 8 23:54:21.974176 kernel: NET: Registered PF_INET protocol family Sep 8 23:54:21.974221 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 8 23:54:21.974233 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 8 23:54:21.974241 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 8 23:54:21.974252 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 8 23:54:21.974260 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 8 23:54:21.974269 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 8 23:54:21.974277 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:54:21.974286 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:54:21.974297 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 8 23:54:21.974306 kernel: NET: Registered PF_XDP protocol family Sep 8 23:54:21.974451 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 8 23:54:21.974600 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 8 23:54:21.974731 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 8 23:54:21.974854 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 8 23:54:21.974985 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 8 23:54:21.975119 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 8 23:54:21.975255 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 8 23:54:21.975381 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 8 23:54:21.975393 kernel: PCI: CLS 0 bytes, default 64 Sep 8 23:54:21.975406 kernel: Initialise system trusted keyrings Sep 8 23:54:21.975414 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 8 23:54:21.975428 kernel: Key type asymmetric registered Sep 8 23:54:21.975438 kernel: Asymmetric key parser 'x509' registered Sep 8 23:54:21.975446 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 8 23:54:21.975454 kernel: io scheduler mq-deadline registered Sep 8 23:54:21.975463 kernel: io scheduler kyber registered Sep 8 23:54:21.975471 kernel: io scheduler bfq registered Sep 8 23:54:21.975479 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 8 23:54:21.975488 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 8 23:54:21.975499 kernel: ACPI: \_SB_.GSIH: Enabled at 
IRQ 23 Sep 8 23:54:21.975510 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 8 23:54:21.975519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 8 23:54:21.975527 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 8 23:54:21.975536 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 8 23:54:21.975547 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 8 23:54:21.975555 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 8 23:54:21.975727 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 8 23:54:21.975741 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 8 23:54:21.975884 kernel: rtc_cmos 00:04: registered as rtc0 Sep 8 23:54:21.976023 kernel: rtc_cmos 00:04: setting system clock to 2025-09-08T23:54:21 UTC (1757375661) Sep 8 23:54:21.976149 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 8 23:54:21.976161 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 8 23:54:21.976173 kernel: efifb: probing for efifb Sep 8 23:54:21.976182 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 8 23:54:21.976190 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 8 23:54:21.976211 kernel: efifb: scrolling: redraw Sep 8 23:54:21.976220 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 8 23:54:21.976228 kernel: Console: switching to colour frame buffer device 160x50 Sep 8 23:54:21.976236 kernel: fb0: EFI VGA frame buffer device Sep 8 23:54:21.976245 kernel: pstore: Using crash dump compression: deflate Sep 8 23:54:21.976253 kernel: pstore: Registered efi_pstore as persistent store backend Sep 8 23:54:21.976264 kernel: NET: Registered PF_INET6 protocol family Sep 8 23:54:21.976272 kernel: Segment Routing with IPv6 Sep 8 23:54:21.976281 kernel: In-situ OAM (IOAM) with IPv6 Sep 8 23:54:21.976289 kernel: NET: Registered PF_PACKET protocol family Sep 8 23:54:21.976297 kernel: Key type dns_resolver registered Sep 8 23:54:21.976305 kernel: IPI shorthand broadcast: enabled Sep 8 23:54:21.976313 kernel: sched_clock: Marking stable (1056003426, 154240505)->(1237185586, -26941655) Sep 8 23:54:21.976322 kernel: registered taskstats version 1 Sep 8 23:54:21.976330 kernel: Loading compiled-in X.509 certificates Sep 8 23:54:21.976341 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: c16a276a56169aed770943c7e14b6e7e5f4f7133' Sep 8 23:54:21.976349 kernel: Key type .fscrypt registered Sep 8 23:54:21.976357 kernel: Key type fscrypt-provisioning registered Sep 8 23:54:21.976365 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 8 23:54:21.976374 kernel: ima: Allocated hash algorithm: sha1 Sep 8 23:54:21.976382 kernel: ima: No architecture policies found Sep 8 23:54:21.976391 kernel: clk: Disabling unused clocks Sep 8 23:54:21.976402 kernel: Freeing unused kernel image (initmem) memory: 43504K Sep 8 23:54:21.976412 kernel: Write protecting the kernel read-only data: 38912k Sep 8 23:54:21.976425 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 8 23:54:21.976435 kernel: Run /init as init process Sep 8 23:54:21.976445 kernel: with arguments: Sep 8 23:54:21.976456 kernel: /init Sep 8 23:54:21.976464 kernel: with environment: Sep 8 23:54:21.976472 kernel: HOME=/ Sep 8 23:54:21.976480 kernel: TERM=linux Sep 8 23:54:21.976488 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 8 23:54:21.976498 systemd[1]: Successfully made /usr/ read-only. 
Sep 8 23:54:21.976513 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:54:21.976522 systemd[1]: Detected virtualization kvm. Sep 8 23:54:21.976531 systemd[1]: Detected architecture x86-64. Sep 8 23:54:21.976539 systemd[1]: Running in initrd. Sep 8 23:54:21.976548 systemd[1]: No hostname configured, using default hostname. Sep 8 23:54:21.976557 systemd[1]: Hostname set to <localhost>. Sep 8 23:54:21.976566 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:54:21.976578 systemd[1]: Queued start job for default target initrd.target. Sep 8 23:54:21.976595 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:54:21.976610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:54:21.976623 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 8 23:54:21.976636 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:54:21.976648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 8 23:54:21.976663 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 8 23:54:21.976684 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 8 23:54:21.976697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 8 23:54:21.976709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:54:21.976720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:54:21.976732 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:54:21.976743 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:54:21.976754 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:54:21.976766 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:54:21.976782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:54:21.976793 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:54:21.976802 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 8 23:54:21.976811 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 8 23:54:21.976819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:54:21.976828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:54:21.976837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:54:21.976846 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:54:21.976854 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 8 23:54:21.976866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:54:21.976875 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 8 23:54:21.976883 systemd[1]: Starting systemd-fsck-usr.service... 
Sep 8 23:54:21.976892 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:54:21.976901 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:54:21.976910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:54:21.976918 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 8 23:54:21.976927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:54:21.976939 systemd[1]: Finished systemd-fsck-usr.service. Sep 8 23:54:21.976980 systemd-journald[194]: Collecting audit messages is disabled. Sep 8 23:54:21.977014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:54:21.977023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:21.977032 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:54:21.977041 systemd-journald[194]: Journal started Sep 8 23:54:21.977065 systemd-journald[194]: Runtime Journal (/run/log/journal/263f4c6abf9342a5bec699044632d68f) is 6M, max 48.2M, 42.2M free. Sep 8 23:54:21.961270 systemd-modules-load[195]: Inserted module 'overlay' Sep 8 23:54:21.980264 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:54:21.985188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:54:21.989165 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:54:21.991921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:54:21.999258 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 8 23:54:22.002170 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 8 23:54:22.003656 kernel: Bridge firewalling registered Sep 8 23:54:22.003961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:54:22.009668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:54:22.013455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:54:22.015042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:54:22.026348 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:54:22.036363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:54:22.039991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:54:22.043711 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 8 23:54:22.061981 dracut-cmdline[232]: dracut-dracut-053 Sep 8 23:54:22.066298 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=614c4ef85422d1b24559f161a4ad89cb626bb862dd1c761ed2d77c8a0665a1ae Sep 8 23:54:22.078237 systemd-resolved[228]: Positive Trust Anchors: Sep 8 23:54:22.078754 systemd-resolved[228]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:54:22.078787 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:54:22.081531 systemd-resolved[228]: Defaulting to hostname 'linux'. Sep 8 23:54:22.082739 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:54:22.087499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:54:22.173240 kernel: SCSI subsystem initialized Sep 8 23:54:22.183235 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:54:22.214222 kernel: iscsi: registered transport (tcp) Sep 8 23:54:22.237247 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:54:22.237311 kernel: QLogic iSCSI HBA Driver Sep 8 23:54:22.302728 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:54:22.320604 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:54:22.369588 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:54:22.369709 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:54:22.370912 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 8 23:54:22.414246 kernel: raid6: avx2x4 gen() 25085 MB/s Sep 8 23:54:22.431227 kernel: raid6: avx2x2 gen() 27979 MB/s Sep 8 23:54:22.448379 kernel: raid6: avx2x1 gen() 23583 MB/s Sep 8 23:54:22.448453 kernel: raid6: using algorithm avx2x2 gen() 27979 MB/s Sep 8 23:54:22.478249 kernel: raid6: .... xor() 15941 MB/s, rmw enabled Sep 8 23:54:22.478330 kernel: raid6: using avx2x2 recovery algorithm Sep 8 23:54:22.520256 kernel: xor: automatically using best checksumming function avx Sep 8 23:54:22.678255 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:54:22.694238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:54:22.705383 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:54:22.725540 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 8 23:54:22.733853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:54:22.742381 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:54:22.759329 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Sep 8 23:54:22.802377 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:54:22.820841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:54:22.919902 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:54:22.931338 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:54:22.946604 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:54:22.950079 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 8 23:54:22.952797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:54:22.955927 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:54:22.967507 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:54:22.974904 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 8 23:54:22.978611 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:54:22.979178 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:54:22.979226 kernel: GPT:9289727 != 19775487 Sep 8 23:54:22.980481 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:54:22.980539 kernel: GPT:9289727 != 19775487 Sep 8 23:54:22.980564 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:54:22.980693 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:54:22.983931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:54:22.989221 kernel: cryptd: max_cpu_qlen set to 1000 Sep 8 23:54:23.002242 kernel: libata version 3.00 loaded. Sep 8 23:54:23.007239 kernel: AVX2 version of gcm_enc/dec engaged. Sep 8 23:54:23.007324 kernel: AES CTR mode by8 optimization enabled Sep 8 23:54:23.014236 kernel: ahci 0000:00:1f.2: version 3.0 Sep 8 23:54:23.014583 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 8 23:54:23.014602 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 8 23:54:23.016366 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 8 23:54:23.021222 kernel: scsi host0: ahci Sep 8 23:54:23.021494 kernel: scsi host1: ahci Sep 8 23:54:23.023214 kernel: scsi host2: ahci Sep 8 23:54:23.025219 kernel: scsi host3: ahci Sep 8 23:54:23.027470 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:54:23.027616 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:54:23.034447 kernel: scsi host4: ahci Sep 8 23:54:23.034670 kernel: scsi host5: ahci Sep 8 23:54:23.034831 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 8 23:54:23.034843 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 8 23:54:23.034854 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 8 23:54:23.034864 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 8 23:54:23.034874 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 8 23:54:23.034885 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 8 23:54:23.040494 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:54:23.044712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:54:23.053287 kernel: BTRFS: device fsid 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465) Sep 8 23:54:23.053308 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (459) Sep 8 23:54:23.044861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:23.050407 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:54:23.057340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:54:23.060010 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 8 23:54:23.073995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:23.090013 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:54:23.100172 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:54:23.116498 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:54:23.124814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:54:23.126151 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:54:23.142380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:54:23.144482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:54:23.167308 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:54:23.333593 disk-uuid[557]: Primary Header is updated. Sep 8 23:54:23.333593 disk-uuid[557]: Secondary Entries is updated. Sep 8 23:54:23.333593 disk-uuid[557]: Secondary Header is updated. Sep 8 23:54:23.337538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:54:23.343249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:54:23.347215 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 8 23:54:23.347242 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 8 23:54:23.347254 kernel: ata3.00: applying bridge limits Sep 8 23:54:23.347265 kernel: ata3.00: configured for UDMA/100 Sep 8 23:54:23.348222 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 8 23:54:23.351066 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 8 23:54:23.352578 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 8 23:54:23.357240 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 8 23:54:23.357308 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 8 23:54:23.358270 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 8 23:54:23.409247 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 8 23:54:23.409581 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 8 23:54:23.427232 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 8 23:54:24.345250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:54:24.346252 disk-uuid[566]: The operation has completed successfully. Sep 8 23:54:24.389462 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:54:24.389588 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:54:24.430378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:54:24.438183 sh[593]: Success Sep 8 23:54:24.449223 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 8 23:54:24.488631 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:54:24.506511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:54:24.509953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 8 23:54:24.522235 kernel: BTRFS info (device dm-0): first mount of filesystem 49c9ae6f-f48b-4b7d-8773-9ddfd8ce7dbf Sep 8 23:54:24.522266 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:54:24.524018 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 8 23:54:24.524041 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:54:24.524770 kernel: BTRFS info (device dm-0): using free space tree Sep 8 23:54:24.530984 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:54:24.532380 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:54:24.542377 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:54:24.543610 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:54:24.562920 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:54:24.562989 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:54:24.563001 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:54:24.566211 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:54:24.572282 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:54:24.664343 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:54:24.700533 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:54:24.729125 systemd-networkd[769]: lo: Link UP Sep 8 23:54:24.729137 systemd-networkd[769]: lo: Gained carrier Sep 8 23:54:24.730996 systemd-networkd[769]: Enumeration completed Sep 8 23:54:24.731140 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:54:24.731394 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:24.731399 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:54:24.735693 systemd-networkd[769]: eth0: Link UP Sep 8 23:54:24.735697 systemd-networkd[769]: eth0: Gained carrier Sep 8 23:54:24.735704 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:24.737524 systemd[1]: Reached target network.target - Network. Sep 8 23:54:24.783259 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:54:24.895525 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:54:24.907451 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 8 23:54:24.962749 ignition[774]: Ignition 2.20.0 Sep 8 23:54:24.962765 ignition[774]: Stage: fetch-offline Sep 8 23:54:24.962820 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:24.962831 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:24.962947 ignition[774]: parsed url from cmdline: "" Sep 8 23:54:24.962951 ignition[774]: no config URL provided Sep 8 23:54:24.962957 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:54:24.962968 ignition[774]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:54:24.963000 ignition[774]: op(1): [started] loading QEMU firmware config module Sep 8 23:54:24.963005 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:54:24.970339 ignition[774]: op(1): [finished] loading QEMU firmware config module Sep 8 23:54:24.972902 ignition[774]: parsing config with SHA512: 9d68d6fd01b0c670c68a5f5538a4feca1ac04dc4e3e09f4d852a6d3246bb4e948bb89a2dbab78feb7a9a7b5408ae8d37f790263d08861e9b22cd237f28c9743e Sep 8 23:54:24.975657 unknown[774]: fetched base config from "system" Sep 8 23:54:24.975671 unknown[774]: fetched user config from "qemu" Sep 8 23:54:24.976001 ignition[774]: fetch-offline: fetch-offline passed Sep 8 23:54:24.977092 systemd-resolved[228]: Detected conflict on linux IN A 10.0.0.69 Sep 8 23:54:24.976106 ignition[774]: Ignition finished successfully Sep 8 23:54:24.977104 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Sep 8 23:54:24.978809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:54:24.981027 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:54:24.987378 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:54:25.003492 ignition[784]: Ignition 2.20.0 Sep 8 23:54:25.003505 ignition[784]: Stage: kargs Sep 8 23:54:25.003664 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:25.003676 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:25.004301 ignition[784]: kargs: kargs passed Sep 8 23:54:25.004349 ignition[784]: Ignition finished successfully Sep 8 23:54:25.007798 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:54:25.021495 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 8 23:54:25.032765 ignition[793]: Ignition 2.20.0 Sep 8 23:54:25.032777 ignition[793]: Stage: disks Sep 8 23:54:25.032957 ignition[793]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:25.032969 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:25.033620 ignition[793]: disks: disks passed Sep 8 23:54:25.033665 ignition[793]: Ignition finished successfully Sep 8 23:54:25.058250 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:54:25.058798 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:54:25.060827 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:54:25.061227 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:54:25.065512 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:54:25.067669 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:54:25.081360 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
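The fetch-offline stage logs a SHA-512 fingerprint of the raw config before parsing it ("parsing config with SHA512: ..."). A minimal sketch of that fingerprint-then-parse step; the file path is hypothetical, since on QEMU the real config arrives via the qemu_fw_cfg firmware interface rather than a regular file:

    #!/usr/bin/env python3
    # Hash the raw config bytes with SHA-512, then parse them as JSON,
    # mirroring the order of operations Ignition logs above.
    import hashlib
    import json
    import sys

    path = sys.argv[1] if len(sys.argv) > 1 else "/run/ignition.json"  # hypothetical path
    try:
        raw = open(path, "rb").read()
    except FileNotFoundError:
        sys.exit(f"no config at {path} (the 'no config URL provided' case)")

    print("SHA512:", hashlib.sha512(raw).hexdigest())
    config = json.loads(raw)
    print("ignition version:", config.get("ignition", {}).get("version"))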
Sep 8 23:54:25.124618 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 8 23:54:25.169050 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:54:25.177294 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:54:25.277322 kernel: EXT4-fs (vda9): mounted filesystem 4436772e-5166-41e3-9cb5-50bbb91cbcf6 r/w with ordered data mode. Quota mode: none. Sep 8 23:54:25.277500 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:54:25.278525 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:54:25.291277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:54:25.292497 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:54:25.293774 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:54:25.293820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:54:25.293847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:54:25.301395 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:54:25.309397 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:54:25.311515 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (812) Sep 8 23:54:25.314418 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:54:25.314454 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:54:25.314470 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:54:25.318246 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:54:25.321118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:54:25.344799 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:54:25.350869 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:54:25.355614 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:54:25.360659 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:54:25.452676 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 8 23:54:25.464283 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:54:25.465963 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:54:25.473228 kernel: BTRFS info (device vda6): last unmount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:54:25.495004 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:54:25.506517 ignition[927]: INFO : Ignition 2.20.0 Sep 8 23:54:25.506517 ignition[927]: INFO : Stage: mount Sep 8 23:54:25.508390 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:25.508390 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:25.511363 ignition[927]: INFO : mount: mount passed Sep 8 23:54:25.512193 ignition[927]: INFO : Ignition finished successfully Sep 8 23:54:25.515529 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:54:25.523534 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:54:25.526375 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
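The initrd-setup-root lines ("cut: /sysroot/etc/passwd: No such file or directory" and friends) reflect seeding minimal account databases into the freshly mounted /sysroot when the root filesystem does not yet provide them. A hedged illustration of that idea only, not Flatcar's actual logic; the target path and seeded entries are assumptions for the sketch:

    #!/usr/bin/env python3
    # Seed minimal /etc account files under a prepared root, skipping any
    # that already exist. Requires root to write under /sysroot.
    import os

    SYSROOT = "/sysroot"  # root filesystem prepared by the initrd
    SEED = {
        "etc/passwd": "root:x:0:0:root:/root:/bin/bash\n",
        "etc/group":  "root:x:0:\n",
    }

    for rel, content in SEED.items():
        dst = os.path.join(SYSROOT, rel)
        if os.path.exists(dst):
            continue  # the real system already ships this file
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        with open(dst, "w") as f:
            f.write(content)
        print("seeded", dst)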
Sep 8 23:54:25.530468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:54:25.541218 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (941) Sep 8 23:54:25.543264 kernel: BTRFS info (device vda6): first mount of filesystem b6f932a0-9de5-471f-a098-137127c01576 Sep 8 23:54:25.543288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 8 23:54:25.543299 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:54:25.546231 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:54:25.548170 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:54:25.578533 ignition[958]: INFO : Ignition 2.20.0 Sep 8 23:54:25.578533 ignition[958]: INFO : Stage: files Sep 8 23:54:25.580521 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:25.580521 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:25.580521 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:54:25.580521 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:54:25.580521 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:54:25.588348 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:54:25.589927 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:54:25.591783 unknown[958]: wrote ssh authorized keys file for user: core Sep 8 23:54:25.593058 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:54:25.593058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:54:25.593058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:54:25.598094 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:54:25.598094 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:54:25.598094 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:54:25.598094 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:54:25.598094 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:54:25.598094 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 8 23:54:25.923674 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 8 23:54:26.837129 systemd-networkd[769]: eth0: Gained IPv6LL Sep 8 23:54:28.107706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 8 23:54:28.107706 ignition[958]: INFO : files: op(7): [started] 
processing unit "coreos-metadata.service" Sep 8 23:54:28.112922 ignition[958]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:54:28.112922 ignition[958]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:54:28.112922 ignition[958]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 8 23:54:28.112922 ignition[958]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:54:28.131923 ignition[958]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:54:28.138505 ignition[958]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:54:28.140618 ignition[958]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:54:28.140618 ignition[958]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:54:28.140618 ignition[958]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:54:28.140618 ignition[958]: INFO : files: files passed Sep 8 23:54:28.140618 ignition[958]: INFO : Ignition finished successfully Sep 8 23:54:28.151928 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:54:28.170634 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:54:28.173253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:54:28.176674 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:54:28.176813 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:54:28.186405 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:54:28.190647 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:54:28.192666 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:54:28.192666 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:54:28.200043 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:54:28.202061 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:54:28.222505 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:54:28.270601 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:54:28.270813 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:54:28.275425 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:54:28.277698 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:54:28.280178 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:54:28.294665 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:54:28.319112 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
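The files stage above writes an SSH key for "core", a couple of files, the /etc/extensions/kubernetes.raw link, and the downloaded sysext image. For orientation, a sketch of an Ignition-style config (v3 schema, as published in the Ignition spec) that would request roughly those artifacts; the SSH key is a placeholder and this is not the config used in this boot:

    #!/usr/bin/env python3
    # Build and print an illustrative Ignition v3 config describing an SSH
    # key, a downloaded sysext image, and the symlink that activates it.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}}
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"}
            ],
        },
    }

    print(json.dumps(config, indent=2))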
Sep 8 23:54:28.334597 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:54:28.353512 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:54:28.356123 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:54:28.356670 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:54:28.357233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:54:28.357396 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:54:28.358712 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:54:28.359125 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:54:28.359700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:54:28.360097 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:54:28.360681 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:54:28.361094 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:54:28.361677 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:54:28.363647 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:54:28.364014 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:54:28.364582 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:54:28.365006 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:54:28.365209 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:54:28.366141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:54:28.366645 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:54:28.366960 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:54:28.367677 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:54:28.368080 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:54:28.419080 ignition[1013]: INFO : Ignition 2.20.0 Sep 8 23:54:28.419080 ignition[1013]: INFO : Stage: umount Sep 8 23:54:28.419080 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:28.419080 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:28.368254 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:54:28.425113 ignition[1013]: INFO : umount: umount passed Sep 8 23:54:28.425113 ignition[1013]: INFO : Ignition finished successfully Sep 8 23:54:28.368925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:54:28.369087 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:54:28.369712 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:54:28.370075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:54:28.370325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:54:28.370756 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:54:28.371100 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:54:28.371572 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:54:28.371704 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 8 23:54:28.372084 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:54:28.372218 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:54:28.372598 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:54:28.372779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:54:28.373160 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:54:28.373328 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:54:28.395649 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:54:28.399190 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:54:28.400611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:54:28.400886 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:54:28.403917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:54:28.404135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:54:28.411635 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:54:28.411802 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:54:28.425748 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:54:28.426664 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:54:28.428904 systemd[1]: Stopped target network.target - Network. Sep 8 23:54:28.430008 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:54:28.430114 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:54:28.432339 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:54:28.432408 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:54:28.437701 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:54:28.437810 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:54:28.440912 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:54:28.441005 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:54:28.443357 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:54:28.445497 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:54:28.454733 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:54:28.455836 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:54:28.461537 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:54:28.461954 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:54:28.462160 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:54:28.465974 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:54:28.467534 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:54:28.467618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:54:28.492616 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:54:28.494734 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:54:28.494878 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:54:28.498813 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 8 23:54:28.498942 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:54:28.501495 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:54:28.501593 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:54:28.506834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:54:28.506940 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:54:28.510250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:54:28.514395 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:54:28.514511 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:54:28.528767 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:54:28.528982 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:54:28.535638 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:54:28.535986 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:54:28.539611 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:54:28.539747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:54:28.541416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:54:28.541475 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:54:28.543997 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:54:28.544076 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:54:28.547089 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:54:28.547173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:54:28.549778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:54:28.549894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:54:28.558584 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:54:28.561093 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:54:28.561250 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:54:28.565854 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 8 23:54:28.565968 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:54:28.570917 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:54:28.571028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:54:28.576011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:54:28.576122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:28.594665 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:54:28.594777 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:54:28.595461 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:54:28.596100 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 8 23:54:29.177889 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:54:29.184836 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:54:29.186692 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:54:29.196239 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:54:29.205991 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:54:29.206150 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:54:29.229596 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:54:29.248538 systemd[1]: Switching root. Sep 8 23:54:29.282159 systemd-journald[194]: Journal stopped Sep 8 23:54:31.368329 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Sep 8 23:54:31.368399 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:54:31.368420 kernel: SELinux: policy capability open_perms=1 Sep 8 23:54:31.368432 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:54:31.368458 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:54:31.368486 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:54:31.368501 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:54:31.368524 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:54:31.368539 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:54:31.368554 kernel: audit: type=1403 audit(1757375670.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:54:31.368570 systemd[1]: Successfully loaded SELinux policy in 74.221ms. Sep 8 23:54:31.368595 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 29.911ms. Sep 8 23:54:31.368610 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:54:31.368623 systemd[1]: Detected virtualization kvm. Sep 8 23:54:31.368636 systemd[1]: Detected architecture x86-64. Sep 8 23:54:31.368655 systemd[1]: Detected first boot. Sep 8 23:54:31.368668 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:54:31.368681 zram_generator::config[1060]: No configuration found. Sep 8 23:54:31.368700 kernel: Guest personality initialized and is inactive Sep 8 23:54:31.368712 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 8 23:54:31.368729 kernel: Initialized host personality Sep 8 23:54:31.368741 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:54:31.368767 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:54:31.368781 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:54:31.368799 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:54:31.368812 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:54:31.368825 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:54:31.368838 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:54:31.368851 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:54:31.368864 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Sep 8 23:54:31.368877 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:54:31.368890 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:54:31.368905 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:54:31.368919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:54:31.368932 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:54:31.368949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:54:31.368967 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:54:31.368984 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:54:31.369000 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:54:31.369014 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:54:31.369031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:54:31.369055 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 8 23:54:31.369073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:54:31.369089 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:54:31.369104 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:54:31.369118 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:54:31.369131 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:54:31.369143 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:54:31.369156 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:54:31.369172 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:54:31.369184 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:54:31.369213 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:54:31.369227 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:54:31.369240 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:54:31.369252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:54:31.369265 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:54:31.369279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:54:31.369314 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:54:31.369337 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:54:31.369355 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:54:31.369371 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:54:31.369385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:31.369400 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:54:31.369412 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 8 23:54:31.369425 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:54:31.369438 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:54:31.369453 systemd[1]: Reached target machines.target - Containers. Sep 8 23:54:31.369474 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:54:31.369490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:31.369506 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:54:31.369522 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 8 23:54:31.369544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:31.369560 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:54:31.369576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:54:31.369592 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:54:31.369612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:31.369628 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:54:31.369644 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:54:31.369660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:54:31.369674 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:54:31.369687 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:54:31.369701 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:31.369713 kernel: fuse: init (API version 7.39) Sep 8 23:54:31.369730 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:54:31.369742 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:54:31.369766 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:54:31.369779 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:54:31.369792 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:54:31.369805 kernel: ACPI: bus type drm_connector registered Sep 8 23:54:31.369842 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:54:31.369874 kernel: loop: module loaded Sep 8 23:54:31.369887 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:54:31.369900 systemd[1]: Stopped verity-setup.service. Sep 8 23:54:31.369915 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:31.369933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:54:31.369951 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:54:31.369967 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 8 23:54:31.369990 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:54:31.370008 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:54:31.370025 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:54:31.370042 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:54:31.370062 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:54:31.370101 systemd-journald[1128]: Collecting audit messages is disabled. Sep 8 23:54:31.370124 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:54:31.370143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:31.370160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:54:31.370179 systemd-journald[1128]: Journal started Sep 8 23:54:31.370228 systemd-journald[1128]: Runtime Journal (/run/log/journal/263f4c6abf9342a5bec699044632d68f) is 6M, max 48.2M, 42.2M free. Sep 8 23:54:31.091984 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:54:31.107809 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:54:31.108465 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:54:31.374336 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:54:31.375542 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:54:31.375794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:54:31.377635 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:54:31.379148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:31.379403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:54:31.381042 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:54:31.381280 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:54:31.382925 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:31.383151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:31.384590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:54:31.386095 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:54:31.387803 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:54:31.389462 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:54:31.405730 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:54:31.413360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:54:31.415931 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:54:31.417158 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:54:31.417276 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:54:31.419597 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:54:31.422302 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:54:31.424637 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 8 23:54:31.425858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:31.430259 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:54:31.432592 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:54:31.433873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:54:31.436674 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:54:31.437789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:54:31.439368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:54:31.445459 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:54:31.449562 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:54:31.451964 systemd-journald[1128]: Time spent on flushing to /var/log/journal/263f4c6abf9342a5bec699044632d68f is 1.076562s for 1041 entries. Sep 8 23:54:31.451964 systemd-journald[1128]: System Journal (/var/log/journal/263f4c6abf9342a5bec699044632d68f) is 8M, max 195.6M, 187.6M free. Sep 8 23:54:33.218526 systemd-journald[1128]: Received client request to flush runtime journal. Sep 8 23:54:33.218613 kernel: loop0: detected capacity change from 0 to 138176 Sep 8 23:54:33.218650 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:54:33.218679 kernel: loop1: detected capacity change from 0 to 147912 Sep 8 23:54:33.218730 kernel: loop2: detected capacity change from 0 to 224512 Sep 8 23:54:33.218755 kernel: loop3: detected capacity change from 0 to 138176 Sep 8 23:54:33.218783 kernel: loop4: detected capacity change from 0 to 147912 Sep 8 23:54:31.452836 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:54:31.454370 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:54:31.455930 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:54:31.469373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:54:31.545503 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:54:31.557282 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 8 23:54:31.641494 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Sep 8 23:54:31.641512 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Sep 8 23:54:31.649573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:54:31.811019 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:54:31.812940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:54:31.854148 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:54:31.862105 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:54:31.864440 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Sep 8 23:54:31.882516 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:54:31.887474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:54:31.914121 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 8 23:54:31.914140 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 8 23:54:31.921474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:54:33.220508 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:54:33.243238 kernel: loop5: detected capacity change from 0 to 224512 Sep 8 23:54:33.254179 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:54:33.255112 (sd-merge)[1202]: Merged extensions into '/usr'. Sep 8 23:54:33.260874 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:54:33.261047 systemd[1]: Reloading... Sep 8 23:54:33.355237 zram_generator::config[1232]: No configuration found. Sep 8 23:54:33.564659 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:54:34.115925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:34.186921 systemd[1]: Reloading finished in 925 ms. Sep 8 23:54:34.207028 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:54:34.229419 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:54:34.231337 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:54:34.233154 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:54:34.254487 systemd[1]: Starting ensure-sysext.service... Sep 8 23:54:34.257190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:54:34.382601 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:54:34.382628 systemd[1]: Reloading... Sep 8 23:54:34.391584 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:54:34.391885 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:54:34.392902 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:54:34.393191 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 8 23:54:34.393289 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 8 23:54:34.397334 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:54:34.397346 systemd-tmpfiles[1275]: Skipping /boot Sep 8 23:54:34.414034 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:54:34.414049 systemd-tmpfiles[1275]: Skipping /boot Sep 8 23:54:34.456234 zram_generator::config[1304]: No configuration found. Sep 8 23:54:34.614332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
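The (sd-merge) lines show systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions and merging them into /usr. A sketch of the discovery half only, listing images in the standard sysext search paths; the overlay merge itself is left to systemd-sysext:

    #!/usr/bin/env python3
    # List extension images and directories in the documented sysext search
    # paths; this is where names like 'kubernetes' come from.
    from pathlib import Path

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for base in SEARCH_PATHS:
        p = Path(base)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # raw disk images ("kubernetes.raw") and plain directories both count
            kind = "image" if entry.suffix == ".raw" else "directory"
            print(f"{base}: {entry.name} ({kind})")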
Sep 8 23:54:34.689803 systemd[1]: Reloading finished in 306 ms. Sep 8 23:54:34.701726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:54:34.732136 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:54:34.760018 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:54:34.763600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:54:34.767974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:54:34.771695 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:54:34.777112 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:34.777390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:34.782032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:34.786502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:54:34.790317 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:34.791681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:34.791893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:34.792049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:34.793446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:34.793719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:54:34.798141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:34.798401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:54:34.800932 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:34.801218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:34.810504 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:34.810824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:34.819630 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:34.823890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:54:34.828521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:34.832526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:34.832714 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:34.835689 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Sep 8 23:54:34.837309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:34.839236 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:54:34.843015 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:54:34.844408 augenrules[1376]: No rules Sep 8 23:54:34.846159 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:54:34.846444 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:54:34.848345 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:54:34.850386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:34.850623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:54:34.852574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:34.852819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:54:34.854694 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:34.854925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:34.856780 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:54:34.870274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:34.883638 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:54:34.884887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:34.886470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:34.892280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:54:34.895174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:54:34.898724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:34.900483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:34.900609 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:34.904613 augenrules[1389]: /sbin/augenrules: No change Sep 8 23:54:34.905272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:54:34.907880 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:54:34.908998 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:54:34.909099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 8 23:54:34.910816 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:54:34.915026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:34.915358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 8 23:54:34.917698 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:54:34.917937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:54:34.924693 augenrules[1416]: No rules Sep 8 23:54:34.927546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:34.927805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:54:34.930004 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:54:34.930279 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:54:34.931904 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:34.932149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:34.935511 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:54:34.937142 systemd[1]: Finished ensure-sysext.service. Sep 8 23:54:34.947413 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:54:34.947516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:54:34.958536 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:54:34.965704 systemd-udevd[1403]: Using default interface naming scheme 'v255'. Sep 8 23:54:34.991440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:54:35.001828 systemd-resolved[1346]: Positive Trust Anchors: Sep 8 23:54:35.001861 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:54:35.001904 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:54:35.005454 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:54:35.010246 systemd-resolved[1346]: Defaulting to hostname 'linux'. Sep 8 23:54:35.014446 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:54:35.016130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:54:35.043661 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:54:35.045053 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:54:35.085853 systemd-networkd[1435]: lo: Link UP Sep 8 23:54:35.085867 systemd-networkd[1435]: lo: Gained carrier Sep 8 23:54:35.114809 systemd-networkd[1435]: Enumeration completed Sep 8 23:54:35.115092 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:54:35.116491 systemd[1]: Reached target network.target - Network. Sep 8 23:54:35.127438 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:54:35.137476 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
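The positive trust anchor systemd-resolved prints above is the DNS root zone's DS record. A small sketch unpacking its fields, using the exact record string from the log:

    #!/usr/bin/env python3
    # A DS record is "owner IN DS key-tag algorithm digest-type digest".
    record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    owner, _cls, _type, key_tag, algorithm, digest_type, digest = record.split()
    print("owner:", owner)                  # "." is the DNS root zone
    print("key tag:", int(key_tag))         # identifies the root key-signing key
    print("algorithm:", int(algorithm))     # 8 = RSA/SHA-256
    print("digest type:", int(digest_type)) # 2 = SHA-256 digest of the DNSKEY
    print("digest:", digest)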
Sep 8 23:54:35.140170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1434) Sep 8 23:54:35.140926 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 8 23:54:35.143474 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:35.143490 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:54:35.144215 systemd-networkd[1435]: eth0: Link UP Sep 8 23:54:35.144227 systemd-networkd[1435]: eth0: Gained carrier Sep 8 23:54:35.144241 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:35.175288 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:54:35.176231 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Sep 8 23:54:35.602752 systemd-resolved[1346]: Clock change detected. Flushing caches. Sep 8 23:54:35.602818 systemd-timesyncd[1429]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:54:35.602869 systemd-timesyncd[1429]: Initial clock synchronization to Mon 2025-09-08 23:54:35.602710 UTC. Sep 8 23:54:35.610658 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:54:35.613114 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 8 23:54:35.620036 kernel: ACPI: button: Power Button [PWRF] Sep 8 23:54:35.629366 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:54:35.635316 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 8 23:54:35.635647 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 8 23:54:35.635838 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 8 23:54:35.636106 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 8 23:54:35.645241 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:54:35.653036 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 8 23:54:35.667067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:54:35.684091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:54:35.751087 kernel: mousedev: PS/2 mouse device common for all mice Sep 8 23:54:35.767517 kernel: kvm_amd: TSC scaling supported Sep 8 23:54:35.767616 kernel: kvm_amd: Nested Virtualization enabled Sep 8 23:54:35.767630 kernel: kvm_amd: Nested Paging enabled Sep 8 23:54:35.768031 kernel: kvm_amd: LBR virtualization supported Sep 8 23:54:35.769099 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 8 23:54:35.769127 kernel: kvm_amd: Virtual GIF supported Sep 8 23:54:35.794042 kernel: EDAC MC: Ver: 3.0.0 Sep 8 23:54:35.800611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:35.831454 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:54:35.844356 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:54:35.853646 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 8 23:54:35.892755 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:54:35.894373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:54:35.895475 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:54:35.896629 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:54:35.897900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:54:35.899344 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:54:35.900581 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:54:35.901777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:54:35.902954 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:54:35.902983 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:54:35.903902 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:54:35.905752 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:54:35.908536 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:54:35.912544 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:54:35.913961 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:54:35.915190 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:54:35.919137 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:54:35.920531 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:54:35.922890 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:54:35.924547 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:54:35.925696 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:54:35.926761 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:54:35.927752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:54:35.927788 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:54:35.928888 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:54:35.931070 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:54:35.935196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:54:35.938287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:54:35.939331 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:54:35.940066 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:54:35.941750 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:54:35.946398 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 8 23:54:35.947141 jq[1485]: false Sep 8 23:54:35.949249 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:54:35.954400 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:54:35.956477 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:54:35.957090 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:54:35.959220 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:54:35.962415 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:54:35.969581 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:54:35.969857 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:54:35.970257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:54:35.970521 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:54:35.970663 dbus-daemon[1484]: [system] SELinux support is enabled Sep 8 23:54:35.971996 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:54:35.973683 jq[1495]: true Sep 8 23:54:35.976742 extend-filesystems[1486]: Found loop3 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found loop4 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found loop5 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found sr0 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda1 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda2 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda3 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found usr Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda4 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda6 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda7 Sep 8 23:54:35.977951 extend-filesystems[1486]: Found vda9 Sep 8 23:54:35.977951 extend-filesystems[1486]: Checking size of /dev/vda9 Sep 8 23:54:36.004141 extend-filesystems[1486]: Resized partition /dev/vda9 Sep 8 23:54:35.984208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:54:35.984309 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:54:35.988190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:54:35.988210 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 8 23:54:35.997858 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Sep 8 23:54:36.013043 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:54:36.026633 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:54:36.026689 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1437) Sep 8 23:54:36.026708 update_engine[1491]: I20250908 23:54:36.011500 1491 main.cc:92] Flatcar Update Engine starting Sep 8 23:54:36.026708 update_engine[1491]: I20250908 23:54:36.020187 1491 update_check_scheduler.cc:74] Next update check in 3m40s Sep 8 23:54:36.017281 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:54:36.017300 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:54:36.027593 jq[1499]: true Sep 8 23:54:36.017578 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:54:36.022307 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:54:36.027847 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:54:36.047435 systemd-logind[1490]: Watching system buttons on /dev/input/event1 (Power Button) Sep 8 23:54:36.047482 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 8 23:54:36.048430 systemd-logind[1490]: New seat seat0. Sep 8 23:54:36.053024 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:54:36.065465 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:54:36.911595 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:54:36.911595 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:54:36.911595 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:54:36.922624 extend-filesystems[1486]: Resized filesystem in /dev/vda9 Sep 8 23:54:36.914986 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:54:36.915515 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:54:36.925381 systemd-networkd[1435]: eth0: Gained IPv6LL Sep 8 23:54:36.940714 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:54:36.952491 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:54:36.954189 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:54:36.960816 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:54:36.965676 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:54:36.967395 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:54:36.977476 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:54:36.988810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:36.998672 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:54:37.000523 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:54:37.041305 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:54:37.041786 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:54:37.043858 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 8 23:54:37.058079 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:54:37.066441 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:54:37.099482 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:54:37.140627 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:54:37.145272 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:40766.service - OpenSSH per-connection server daemon (10.0.0.1:40766). Sep 8 23:54:37.151514 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:54:37.151834 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:54:37.163025 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:54:37.214587 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:54:37.225632 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:54:37.229064 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 8 23:54:37.230598 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:54:37.248196 containerd[1507]: time="2025-09-08T23:54:37.248063461Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:54:37.262613 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 40766 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:37.263474 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:37.270685 containerd[1507]: time="2025-09-08T23:54:37.270641831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.272701635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.272736019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.272756488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.272969868Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.272989945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.273106995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.273124438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.273443055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.273464696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.273483231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274552 containerd[1507]: time="2025-09-08T23:54:37.273495955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274889 containerd[1507]: time="2025-09-08T23:54:37.273614226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274889 containerd[1507]: time="2025-09-08T23:54:37.273904391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274889 containerd[1507]: time="2025-09-08T23:54:37.274135404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:37.274889 containerd[1507]: time="2025-09-08T23:54:37.274154590Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:54:37.274889 containerd[1507]: time="2025-09-08T23:54:37.274294573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 8 23:54:37.274889 containerd[1507]: time="2025-09-08T23:54:37.274371266Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:54:37.278102 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:54:37.292341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:54:37.306723 systemd-logind[1490]: New session 1 of user core. Sep 8 23:54:37.325813 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:54:37.331384 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 8 23:54:37.342152 (systemd)[1590]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:54:37.345252 systemd-logind[1490]: New session c1 of user core. Sep 8 23:54:37.357422 containerd[1507]: time="2025-09-08T23:54:37.357342256Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:54:37.357535 containerd[1507]: time="2025-09-08T23:54:37.357448275Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:54:37.357535 containerd[1507]: time="2025-09-08T23:54:37.357466559Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:54:37.357535 containerd[1507]: time="2025-09-08T23:54:37.357482690Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:54:37.357535 containerd[1507]: time="2025-09-08T23:54:37.357500854Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.357689999Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.357906605Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358033954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358049172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358062157Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358074770Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358088015Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358099537Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358112421Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358126658Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358140724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358153057Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358165080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 8 23:54:37.358296 containerd[1507]: time="2025-09-08T23:54:37.358184686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358206988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358221506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358234440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358246432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358259397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358270447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358281969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358293761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358310563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358321994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358332814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358347642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358366217Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358390192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.358699 containerd[1507]: time="2025-09-08T23:54:37.358402996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358424196Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358464622Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358482104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358492414Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358503535Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358512692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358529974Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358540634Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:54:37.359099 containerd[1507]: time="2025-09-08T23:54:37.358551044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 8 23:54:37.359347 containerd[1507]: time="2025-09-08T23:54:37.358825969Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:54:37.359347 containerd[1507]: time="2025-09-08T23:54:37.358902212Z" level=info msg="Connect containerd service" Sep 8 23:54:37.359347 containerd[1507]: time="2025-09-08T23:54:37.358933150Z" level=info msg="using legacy CRI server" Sep 8 23:54:37.359347 containerd[1507]: time="2025-09-08T23:54:37.358939983Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:54:37.359347 containerd[1507]: time="2025-09-08T23:54:37.359056161Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:54:37.359683 containerd[1507]: time="2025-09-08T23:54:37.359657820Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:54:37.359953 
containerd[1507]: time="2025-09-08T23:54:37.359923668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:54:37.359991 containerd[1507]: time="2025-09-08T23:54:37.359977008Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:54:37.360103 containerd[1507]: time="2025-09-08T23:54:37.359977189Z" level=info msg="Start subscribing containerd event" Sep 8 23:54:37.360399 containerd[1507]: time="2025-09-08T23:54:37.360371729Z" level=info msg="Start recovering state" Sep 8 23:54:37.360696 containerd[1507]: time="2025-09-08T23:54:37.360473670Z" level=info msg="Start event monitor" Sep 8 23:54:37.360696 containerd[1507]: time="2025-09-08T23:54:37.360561304Z" level=info msg="Start snapshots syncer" Sep 8 23:54:37.360696 containerd[1507]: time="2025-09-08T23:54:37.360574639Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:54:37.360696 containerd[1507]: time="2025-09-08T23:54:37.360590679Z" level=info msg="Start streaming server" Sep 8 23:54:37.360696 containerd[1507]: time="2025-09-08T23:54:37.360671541Z" level=info msg="containerd successfully booted in 0.115148s" Sep 8 23:54:37.366145 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:54:37.505930 systemd[1590]: Queued start job for default target default.target. Sep 8 23:54:37.518550 systemd[1590]: Created slice app.slice - User Application Slice. Sep 8 23:54:37.518578 systemd[1590]: Reached target paths.target - Paths. Sep 8 23:54:37.518712 systemd[1590]: Reached target timers.target - Timers. Sep 8 23:54:37.520548 systemd[1590]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:54:37.534280 systemd[1590]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:54:37.534463 systemd[1590]: Reached target sockets.target - Sockets. Sep 8 23:54:37.534520 systemd[1590]: Reached target basic.target - Basic System. Sep 8 23:54:37.534580 systemd[1590]: Reached target default.target - Main User Target. Sep 8 23:54:37.534633 systemd[1590]: Startup finished in 181ms. Sep 8 23:54:37.535156 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:54:37.551175 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:54:37.635416 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:40774.service - OpenSSH per-connection server daemon (10.0.0.1:40774). Sep 8 23:54:37.676243 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 40774 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:37.677862 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:37.682413 systemd-logind[1490]: New session 2 of user core. Sep 8 23:54:37.696221 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:54:37.751418 sshd[1604]: Connection closed by 10.0.0.1 port 40774 Sep 8 23:54:37.752326 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:37.765995 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:40774.service: Deactivated successfully. Sep 8 23:54:37.768295 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:54:37.769161 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:54:37.780376 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782). Sep 8 23:54:37.783170 systemd-logind[1490]: Removed session 2. 
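containerd finishes booting above but warns that no network config was found in /etc/cni/net.d, so the CRI network stays uninitialized until a CNI plugin (Cilium, judging by the pods that appear later in this log) writes one. As an illustrative sketch only, a minimal bridge-based conflist of the kind CNI expects in that directory could look like the following; the network name is invented, and the subnet simply mirrors the 192.168.1.0/24 pod CIDR that kubelet reports further down:

  {
    "cniVersion": "0.4.0",
    "name": "examplenet",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "192.168.1.0/24"
        }
      }
    ]
  }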
Sep 8 23:54:37.816311 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:37.817956 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:37.822861 systemd-logind[1490]: New session 3 of user core. Sep 8 23:54:37.831166 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:54:37.886151 sshd[1613]: Connection closed by 10.0.0.1 port 40782 Sep 8 23:54:37.886516 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:37.890569 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:40782.service: Deactivated successfully. Sep 8 23:54:37.892713 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:54:37.893447 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:54:37.894449 systemd-logind[1490]: Removed session 3. Sep 8 23:54:37.988346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:37.989934 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:54:37.992982 systemd[1]: Startup finished in 1.213s (kernel) + 8.362s (initrd) + 7.507s (userspace) = 17.083s. Sep 8 23:54:37.995648 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:54:39.466150 kubelet[1623]: E0908 23:54:39.466068 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:54:39.471280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:54:39.471580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:54:39.472026 systemd[1]: kubelet.service: Consumed 2.240s CPU time, 266.4M memory peak. Sep 8 23:54:47.920644 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:52594.service - OpenSSH per-connection server daemon (10.0.0.1:52594). Sep 8 23:54:47.967792 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 52594 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:47.970506 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:47.977645 systemd-logind[1490]: New session 4 of user core. Sep 8 23:54:47.988420 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:54:48.058511 sshd[1638]: Connection closed by 10.0.0.1 port 52594 Sep 8 23:54:48.059295 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:48.077237 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:52594.service: Deactivated successfully. Sep 8 23:54:48.080466 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:54:48.084635 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:54:48.099680 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:52598.service - OpenSSH per-connection server daemon (10.0.0.1:52598). Sep 8 23:54:48.101313 systemd-logind[1490]: Removed session 4. 
Sep 8 23:54:48.158284 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 52598 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:48.160681 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:48.171670 systemd-logind[1490]: New session 5 of user core. Sep 8 23:54:48.182707 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:54:48.249242 sshd[1646]: Connection closed by 10.0.0.1 port 52598 Sep 8 23:54:48.247882 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:48.261494 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:52598.service: Deactivated successfully. Sep 8 23:54:48.264746 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:54:48.266076 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:54:48.269214 systemd-logind[1490]: Removed session 5. Sep 8 23:54:48.276597 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:52606.service - OpenSSH per-connection server daemon (10.0.0.1:52606). Sep 8 23:54:48.328103 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 52606 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:48.330415 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:48.345727 systemd-logind[1490]: New session 6 of user core. Sep 8 23:54:48.362382 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:54:48.427972 sshd[1654]: Connection closed by 10.0.0.1 port 52606 Sep 8 23:54:48.427850 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:48.445417 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:52606.service: Deactivated successfully. Sep 8 23:54:48.448629 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:54:48.451980 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:54:48.467866 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:52618.service - OpenSSH per-connection server daemon (10.0.0.1:52618). Sep 8 23:54:48.469587 systemd-logind[1490]: Removed session 6. Sep 8 23:54:48.510367 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 52618 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:48.512786 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:48.519958 systemd-logind[1490]: New session 7 of user core. Sep 8 23:54:48.529440 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:54:48.600310 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:54:48.600761 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:54:48.628616 sudo[1663]: pam_unix(sudo:session): session closed for user root Sep 8 23:54:48.631351 sshd[1662]: Connection closed by 10.0.0.1 port 52618 Sep 8 23:54:48.631955 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:48.649239 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:52618.service: Deactivated successfully. Sep 8 23:54:48.652236 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:54:48.655243 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:54:48.670551 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:52626.service - OpenSSH per-connection server daemon (10.0.0.1:52626). Sep 8 23:54:48.674467 systemd-logind[1490]: Removed session 7. 
Sep 8 23:54:48.721888 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 52626 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:48.723683 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:48.735390 systemd-logind[1490]: New session 8 of user core. Sep 8 23:54:48.744526 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:54:48.812278 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:54:48.813431 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:54:48.821290 sudo[1673]: pam_unix(sudo:session): session closed for user root Sep 8 23:54:48.831318 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:54:48.831874 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:54:48.860793 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:54:48.917917 augenrules[1695]: No rules Sep 8 23:54:48.919451 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:54:48.919850 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:54:48.923240 sudo[1672]: pam_unix(sudo:session): session closed for user root Sep 8 23:54:48.927589 sshd[1671]: Connection closed by 10.0.0.1 port 52626 Sep 8 23:54:48.927905 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:48.940584 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:52626.service: Deactivated successfully. Sep 8 23:54:48.944001 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:54:48.947133 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:54:48.962045 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:52640.service - OpenSSH per-connection server daemon (10.0.0.1:52640). Sep 8 23:54:48.963881 systemd-logind[1490]: Removed session 8. Sep 8 23:54:49.021874 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 52640 ssh2: RSA SHA256:Mo3Uhh9vViLk0b2Y5w5LoSqiy/3VEqHEGZedifChO3A Sep 8 23:54:49.025703 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:49.036115 systemd-logind[1490]: New session 9 of user core. Sep 8 23:54:49.051414 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:54:49.111536 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:54:49.111936 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:54:49.138571 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:54:49.171764 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:54:49.172204 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:54:49.552704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:54:49.570493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:49.890847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
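The sudo commands above delete /etc/audit/rules.d/80-selinux.rules and 99-default.rules and restart audit-rules, after which augenrules reports "No rules". As a hedged example only, not a rule set present on this host, a replacement fragment dropped back into /etc/audit/rules.d/ uses ordinary auditctl syntax:

  # /etc/audit/rules.d/90-example.rules (hypothetical)
  # Clear previously loaded rules, then watch a couple of paths for writes
  # and attribute changes, tagging matches with a searchable key.
  -D
  -w /etc/passwd -p wa -k identity
  -w /etc/ssh/sshd_config -p wa -k sshd-config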
Sep 8 23:54:49.899602 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:54:49.998122 kubelet[1737]: E0908 23:54:49.997404 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:54:50.010386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:54:50.010673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:54:50.011452 systemd[1]: kubelet.service: Consumed 332ms CPU time, 115.6M memory peak. Sep 8 23:54:50.398075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:50.398342 systemd[1]: kubelet.service: Consumed 332ms CPU time, 115.6M memory peak. Sep 8 23:54:50.415587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:50.453530 systemd[1]: Reload requested from client PID 1765 ('systemctl') (unit session-9.scope)... Sep 8 23:54:50.453564 systemd[1]: Reloading... Sep 8 23:54:50.624116 zram_generator::config[1811]: No configuration found. Sep 8 23:54:51.891957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:51.998896 systemd[1]: Reloading finished in 1544 ms. Sep 8 23:54:52.059587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:52.064506 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:54:52.066724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:52.067251 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:54:52.067671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:52.067733 systemd[1]: kubelet.service: Consumed 205ms CPU time, 98.2M memory peak. Sep 8 23:54:52.072305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:52.257801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:52.263033 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:54:52.305568 kubelet[1859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:54:52.305568 kubelet[1859]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:54:52.305568 kubelet[1859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
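kubelet keeps exiting above because /var/lib/kubelet/config.yaml does not exist yet, and the flag deprecation notices point at that same --config file; on a node like this the file normally appears once kubeadm or the provisioning tooling writes it. A minimal sketch of a KubeletConfiguration at that path, with placeholder values rather than anything recorded on this machine:

  # /var/lib/kubelet/config.yaml (hypothetical, normally generated by kubeadm)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Match the systemd cgroup driver that the containerd CRI config above advertises.
  cgroupDriver: systemd
  clusterDNS:
    - 10.96.0.10
  clusterDomain: cluster.local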
Sep 8 23:54:52.306067 kubelet[1859]: I0908 23:54:52.305617 1859 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:54:52.813710 kubelet[1859]: I0908 23:54:52.813641 1859 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:54:52.813710 kubelet[1859]: I0908 23:54:52.813680 1859 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:54:52.813980 kubelet[1859]: I0908 23:54:52.813954 1859 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:54:52.835334 kubelet[1859]: I0908 23:54:52.835266 1859 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:54:52.843874 kubelet[1859]: E0908 23:54:52.843817 1859 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:54:52.843874 kubelet[1859]: I0908 23:54:52.843864 1859 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:54:52.851757 kubelet[1859]: I0908 23:54:52.851708 1859 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 8 23:54:52.852777 kubelet[1859]: I0908 23:54:52.852712 1859 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:54:52.853193 kubelet[1859]: I0908 23:54:52.852772 1859 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.69","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:54:52.853337 kubelet[1859]: I0908 23:54:52.853212 1859 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:54:52.853337 kubelet[1859]: I0908 23:54:52.853231 1859 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 
23:54:52.853460 kubelet[1859]: I0908 23:54:52.853436 1859 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:52.857050 kubelet[1859]: I0908 23:54:52.857030 1859 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:54:52.858937 kubelet[1859]: I0908 23:54:52.858869 1859 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:54:52.858937 kubelet[1859]: I0908 23:54:52.858922 1859 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:54:52.858937 kubelet[1859]: I0908 23:54:52.858938 1859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:54:52.859565 kubelet[1859]: E0908 23:54:52.859512 1859 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:52.859643 kubelet[1859]: E0908 23:54:52.859590 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:52.862208 kubelet[1859]: I0908 23:54:52.862175 1859 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:54:52.862724 kubelet[1859]: I0908 23:54:52.862696 1859 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:54:52.863359 kubelet[1859]: W0908 23:54:52.863309 1859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 8 23:54:52.863421 kubelet[1859]: E0908 23:54:52.863368 1859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.69\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 8 23:54:52.863421 kubelet[1859]: W0908 23:54:52.863392 1859 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
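Above, kubelet registers /etc/kubernetes/manifests as its static pod path and then repeatedly logs "Unable to read config path" because the directory does not exist yet; that is expected until something drops a manifest there. As an illustrative sketch only, a static pod is just a regular Pod manifest placed in that directory, for example (the name is a placeholder, and the pause image is the sandbox image already named in the CRI config above):

  # /etc/kubernetes/manifests/example-static.yaml (hypothetical)
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-static
    namespace: kube-system
  spec:
    containers:
      - name: pause
        image: registry.k8s.io/pause:3.8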
Sep 8 23:54:52.863504 kubelet[1859]: W0908 23:54:52.863443 1859 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 8 23:54:52.863504 kubelet[1859]: E0908 23:54:52.863481 1859 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 8 23:54:52.865698 kubelet[1859]: I0908 23:54:52.865664 1859 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:54:52.865741 kubelet[1859]: I0908 23:54:52.865719 1859 server.go:1287] "Started kubelet" Sep 8 23:54:52.866194 kubelet[1859]: I0908 23:54:52.865831 1859 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:54:52.869057 kubelet[1859]: I0908 23:54:52.867348 1859 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:54:52.869057 kubelet[1859]: I0908 23:54:52.867422 1859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:54:52.869057 kubelet[1859]: I0908 23:54:52.867802 1859 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:54:52.872043 kubelet[1859]: I0908 23:54:52.869736 1859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:54:52.872043 kubelet[1859]: I0908 23:54:52.869750 1859 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:54:52.872043 kubelet[1859]: I0908 23:54:52.869866 1859 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:54:52.872043 kubelet[1859]: E0908 23:54:52.870030 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:52.872043 kubelet[1859]: I0908 23:54:52.870454 1859 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:54:52.872043 kubelet[1859]: I0908 23:54:52.870580 1859 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:54:52.883152 kubelet[1859]: I0908 23:54:52.883113 1859 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:54:52.883333 kubelet[1859]: I0908 23:54:52.883221 1859 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:54:52.884106 kubelet[1859]: E0908 23:54:52.883841 1859 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.69\" not found" node="10.0.0.69" Sep 8 23:54:52.884825 kubelet[1859]: E0908 23:54:52.884796 1859 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:54:52.885030 kubelet[1859]: I0908 23:54:52.884930 1859 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:54:52.904665 kubelet[1859]: I0908 23:54:52.904275 1859 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:54:52.904665 kubelet[1859]: I0908 23:54:52.904299 1859 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:54:52.904665 kubelet[1859]: I0908 23:54:52.904335 1859 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:52.970821 kubelet[1859]: E0908 23:54:52.970732 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.071332 kubelet[1859]: E0908 23:54:53.071122 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.171483 kubelet[1859]: E0908 23:54:53.171388 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.248344 kubelet[1859]: E0908 23:54:53.248256 1859 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.69" not found Sep 8 23:54:53.272551 kubelet[1859]: E0908 23:54:53.272491 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.279622 kubelet[1859]: I0908 23:54:53.279169 1859 policy_none.go:49] "None policy: Start" Sep 8 23:54:53.279622 kubelet[1859]: I0908 23:54:53.279443 1859 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:54:53.280863 kubelet[1859]: I0908 23:54:53.280207 1859 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:54:53.293443 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:54:53.305526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:54:53.311156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:54:53.312211 kubelet[1859]: I0908 23:54:53.312166 1859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:54:53.314188 kubelet[1859]: I0908 23:54:53.314141 1859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:54:53.314188 kubelet[1859]: I0908 23:54:53.314193 1859 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:54:53.314340 kubelet[1859]: I0908 23:54:53.314226 1859 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 8 23:54:53.314340 kubelet[1859]: I0908 23:54:53.314236 1859 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:54:53.314450 kubelet[1859]: E0908 23:54:53.314420 1859 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:54:53.319203 kubelet[1859]: I0908 23:54:53.319168 1859 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:54:53.319430 kubelet[1859]: I0908 23:54:53.319412 1859 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:54:53.319501 kubelet[1859]: I0908 23:54:53.319429 1859 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:54:53.320121 kubelet[1859]: I0908 23:54:53.319718 1859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:54:53.320702 kubelet[1859]: E0908 23:54:53.320669 1859 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:54:53.320765 kubelet[1859]: E0908 23:54:53.320747 1859 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.69\" not found" Sep 8 23:54:53.420906 kubelet[1859]: I0908 23:54:53.420531 1859 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.69" Sep 8 23:54:53.426383 kubelet[1859]: I0908 23:54:53.426343 1859 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.69" Sep 8 23:54:53.426383 kubelet[1859]: E0908 23:54:53.426387 1859 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.69\": node \"10.0.0.69\" not found" Sep 8 23:54:53.441290 kubelet[1859]: E0908 23:54:53.441216 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.541700 kubelet[1859]: E0908 23:54:53.541604 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.642528 kubelet[1859]: E0908 23:54:53.642447 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.743268 kubelet[1859]: E0908 23:54:53.743098 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.814198 sudo[1707]: pam_unix(sudo:session): session closed for user root Sep 8 23:54:53.815880 kubelet[1859]: I0908 23:54:53.815817 1859 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 8 23:54:53.816085 sshd[1706]: Connection closed by 10.0.0.1 port 52640 Sep 8 23:54:53.816470 kubelet[1859]: W0908 23:54:53.816091 1859 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 8 23:54:53.816661 kubelet[1859]: W0908 23:54:53.816611 1859 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 8 23:54:53.816678 sshd-session[1703]: pam_unix(sshd:session): 
session closed for user core Sep 8 23:54:53.821630 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:52640.service: Deactivated successfully. Sep 8 23:54:53.824692 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:54:53.824992 systemd[1]: session-9.scope: Consumed 826ms CPU time, 77.8M memory peak. Sep 8 23:54:53.826719 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:54:53.827696 systemd-logind[1490]: Removed session 9. Sep 8 23:54:53.844241 kubelet[1859]: E0908 23:54:53.844172 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:53.860581 kubelet[1859]: E0908 23:54:53.860493 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:53.944905 kubelet[1859]: E0908 23:54:53.944812 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:54.045990 kubelet[1859]: E0908 23:54:54.045835 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:54.146552 kubelet[1859]: E0908 23:54:54.146464 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:54.247168 kubelet[1859]: E0908 23:54:54.247084 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:54.347515 kubelet[1859]: E0908 23:54:54.347448 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:54.448440 kubelet[1859]: E0908 23:54:54.448352 1859 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.69\" not found" Sep 8 23:54:54.549959 kubelet[1859]: I0908 23:54:54.549913 1859 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 8 23:54:54.550349 containerd[1507]: time="2025-09-08T23:54:54.550282428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:54:54.550803 kubelet[1859]: I0908 23:54:54.550530 1859 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 8 23:54:54.861115 kubelet[1859]: I0908 23:54:54.861072 1859 apiserver.go:52] "Watching apiserver" Sep 8 23:54:54.861115 kubelet[1859]: E0908 23:54:54.861090 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:54.871303 kubelet[1859]: I0908 23:54:54.871265 1859 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:54:54.871531 systemd[1]: Created slice kubepods-besteffort-pod61ce0db3_b5d5_4132_b248_37b90d55eb42.slice - libcontainer container kubepods-besteffort-pod61ce0db3_b5d5_4132_b248_37b90d55eb42.slice. 
Sep 8 23:54:54.883661 kubelet[1859]: I0908 23:54:54.883474 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-lib-modules\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883661 kubelet[1859]: I0908 23:54:54.883531 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-kernel\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883661 kubelet[1859]: I0908 23:54:54.883562 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mx7s\" (UniqueName: \"kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-kube-api-access-9mx7s\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883661 kubelet[1859]: I0908 23:54:54.883599 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61ce0db3-b5d5-4132-b248-37b90d55eb42-lib-modules\") pod \"kube-proxy-k9qzn\" (UID: \"61ce0db3-b5d5-4132-b248-37b90d55eb42\") " pod="kube-system/kube-proxy-k9qzn" Sep 8 23:54:54.883661 kubelet[1859]: I0908 23:54:54.883625 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-etc-cni-netd\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883956 kubelet[1859]: I0908 23:54:54.883650 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-config-path\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883956 kubelet[1859]: I0908 23:54:54.883678 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-run\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883956 kubelet[1859]: I0908 23:54:54.883708 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cni-path\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883956 kubelet[1859]: I0908 23:54:54.883735 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-cgroup\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883956 kubelet[1859]: I0908 23:54:54.883756 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-xtables-lock\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.883956 kubelet[1859]: I0908 23:54:54.883781 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-net\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.884117 kubelet[1859]: I0908 23:54:54.883806 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61ce0db3-b5d5-4132-b248-37b90d55eb42-xtables-lock\") pod \"kube-proxy-k9qzn\" (UID: \"61ce0db3-b5d5-4132-b248-37b90d55eb42\") " pod="kube-system/kube-proxy-k9qzn" Sep 8 23:54:54.884117 kubelet[1859]: I0908 23:54:54.883843 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-bpf-maps\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.884117 kubelet[1859]: I0908 23:54:54.883864 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-hostproc\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.884117 kubelet[1859]: I0908 23:54:54.883905 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f62415d-3bb0-41ac-bb6e-8b207f413368-clustermesh-secrets\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.884117 kubelet[1859]: I0908 23:54:54.883930 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-hubble-tls\") pod \"cilium-wrdfn\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " pod="kube-system/cilium-wrdfn" Sep 8 23:54:54.884117 kubelet[1859]: I0908 23:54:54.883966 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61ce0db3-b5d5-4132-b248-37b90d55eb42-kube-proxy\") pod \"kube-proxy-k9qzn\" (UID: \"61ce0db3-b5d5-4132-b248-37b90d55eb42\") " pod="kube-system/kube-proxy-k9qzn" Sep 8 23:54:54.884253 kubelet[1859]: I0908 23:54:54.883992 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkmw\" (UniqueName: \"kubernetes.io/projected/61ce0db3-b5d5-4132-b248-37b90d55eb42-kube-api-access-dfkmw\") pod \"kube-proxy-k9qzn\" (UID: \"61ce0db3-b5d5-4132-b248-37b90d55eb42\") " pod="kube-system/kube-proxy-k9qzn" Sep 8 23:54:54.886827 systemd[1]: Created slice kubepods-burstable-pod4f62415d_3bb0_41ac_bb6e_8b207f413368.slice - libcontainer container kubepods-burstable-pod4f62415d_3bb0_41ac_bb6e_8b207f413368.slice. 
Sep 8 23:54:55.184341 kubelet[1859]: E0908 23:54:55.184173 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:55.185381 containerd[1507]: time="2025-09-08T23:54:55.185334925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9qzn,Uid:61ce0db3-b5d5-4132-b248-37b90d55eb42,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:55.200578 kubelet[1859]: E0908 23:54:55.200533 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:55.201217 containerd[1507]: time="2025-09-08T23:54:55.201167242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrdfn,Uid:4f62415d-3bb0-41ac-bb6e-8b207f413368,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:55.861285 kubelet[1859]: E0908 23:54:55.861211 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:55.989469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395030578.mount: Deactivated successfully. Sep 8 23:54:55.997658 containerd[1507]: time="2025-09-08T23:54:55.997603639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:55.999487 containerd[1507]: time="2025-09-08T23:54:55.999387715Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 8 23:54:56.000342 containerd[1507]: time="2025-09-08T23:54:56.000303773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:56.001342 containerd[1507]: time="2025-09-08T23:54:56.001286917Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:56.001995 containerd[1507]: time="2025-09-08T23:54:56.001955882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:54:56.004069 containerd[1507]: time="2025-09-08T23:54:56.004040532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:56.004807 containerd[1507]: time="2025-09-08T23:54:56.004773788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 819.274655ms" Sep 8 23:54:56.006858 containerd[1507]: time="2025-09-08T23:54:56.006835976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"311286\" in 805.557606ms" Sep 8 23:54:56.288499 containerd[1507]: time="2025-09-08T23:54:56.287936009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:56.288499 containerd[1507]: time="2025-09-08T23:54:56.288074349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:56.288499 containerd[1507]: time="2025-09-08T23:54:56.288090299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:56.288499 containerd[1507]: time="2025-09-08T23:54:56.288203341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:56.291833 containerd[1507]: time="2025-09-08T23:54:56.290907563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:56.291833 containerd[1507]: time="2025-09-08T23:54:56.291442416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:56.291833 containerd[1507]: time="2025-09-08T23:54:56.291468365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:56.291833 containerd[1507]: time="2025-09-08T23:54:56.291661577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:56.600283 systemd[1]: Started cri-containerd-9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3.scope - libcontainer container 9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3. Sep 8 23:54:56.609709 systemd[1]: Started cri-containerd-f10ed4269cdb39701cd6edb5a2e8a700734004c731b22571490f83f6906a0a11.scope - libcontainer container f10ed4269cdb39701cd6edb5a2e8a700734004c731b22571490f83f6906a0a11. 
Sep 8 23:54:56.644354 containerd[1507]: time="2025-09-08T23:54:56.644297582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrdfn,Uid:4f62415d-3bb0-41ac-bb6e-8b207f413368,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\"" Sep 8 23:54:56.653051 kubelet[1859]: E0908 23:54:56.652984 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:56.654318 containerd[1507]: time="2025-09-08T23:54:56.654267623Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 8 23:54:56.671973 containerd[1507]: time="2025-09-08T23:54:56.671901399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9qzn,Uid:61ce0db3-b5d5-4132-b248-37b90d55eb42,Namespace:kube-system,Attempt:0,} returns sandbox id \"f10ed4269cdb39701cd6edb5a2e8a700734004c731b22571490f83f6906a0a11\"" Sep 8 23:54:56.672949 kubelet[1859]: E0908 23:54:56.672719 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:54:56.862338 kubelet[1859]: E0908 23:54:56.862212 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:57.863082 kubelet[1859]: E0908 23:54:57.863041 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:58.864135 kubelet[1859]: E0908 23:54:58.864099 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:54:59.922307 kubelet[1859]: E0908 23:54:59.922254 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:00.923343 kubelet[1859]: E0908 23:55:00.923285 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:01.445571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934576049.mount: Deactivated successfully. 
Sep 8 23:55:01.924215 kubelet[1859]: E0908 23:55:01.924140 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:02.924991 kubelet[1859]: E0908 23:55:02.924917 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:03.954001 kubelet[1859]: E0908 23:55:03.953661 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:04.954478 kubelet[1859]: E0908 23:55:04.954381 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:05.954908 kubelet[1859]: E0908 23:55:05.954831 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:06.915862 containerd[1507]: time="2025-09-08T23:55:06.915777923Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:06.916513 containerd[1507]: time="2025-09-08T23:55:06.916383278Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 8 23:55:06.917583 containerd[1507]: time="2025-09-08T23:55:06.917538365Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:06.919442 containerd[1507]: time="2025-09-08T23:55:06.919376102Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.265056051s" Sep 8 23:55:06.919442 containerd[1507]: time="2025-09-08T23:55:06.919422699Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 8 23:55:06.921000 containerd[1507]: time="2025-09-08T23:55:06.920798360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 8 23:55:06.922233 containerd[1507]: time="2025-09-08T23:55:06.922201251Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:55:06.938947 containerd[1507]: time="2025-09-08T23:55:06.938878413Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\"" Sep 8 23:55:06.939737 containerd[1507]: time="2025-09-08T23:55:06.939682521Z" level=info msg="StartContainer for \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\"" Sep 8 23:55:06.955269 kubelet[1859]: E0908 23:55:06.955207 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 8 23:55:07.008283 systemd[1]: Started cri-containerd-8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea.scope - libcontainer container 8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea. Sep 8 23:55:07.066476 containerd[1507]: time="2025-09-08T23:55:07.066412871Z" level=info msg="StartContainer for \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\" returns successfully" Sep 8 23:55:07.105385 systemd[1]: cri-containerd-8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea.scope: Deactivated successfully. Sep 8 23:55:07.467894 kubelet[1859]: E0908 23:55:07.467847 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:07.680792 containerd[1507]: time="2025-09-08T23:55:07.680471189Z" level=info msg="shim disconnected" id=8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea namespace=k8s.io Sep 8 23:55:07.680792 containerd[1507]: time="2025-09-08T23:55:07.680547772Z" level=warning msg="cleaning up after shim disconnected" id=8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea namespace=k8s.io Sep 8 23:55:07.680792 containerd[1507]: time="2025-09-08T23:55:07.680561017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:07.935045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea-rootfs.mount: Deactivated successfully. Sep 8 23:55:07.956319 kubelet[1859]: E0908 23:55:07.956225 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:08.476365 kubelet[1859]: E0908 23:55:08.476306 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:08.480291 containerd[1507]: time="2025-09-08T23:55:08.480211729Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:55:08.521378 containerd[1507]: time="2025-09-08T23:55:08.521281425Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\"" Sep 8 23:55:08.525350 containerd[1507]: time="2025-09-08T23:55:08.525272355Z" level=info msg="StartContainer for \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\"" Sep 8 23:55:08.738332 systemd[1]: Started cri-containerd-d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7.scope - libcontainer container d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7. Sep 8 23:55:08.924795 containerd[1507]: time="2025-09-08T23:55:08.924703603Z" level=info msg="StartContainer for \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\" returns successfully" Sep 8 23:55:08.951508 systemd[1]: run-containerd-runc-k8s.io-d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7-runc.WgxHXp.mount: Deactivated successfully. 
Sep 8 23:55:08.956482 kubelet[1859]: E0908 23:55:08.956406 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:08.978291 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:55:08.979083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:55:08.984251 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:55:08.998996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:55:09.002280 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:55:09.003204 systemd[1]: cri-containerd-d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7.scope: Deactivated successfully. Sep 8 23:55:09.071642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7-rootfs.mount: Deactivated successfully. Sep 8 23:55:09.079029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:55:09.119437 containerd[1507]: time="2025-09-08T23:55:09.119351954Z" level=info msg="shim disconnected" id=d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7 namespace=k8s.io Sep 8 23:55:09.119797 containerd[1507]: time="2025-09-08T23:55:09.119726453Z" level=warning msg="cleaning up after shim disconnected" id=d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7 namespace=k8s.io Sep 8 23:55:09.119797 containerd[1507]: time="2025-09-08T23:55:09.119757302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:09.499972 kubelet[1859]: E0908 23:55:09.499805 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:09.502940 containerd[1507]: time="2025-09-08T23:55:09.502879078Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:55:09.658558 containerd[1507]: time="2025-09-08T23:55:09.658461246Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\"" Sep 8 23:55:09.659451 containerd[1507]: time="2025-09-08T23:55:09.659401572Z" level=info msg="StartContainer for \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\"" Sep 8 23:55:09.878112 systemd[1]: Started cri-containerd-cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6.scope - libcontainer container cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6. Sep 8 23:55:09.959109 kubelet[1859]: E0908 23:55:09.958810 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:10.014177 systemd[1]: cri-containerd-cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6.scope: Deactivated successfully. 
Sep 8 23:55:10.023475 containerd[1507]: time="2025-09-08T23:55:10.023377934Z" level=info msg="StartContainer for \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\" returns successfully" Sep 8 23:55:10.097708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6-rootfs.mount: Deactivated successfully. Sep 8 23:55:10.229493 containerd[1507]: time="2025-09-08T23:55:10.229303183Z" level=info msg="shim disconnected" id=cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6 namespace=k8s.io Sep 8 23:55:10.230111 containerd[1507]: time="2025-09-08T23:55:10.229850563Z" level=warning msg="cleaning up after shim disconnected" id=cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6 namespace=k8s.io Sep 8 23:55:10.230111 containerd[1507]: time="2025-09-08T23:55:10.229875552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:10.271443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416628943.mount: Deactivated successfully. Sep 8 23:55:10.533900 kubelet[1859]: E0908 23:55:10.533714 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:10.536092 containerd[1507]: time="2025-09-08T23:55:10.535984184Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:55:10.563844 containerd[1507]: time="2025-09-08T23:55:10.563782921Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\"" Sep 8 23:55:10.564454 containerd[1507]: time="2025-09-08T23:55:10.564409202Z" level=info msg="StartContainer for \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\"" Sep 8 23:55:10.631263 systemd[1]: Started cri-containerd-58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123.scope - libcontainer container 58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123. Sep 8 23:55:10.672990 systemd[1]: cri-containerd-58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123.scope: Deactivated successfully. 
Sep 8 23:55:10.674838 containerd[1507]: time="2025-09-08T23:55:10.674507846Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f62415d_3bb0_41ac_bb6e_8b207f413368.slice/cri-containerd-58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123.scope/memory.events\": no such file or directory" Sep 8 23:55:10.678451 containerd[1507]: time="2025-09-08T23:55:10.678406064Z" level=info msg="StartContainer for \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\" returns successfully" Sep 8 23:55:10.959305 kubelet[1859]: E0908 23:55:10.959217 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:11.216919 containerd[1507]: time="2025-09-08T23:55:11.216760925Z" level=info msg="shim disconnected" id=58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123 namespace=k8s.io Sep 8 23:55:11.216919 containerd[1507]: time="2025-09-08T23:55:11.216823846Z" level=warning msg="cleaning up after shim disconnected" id=58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123 namespace=k8s.io Sep 8 23:55:11.216919 containerd[1507]: time="2025-09-08T23:55:11.216835568Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:11.222698 containerd[1507]: time="2025-09-08T23:55:11.222624939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:11.223551 containerd[1507]: time="2025-09-08T23:55:11.223476951Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 8 23:55:11.224855 containerd[1507]: time="2025-09-08T23:55:11.224798482Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:11.228186 containerd[1507]: time="2025-09-08T23:55:11.228126998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:11.229222 containerd[1507]: time="2025-09-08T23:55:11.229191057Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 4.308365576s" Sep 8 23:55:11.229297 containerd[1507]: time="2025-09-08T23:55:11.229229060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 8 23:55:11.231829 containerd[1507]: time="2025-09-08T23:55:11.231796458Z" level=info msg="CreateContainer within sandbox \"f10ed4269cdb39701cd6edb5a2e8a700734004c731b22571490f83f6906a0a11\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:55:11.251166 containerd[1507]: time="2025-09-08T23:55:11.251113673Z" level=info msg="CreateContainer within sandbox \"f10ed4269cdb39701cd6edb5a2e8a700734004c731b22571490f83f6906a0a11\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"515108c417ed9b04eb1f37e4a1ee63e2403e0173d3791c1d799584ce74520fb6\"" Sep 8 23:55:11.251727 containerd[1507]: time="2025-09-08T23:55:11.251686190Z" level=info msg="StartContainer for \"515108c417ed9b04eb1f37e4a1ee63e2403e0173d3791c1d799584ce74520fb6\"" Sep 8 23:55:11.286161 systemd[1]: Started cri-containerd-515108c417ed9b04eb1f37e4a1ee63e2403e0173d3791c1d799584ce74520fb6.scope - libcontainer container 515108c417ed9b04eb1f37e4a1ee63e2403e0173d3791c1d799584ce74520fb6. Sep 8 23:55:11.322535 containerd[1507]: time="2025-09-08T23:55:11.322480196Z" level=info msg="StartContainer for \"515108c417ed9b04eb1f37e4a1ee63e2403e0173d3791c1d799584ce74520fb6\" returns successfully" Sep 8 23:55:11.537119 kubelet[1859]: E0908 23:55:11.536952 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:11.540120 kubelet[1859]: E0908 23:55:11.540085 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:11.541975 containerd[1507]: time="2025-09-08T23:55:11.541933071Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:55:11.548999 kubelet[1859]: I0908 23:55:11.548907 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9qzn" podStartSLOduration=3.992063947 podStartE2EDuration="18.548878246s" podCreationTimestamp="2025-09-08 23:54:53 +0000 UTC" firstStartedPulling="2025-09-08 23:54:56.673432921 +0000 UTC m=+4.406292847" lastFinishedPulling="2025-09-08 23:55:11.23024722 +0000 UTC m=+18.963107146" observedRunningTime="2025-09-08 23:55:11.548433554 +0000 UTC m=+19.281293480" watchObservedRunningTime="2025-09-08 23:55:11.548878246 +0000 UTC m=+19.281738172" Sep 8 23:55:11.564396 containerd[1507]: time="2025-09-08T23:55:11.564327261Z" level=info msg="CreateContainer within sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\"" Sep 8 23:55:11.565876 containerd[1507]: time="2025-09-08T23:55:11.564840845Z" level=info msg="StartContainer for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\"" Sep 8 23:55:11.596149 systemd[1]: Started cri-containerd-3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e.scope - libcontainer container 3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e. 
Sep 8 23:55:11.634675 containerd[1507]: time="2025-09-08T23:55:11.634618695Z" level=info msg="StartContainer for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" returns successfully" Sep 8 23:55:11.766417 kubelet[1859]: I0908 23:55:11.766384 1859 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:55:11.959673 kubelet[1859]: E0908 23:55:11.959600 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:12.185054 kernel: Initializing XFRM netlink socket Sep 8 23:55:12.547034 kubelet[1859]: E0908 23:55:12.544279 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:12.547034 kubelet[1859]: E0908 23:55:12.544505 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:12.558740 kubelet[1859]: I0908 23:55:12.558674 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wrdfn" podStartSLOduration=9.29180008 podStartE2EDuration="19.558645658s" podCreationTimestamp="2025-09-08 23:54:53 +0000 UTC" firstStartedPulling="2025-09-08 23:54:56.653763447 +0000 UTC m=+4.386623373" lastFinishedPulling="2025-09-08 23:55:06.920609025 +0000 UTC m=+14.653468951" observedRunningTime="2025-09-08 23:55:12.558500109 +0000 UTC m=+20.291360035" watchObservedRunningTime="2025-09-08 23:55:12.558645658 +0000 UTC m=+20.291505584" Sep 8 23:55:12.859832 kubelet[1859]: E0908 23:55:12.859752 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:12.959953 kubelet[1859]: E0908 23:55:12.959867 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:13.545656 kubelet[1859]: E0908 23:55:13.545607 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:13.907960 systemd-networkd[1435]: cilium_host: Link UP Sep 8 23:55:13.909634 systemd-networkd[1435]: cilium_net: Link UP Sep 8 23:55:13.909727 systemd-networkd[1435]: cilium_net: Gained carrier Sep 8 23:55:13.910339 systemd-networkd[1435]: cilium_host: Gained carrier Sep 8 23:55:13.910765 systemd-networkd[1435]: cilium_host: Gained IPv6LL Sep 8 23:55:13.960493 kubelet[1859]: E0908 23:55:13.960419 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:14.031398 systemd-networkd[1435]: cilium_vxlan: Link UP Sep 8 23:55:14.031410 systemd-networkd[1435]: cilium_vxlan: Gained carrier Sep 8 23:55:14.269043 kernel: NET: Registered PF_ALG protocol family Sep 8 23:55:14.549458 kubelet[1859]: E0908 23:55:14.549417 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:14.685270 systemd-networkd[1435]: cilium_net: Gained IPv6LL Sep 8 23:55:14.960823 kubelet[1859]: E0908 23:55:14.960746 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:15.184210 systemd-networkd[1435]: lxc_health: Link UP 
Sep 8 23:55:15.184765 systemd-networkd[1435]: lxc_health: Gained carrier Sep 8 23:55:15.551439 kubelet[1859]: E0908 23:55:15.551375 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:15.561889 systemd[1]: Created slice kubepods-besteffort-pod1baabeaa_02cd_4527_b751_5461b91a05f8.slice - libcontainer container kubepods-besteffort-pod1baabeaa_02cd_4527_b751_5461b91a05f8.slice. Sep 8 23:55:15.665315 kubelet[1859]: I0908 23:55:15.665254 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc52q\" (UniqueName: \"kubernetes.io/projected/1baabeaa-02cd-4527-b751-5461b91a05f8-kube-api-access-hc52q\") pod \"nginx-deployment-7fcdb87857-9nqdf\" (UID: \"1baabeaa-02cd-4527-b751-5461b91a05f8\") " pod="default/nginx-deployment-7fcdb87857-9nqdf" Sep 8 23:55:15.837202 systemd-networkd[1435]: cilium_vxlan: Gained IPv6LL Sep 8 23:55:15.867144 containerd[1507]: time="2025-09-08T23:55:15.867055678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-9nqdf,Uid:1baabeaa-02cd-4527-b751-5461b91a05f8,Namespace:default,Attempt:0,}" Sep 8 23:55:15.923046 kernel: eth0: renamed from tmpc6109 Sep 8 23:55:15.930101 systemd-networkd[1435]: lxcb26688f37cdb: Link UP Sep 8 23:55:15.932653 systemd-networkd[1435]: lxcb26688f37cdb: Gained carrier Sep 8 23:55:15.961515 kubelet[1859]: E0908 23:55:15.961443 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:16.553681 kubelet[1859]: E0908 23:55:16.553619 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:16.962697 kubelet[1859]: E0908 23:55:16.962486 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:17.245438 systemd-networkd[1435]: lxc_health: Gained IPv6LL Sep 8 23:55:17.556190 kubelet[1859]: E0908 23:55:17.556142 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:55:17.757298 systemd-networkd[1435]: lxcb26688f37cdb: Gained IPv6LL Sep 8 23:55:17.963315 kubelet[1859]: E0908 23:55:17.962993 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:18.963323 kubelet[1859]: E0908 23:55:18.963260 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:19.964176 kubelet[1859]: E0908 23:55:19.964090 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:20.470725 containerd[1507]: time="2025-09-08T23:55:20.470546596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:55:20.470725 containerd[1507]: time="2025-09-08T23:55:20.470637048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:55:20.470725 containerd[1507]: time="2025-09-08T23:55:20.470650703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:20.471363 containerd[1507]: time="2025-09-08T23:55:20.470781953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:20.501467 systemd[1]: Started cri-containerd-c6109f8177aef4adfe1199d6e0e99ef20c85ba85e1738bedb9507a6bae8ee9a6.scope - libcontainer container c6109f8177aef4adfe1199d6e0e99ef20c85ba85e1738bedb9507a6bae8ee9a6. Sep 8 23:55:20.518186 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:55:20.550216 containerd[1507]: time="2025-09-08T23:55:20.550154677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-9nqdf,Uid:1baabeaa-02cd-4527-b751-5461b91a05f8,Namespace:default,Attempt:0,} returns sandbox id \"c6109f8177aef4adfe1199d6e0e99ef20c85ba85e1738bedb9507a6bae8ee9a6\"" Sep 8 23:55:20.551825 containerd[1507]: time="2025-09-08T23:55:20.551773118Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 8 23:55:20.910481 update_engine[1491]: I20250908 23:55:20.910305 1491 update_attempter.cc:509] Updating boot flags... Sep 8 23:55:20.943087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2982) Sep 8 23:55:20.965100 kubelet[1859]: E0908 23:55:20.964410 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:20.987071 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2986) Sep 8 23:55:21.037405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2986) Sep 8 23:55:21.965659 kubelet[1859]: E0908 23:55:21.965558 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:22.966306 kubelet[1859]: E0908 23:55:22.966248 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:23.846177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount841358218.mount: Deactivated successfully. 
Sep 8 23:55:23.967182 kubelet[1859]: E0908 23:55:23.967102 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:24.967904 kubelet[1859]: E0908 23:55:24.967751 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:25.467984 containerd[1507]: time="2025-09-08T23:55:25.467899710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:25.469099 containerd[1507]: time="2025-09-08T23:55:25.468927475Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73307810" Sep 8 23:55:25.470439 containerd[1507]: time="2025-09-08T23:55:25.470388089Z" level=info msg="ImageCreate event name:\"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:25.473804 containerd[1507]: time="2025-09-08T23:55:25.473745500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:25.474987 containerd[1507]: time="2025-09-08T23:55:25.474925743Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"73307688\" in 4.923106527s" Sep 8 23:55:25.474987 containerd[1507]: time="2025-09-08T23:55:25.474977822Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 8 23:55:25.477494 containerd[1507]: time="2025-09-08T23:55:25.477457493Z" level=info msg="CreateContainer within sandbox \"c6109f8177aef4adfe1199d6e0e99ef20c85ba85e1738bedb9507a6bae8ee9a6\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 8 23:55:25.495437 containerd[1507]: time="2025-09-08T23:55:25.495367216Z" level=info msg="CreateContainer within sandbox \"c6109f8177aef4adfe1199d6e0e99ef20c85ba85e1738bedb9507a6bae8ee9a6\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bb8aaff8d34b4e60da7ea0889f876292f88cd95fe400e05c26fef87741748617\"" Sep 8 23:55:25.497075 containerd[1507]: time="2025-09-08T23:55:25.496056300Z" level=info msg="StartContainer for \"bb8aaff8d34b4e60da7ea0889f876292f88cd95fe400e05c26fef87741748617\"" Sep 8 23:55:25.541336 systemd[1]: Started cri-containerd-bb8aaff8d34b4e60da7ea0889f876292f88cd95fe400e05c26fef87741748617.scope - libcontainer container bb8aaff8d34b4e60da7ea0889f876292f88cd95fe400e05c26fef87741748617. 
Sep 8 23:55:25.585982 containerd[1507]: time="2025-09-08T23:55:25.585886579Z" level=info msg="StartContainer for \"bb8aaff8d34b4e60da7ea0889f876292f88cd95fe400e05c26fef87741748617\" returns successfully" Sep 8 23:55:25.968361 kubelet[1859]: E0908 23:55:25.968245 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:26.596742 kubelet[1859]: I0908 23:55:26.596606 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-9nqdf" podStartSLOduration=6.672069219 podStartE2EDuration="11.596584004s" podCreationTimestamp="2025-09-08 23:55:15 +0000 UTC" firstStartedPulling="2025-09-08 23:55:20.551308016 +0000 UTC m=+28.284167942" lastFinishedPulling="2025-09-08 23:55:25.475822801 +0000 UTC m=+33.208682727" observedRunningTime="2025-09-08 23:55:26.596332438 +0000 UTC m=+34.329192375" watchObservedRunningTime="2025-09-08 23:55:26.596584004 +0000 UTC m=+34.329443930" Sep 8 23:55:26.969408 kubelet[1859]: E0908 23:55:26.969218 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:27.970263 kubelet[1859]: E0908 23:55:27.970141 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:28.970776 kubelet[1859]: E0908 23:55:28.970697 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:29.971176 kubelet[1859]: E0908 23:55:29.971099 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:30.972363 kubelet[1859]: E0908 23:55:30.972289 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:31.972836 kubelet[1859]: E0908 23:55:31.972753 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:32.707441 systemd[1]: Created slice kubepods-besteffort-podaa3b6983_436f_4b16_bb1d_a360d061f5c3.slice - libcontainer container kubepods-besteffort-podaa3b6983_436f_4b16_bb1d_a360d061f5c3.slice. 
Sep 8 23:55:32.799884 kubelet[1859]: I0908 23:55:32.799820 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/aa3b6983-436f-4b16-bb1d-a360d061f5c3-data\") pod \"nfs-server-provisioner-0\" (UID: \"aa3b6983-436f-4b16-bb1d-a360d061f5c3\") " pod="default/nfs-server-provisioner-0" Sep 8 23:55:32.799884 kubelet[1859]: I0908 23:55:32.799883 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2lj\" (UniqueName: \"kubernetes.io/projected/aa3b6983-436f-4b16-bb1d-a360d061f5c3-kube-api-access-7l2lj\") pod \"nfs-server-provisioner-0\" (UID: \"aa3b6983-436f-4b16-bb1d-a360d061f5c3\") " pod="default/nfs-server-provisioner-0" Sep 8 23:55:32.859525 kubelet[1859]: E0908 23:55:32.859473 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:32.973394 kubelet[1859]: E0908 23:55:32.973207 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:33.011111 containerd[1507]: time="2025-09-08T23:55:33.011048771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aa3b6983-436f-4b16-bb1d-a360d061f5c3,Namespace:default,Attempt:0,}" Sep 8 23:55:33.068080 kernel: eth0: renamed from tmpdcd5c Sep 8 23:55:33.076758 systemd-networkd[1435]: lxc62c8767cfbba: Link UP Sep 8 23:55:33.078298 systemd-networkd[1435]: lxc62c8767cfbba: Gained carrier Sep 8 23:55:33.581590 containerd[1507]: time="2025-09-08T23:55:33.581448936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:55:33.581590 containerd[1507]: time="2025-09-08T23:55:33.581524769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:55:33.581590 containerd[1507]: time="2025-09-08T23:55:33.581551149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:33.581799 containerd[1507]: time="2025-09-08T23:55:33.581655656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:33.602389 systemd[1]: Started cri-containerd-dcd5c51d95ca7194d474eab4e8df723c82879be668dec04bf489a2eddecc31d6.scope - libcontainer container dcd5c51d95ca7194d474eab4e8df723c82879be668dec04bf489a2eddecc31d6. 
Sep 8 23:55:33.618429 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:55:33.645376 containerd[1507]: time="2025-09-08T23:55:33.645320938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aa3b6983-436f-4b16-bb1d-a360d061f5c3,Namespace:default,Attempt:0,} returns sandbox id \"dcd5c51d95ca7194d474eab4e8df723c82879be668dec04bf489a2eddecc31d6\"" Sep 8 23:55:33.647151 containerd[1507]: time="2025-09-08T23:55:33.647110262Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 8 23:55:33.974650 kubelet[1859]: E0908 23:55:33.974426 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:34.845373 systemd-networkd[1435]: lxc62c8767cfbba: Gained IPv6LL Sep 8 23:55:34.975391 kubelet[1859]: E0908 23:55:34.975330 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:35.976276 kubelet[1859]: E0908 23:55:35.976223 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:36.371354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438365552.mount: Deactivated successfully. Sep 8 23:55:36.976972 kubelet[1859]: E0908 23:55:36.976918 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:37.978284 kubelet[1859]: E0908 23:55:37.978132 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:38.978792 kubelet[1859]: E0908 23:55:38.978722 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:38.981684 containerd[1507]: time="2025-09-08T23:55:38.981590140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:38.982423 containerd[1507]: time="2025-09-08T23:55:38.982353747Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Sep 8 23:55:38.983676 containerd[1507]: time="2025-09-08T23:55:38.983631032Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:38.986264 containerd[1507]: time="2025-09-08T23:55:38.986227891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:38.987320 containerd[1507]: time="2025-09-08T23:55:38.987279341Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.34013258s" Sep 8 23:55:38.987388 containerd[1507]: time="2025-09-08T23:55:38.987320278Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image 
reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Sep 8 23:55:38.989698 containerd[1507]: time="2025-09-08T23:55:38.989671846Z" level=info msg="CreateContainer within sandbox \"dcd5c51d95ca7194d474eab4e8df723c82879be668dec04bf489a2eddecc31d6\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 8 23:55:39.003225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083275249.mount: Deactivated successfully. Sep 8 23:55:39.007384 containerd[1507]: time="2025-09-08T23:55:39.007312254Z" level=info msg="CreateContainer within sandbox \"dcd5c51d95ca7194d474eab4e8df723c82879be668dec04bf489a2eddecc31d6\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"70dd6cb3bf6c52ed9e588ca4a0590b5d5eb9172d6e4dc946452d4f258d80d458\"" Sep 8 23:55:39.007963 containerd[1507]: time="2025-09-08T23:55:39.007907955Z" level=info msg="StartContainer for \"70dd6cb3bf6c52ed9e588ca4a0590b5d5eb9172d6e4dc946452d4f258d80d458\"" Sep 8 23:55:39.091192 systemd[1]: Started cri-containerd-70dd6cb3bf6c52ed9e588ca4a0590b5d5eb9172d6e4dc946452d4f258d80d458.scope - libcontainer container 70dd6cb3bf6c52ed9e588ca4a0590b5d5eb9172d6e4dc946452d4f258d80d458. Sep 8 23:55:39.383936 containerd[1507]: time="2025-09-08T23:55:39.383622932Z" level=info msg="StartContainer for \"70dd6cb3bf6c52ed9e588ca4a0590b5d5eb9172d6e4dc946452d4f258d80d458\" returns successfully" Sep 8 23:55:39.979207 kubelet[1859]: E0908 23:55:39.979124 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:39.997628 kubelet[1859]: I0908 23:55:39.997500 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.655875705 podStartE2EDuration="7.997479656s" podCreationTimestamp="2025-09-08 23:55:32 +0000 UTC" firstStartedPulling="2025-09-08 23:55:33.646736708 +0000 UTC m=+41.379596634" lastFinishedPulling="2025-09-08 23:55:38.988340659 +0000 UTC m=+46.721200585" observedRunningTime="2025-09-08 23:55:39.997332879 +0000 UTC m=+47.730192805" watchObservedRunningTime="2025-09-08 23:55:39.997479656 +0000 UTC m=+47.730339582" Sep 8 23:55:40.979588 kubelet[1859]: E0908 23:55:40.979496 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:41.979931 kubelet[1859]: E0908 23:55:41.979845 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:42.980175 kubelet[1859]: E0908 23:55:42.980069 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:43.980963 kubelet[1859]: E0908 23:55:43.980888 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:44.981441 kubelet[1859]: E0908 23:55:44.981355 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:45.982095 kubelet[1859]: E0908 23:55:45.981978 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:46.982379 kubelet[1859]: E0908 23:55:46.982296 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:47.982510 kubelet[1859]: E0908 23:55:47.982443 1859 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:48.982862 kubelet[1859]: E0908 23:55:48.982798 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:49.332101 systemd[1]: Created slice kubepods-besteffort-podf86d857d_81b6_4e34_92de_cdae77c9c457.slice - libcontainer container kubepods-besteffort-podf86d857d_81b6_4e34_92de_cdae77c9c457.slice. Sep 8 23:55:49.510675 kubelet[1859]: I0908 23:55:49.510594 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d4fb362d-8a93-419e-9005-48295a0567bf\" (UniqueName: \"kubernetes.io/nfs/f86d857d-81b6-4e34-92de-cdae77c9c457-pvc-d4fb362d-8a93-419e-9005-48295a0567bf\") pod \"test-pod-1\" (UID: \"f86d857d-81b6-4e34-92de-cdae77c9c457\") " pod="default/test-pod-1" Sep 8 23:55:49.510675 kubelet[1859]: I0908 23:55:49.510661 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc65s\" (UniqueName: \"kubernetes.io/projected/f86d857d-81b6-4e34-92de-cdae77c9c457-kube-api-access-jc65s\") pod \"test-pod-1\" (UID: \"f86d857d-81b6-4e34-92de-cdae77c9c457\") " pod="default/test-pod-1" Sep 8 23:55:49.644141 kernel: FS-Cache: Loaded Sep 8 23:55:49.713754 kernel: RPC: Registered named UNIX socket transport module. Sep 8 23:55:49.713971 kernel: RPC: Registered udp transport module. Sep 8 23:55:49.714040 kernel: RPC: Registered tcp transport module. Sep 8 23:55:49.714460 kernel: RPC: Registered tcp-with-tls transport module. Sep 8 23:55:49.715301 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 8 23:55:49.933230 kernel: NFS: Registering the id_resolver key type Sep 8 23:55:49.933405 kernel: Key type id_resolver registered Sep 8 23:55:49.933469 kernel: Key type id_legacy registered Sep 8 23:55:49.960709 nfsidmap[3268]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 8 23:55:49.966211 nfsidmap[3271]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 8 23:55:49.983858 kubelet[1859]: E0908 23:55:49.983815 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:50.236209 containerd[1507]: time="2025-09-08T23:55:50.236065040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f86d857d-81b6-4e34-92de-cdae77c9c457,Namespace:default,Attempt:0,}" Sep 8 23:55:50.278058 kernel: eth0: renamed from tmp8f6d8 Sep 8 23:55:50.285262 systemd-networkd[1435]: lxc7bd3099690db: Link UP Sep 8 23:55:50.285758 systemd-networkd[1435]: lxc7bd3099690db: Gained carrier Sep 8 23:55:50.489832 containerd[1507]: time="2025-09-08T23:55:50.489448471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:55:50.489832 containerd[1507]: time="2025-09-08T23:55:50.489527600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:55:50.489832 containerd[1507]: time="2025-09-08T23:55:50.489542508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:50.489832 containerd[1507]: time="2025-09-08T23:55:50.489645983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:50.511212 systemd[1]: Started cri-containerd-8f6d8f49b8040c64df51df095126cfb8f6866ccda723581e2643f3d8c153aaee.scope - libcontainer container 8f6d8f49b8040c64df51df095126cfb8f6866ccda723581e2643f3d8c153aaee. Sep 8 23:55:50.528439 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:55:50.554699 containerd[1507]: time="2025-09-08T23:55:50.554660952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f86d857d-81b6-4e34-92de-cdae77c9c457,Namespace:default,Attempt:0,} returns sandbox id \"8f6d8f49b8040c64df51df095126cfb8f6866ccda723581e2643f3d8c153aaee\"" Sep 8 23:55:50.556199 containerd[1507]: time="2025-09-08T23:55:50.556160909Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 8 23:55:50.912080 containerd[1507]: time="2025-09-08T23:55:50.911995281Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:55:50.912814 containerd[1507]: time="2025-09-08T23:55:50.912762372Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Sep 8 23:55:50.915704 containerd[1507]: time="2025-09-08T23:55:50.915659946Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"73307688\" in 359.456826ms" Sep 8 23:55:50.915704 containerd[1507]: time="2025-09-08T23:55:50.915688169Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 8 23:55:50.917941 containerd[1507]: time="2025-09-08T23:55:50.917905105Z" level=info msg="CreateContainer within sandbox \"8f6d8f49b8040c64df51df095126cfb8f6866ccda723581e2643f3d8c153aaee\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 8 23:55:50.933692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857334101.mount: Deactivated successfully. Sep 8 23:55:50.937014 containerd[1507]: time="2025-09-08T23:55:50.936946251Z" level=info msg="CreateContainer within sandbox \"8f6d8f49b8040c64df51df095126cfb8f6866ccda723581e2643f3d8c153aaee\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9ab8909e8e1905ac41d23b98ed1b154a5f0255bb81ed3e7635f8025df11db030\"" Sep 8 23:55:50.937660 containerd[1507]: time="2025-09-08T23:55:50.937620779Z" level=info msg="StartContainer for \"9ab8909e8e1905ac41d23b98ed1b154a5f0255bb81ed3e7635f8025df11db030\"" Sep 8 23:55:50.974285 systemd[1]: Started cri-containerd-9ab8909e8e1905ac41d23b98ed1b154a5f0255bb81ed3e7635f8025df11db030.scope - libcontainer container 9ab8909e8e1905ac41d23b98ed1b154a5f0255bb81ed3e7635f8025df11db030. 
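Around here the kernel loads the SUNRPC/NFS client modules and kubelet attaches an NFS-backed persistent volume (pvc-d4fb362d-8a93-419e-9005-48295a0567bf, served by the provisioner started earlier) to default/test-pod-1; the nfsidmap warnings are the usual, typically benign sign that the NFSv4 id-mapping domain ("localdomain") does not cover the server's name. A hedged Go sketch of how such a pod references the claim follows; the claim name and mount path are hypothetical, since the log only records the bound volume name.

```go
// Sketch only: how test-pod-1 could consume the NFS-backed claim.
// The claim name "test-claim" and mount path are hypothetical; the log only
// shows the bound volume name (pvc-d4fb362d-8a93-419e-9005-48295a0567bf).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod-1", Namespace: "default"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test-claim", // hypothetical
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "ghcr.io/flatcar/nginx:latest", // the image pulled later in the log
				VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/mnt"}},
			}},
		},
	}
	fmt.Println("claim referenced by test-pod-1:", pod.Spec.Volumes[0].PersistentVolumeClaim.ClaimName)
}
```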
Sep 8 23:55:50.984274 kubelet[1859]: E0908 23:55:50.984224 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:51.008704 containerd[1507]: time="2025-09-08T23:55:51.008641851Z" level=info msg="StartContainer for \"9ab8909e8e1905ac41d23b98ed1b154a5f0255bb81ed3e7635f8025df11db030\" returns successfully" Sep 8 23:55:51.689033 kubelet[1859]: I0908 23:55:51.688944 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.328381756 podStartE2EDuration="19.6889229s" podCreationTimestamp="2025-09-08 23:55:32 +0000 UTC" firstStartedPulling="2025-09-08 23:55:50.555879582 +0000 UTC m=+58.288739508" lastFinishedPulling="2025-09-08 23:55:50.916420726 +0000 UTC m=+58.649280652" observedRunningTime="2025-09-08 23:55:51.688892573 +0000 UTC m=+59.421752499" watchObservedRunningTime="2025-09-08 23:55:51.6889229 +0000 UTC m=+59.421782826" Sep 8 23:55:51.985151 kubelet[1859]: E0908 23:55:51.984914 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:52.317351 systemd-networkd[1435]: lxc7bd3099690db: Gained IPv6LL Sep 8 23:55:52.859438 kubelet[1859]: E0908 23:55:52.859368 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:52.985767 kubelet[1859]: E0908 23:55:52.985675 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:53.986350 kubelet[1859]: E0908 23:55:53.986284 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:54.986644 kubelet[1859]: E0908 23:55:54.986542 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:55.670406 containerd[1507]: time="2025-09-08T23:55:55.670321711Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:55:55.680997 containerd[1507]: time="2025-09-08T23:55:55.680943038Z" level=info msg="StopContainer for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" with timeout 2 (s)" Sep 8 23:55:55.681371 containerd[1507]: time="2025-09-08T23:55:55.681317482Z" level=info msg="Stop container \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" with signal terminated" Sep 8 23:55:55.690199 systemd-networkd[1435]: lxc_health: Link DOWN Sep 8 23:55:55.690216 systemd-networkd[1435]: lxc_health: Lost carrier Sep 8 23:55:55.708748 systemd[1]: cri-containerd-3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e.scope: Deactivated successfully. Sep 8 23:55:55.709393 systemd[1]: cri-containerd-3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e.scope: Consumed 9.042s CPU time, 121M memory peak, 244K read from disk, 13.3M written to disk. Sep 8 23:55:55.731586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e-rootfs.mount: Deactivated successfully. 
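The pod_startup_latency_tracker entry above for default/test-pod-1 encodes simple arithmetic: the E2E duration spans pod creation to the observed running time, and the SLO duration is the E2E duration minus the image pull window (firstStartedPulling to lastFinishedPulling). The snippet below, standard library only, recomputes those numbers from the timestamps quoted in the log.

```go
// Standard-library check of the arithmetic in the pod_startup_latency_tracker
// entry above for default/test-pod-1. Timestamps and durations are copied from the log.
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Fractional seconds in the input are accepted even though the layout has none.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created := mustParse(layout, "2025-09-08 23:55:32 +0000 UTC")
	running := mustParse(layout, "2025-09-08 23:55:51.688892573 +0000 UTC")
	pullStart := mustParse(layout, "2025-09-08 23:55:50.555879582 +0000 UTC")
	pullEnd := mustParse(layout, "2025-09-08 23:55:50.916420726 +0000 UTC")

	e2e, _ := time.ParseDuration("19.6889229s") // podStartE2EDuration as reported
	pull := pullEnd.Sub(pullStart)

	fmt.Println("observedRunningTime - creation:", running.Sub(created)) // ~19.688892573s
	fmt.Println("image pull window:             ", pull)                 // 360.541144ms
	fmt.Println("E2E - pull window:             ", e2e-pull)             // 19.328381756s = podStartSLOduration
}
```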
Sep 8 23:55:55.741413 containerd[1507]: time="2025-09-08T23:55:55.741287538Z" level=info msg="shim disconnected" id=3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e namespace=k8s.io Sep 8 23:55:55.741413 containerd[1507]: time="2025-09-08T23:55:55.741359052Z" level=warning msg="cleaning up after shim disconnected" id=3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e namespace=k8s.io Sep 8 23:55:55.741413 containerd[1507]: time="2025-09-08T23:55:55.741368199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:55.765194 containerd[1507]: time="2025-09-08T23:55:55.765111531Z" level=info msg="StopContainer for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" returns successfully" Sep 8 23:55:55.766020 containerd[1507]: time="2025-09-08T23:55:55.765968099Z" level=info msg="StopPodSandbox for \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\"" Sep 8 23:55:55.766119 containerd[1507]: time="2025-09-08T23:55:55.766047178Z" level=info msg="Container to stop \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:55.766164 containerd[1507]: time="2025-09-08T23:55:55.766118702Z" level=info msg="Container to stop \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:55.766164 containerd[1507]: time="2025-09-08T23:55:55.766135704Z" level=info msg="Container to stop \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:55.766164 containerd[1507]: time="2025-09-08T23:55:55.766149199Z" level=info msg="Container to stop \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:55.766299 containerd[1507]: time="2025-09-08T23:55:55.766163987Z" level=info msg="Container to stop \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 8 23:55:55.769090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3-shm.mount: Deactivated successfully. Sep 8 23:55:55.775801 systemd[1]: cri-containerd-9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3.scope: Deactivated successfully. Sep 8 23:55:55.802390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3-rootfs.mount: Deactivated successfully. 
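The teardown above is kubelet driving containerd over the CRI: StopContainer with a 2-second grace period, then StopPodSandbox once every container in the sandbox has reached CONTAINER_EXITED. Below is a hedged sketch of the same two calls made directly against the CRI socket; the container and sandbox IDs are taken from the log, while the socket path is containerd's usual default and is assumed here.

```go
// Sketch only: issuing StopContainer/StopPodSandbox over the CRI gRPC API,
// mirroring what kubelet does in the log above. Requires google.golang.org/grpc
// and k8s.io/cri-api. The socket path is containerd's usual default (assumed).
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// IDs from the log above.
	const cid = "3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e"
	const sandbox = "9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3"

	// "StopContainer ... with timeout 2 (s)": graceful stop, force-kill after 2s.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{ContainerId: cid, Timeout: 2}); err != nil {
		log.Fatal(err)
	}
	// Once all containers have exited, the sandbox itself is stopped.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandbox}); err != nil {
		log.Fatal(err)
	}
}
```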
Sep 8 23:55:55.808860 containerd[1507]: time="2025-09-08T23:55:55.808773490Z" level=info msg="shim disconnected" id=9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3 namespace=k8s.io Sep 8 23:55:55.808860 containerd[1507]: time="2025-09-08T23:55:55.808851577Z" level=warning msg="cleaning up after shim disconnected" id=9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3 namespace=k8s.io Sep 8 23:55:55.808860 containerd[1507]: time="2025-09-08T23:55:55.808864230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:55.829162 containerd[1507]: time="2025-09-08T23:55:55.829086108Z" level=info msg="TearDown network for sandbox \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" successfully" Sep 8 23:55:55.829162 containerd[1507]: time="2025-09-08T23:55:55.829139508Z" level=info msg="StopPodSandbox for \"9d408dfb46e22573a170eefcd6b8d1b107da1e2aedb94501a902e8e69b2be8f3\" returns successfully" Sep 8 23:55:55.956465 kubelet[1859]: I0908 23:55:55.956301 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-cgroup\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956465 kubelet[1859]: I0908 23:55:55.956360 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-xtables-lock\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956465 kubelet[1859]: I0908 23:55:55.956385 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-bpf-maps\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956465 kubelet[1859]: I0908 23:55:55.956406 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-hostproc\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956465 kubelet[1859]: I0908 23:55:55.956438 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-hubble-tls\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956465 kubelet[1859]: I0908 23:55:55.956470 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-lib-modules\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956757 kubelet[1859]: I0908 23:55:55.956491 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cni-path\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956757 kubelet[1859]: I0908 23:55:55.956480 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.956757 kubelet[1859]: I0908 23:55:55.956515 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.956757 kubelet[1859]: I0908 23:55:55.956515 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-run\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956757 kubelet[1859]: I0908 23:55:55.956480 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.956888 kubelet[1859]: I0908 23:55:55.956565 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.956888 kubelet[1859]: I0908 23:55:55.956564 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.956888 kubelet[1859]: I0908 23:55:55.956581 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mx7s\" (UniqueName: \"kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-kube-api-access-9mx7s\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956888 kubelet[1859]: I0908 23:55:55.956602 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-config-path\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.956888 kubelet[1859]: I0908 23:55:55.956608 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956617 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-etc-cni-netd\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956634 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-net\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956654 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f62415d-3bb0-41ac-bb6e-8b207f413368-clustermesh-secrets\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956675 1859 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-kernel\") pod \"4f62415d-3bb0-41ac-bb6e-8b207f413368\" (UID: \"4f62415d-3bb0-41ac-bb6e-8b207f413368\") " Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956711 1859 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-cgroup\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956721 1859 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-lib-modules\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:55.957035 kubelet[1859]: I0908 23:55:55.956731 1859 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cni-path\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:55.957324 kubelet[1859]: I0908 23:55:55.956739 1859 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-xtables-lock\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:55.957324 kubelet[1859]: I0908 23:55:55.956747 1859 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-hostproc\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:55.957324 kubelet[1859]: I0908 23:55:55.956755 1859 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-run\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:55.957324 kubelet[1859]: I0908 23:55:55.956775 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.958421 kubelet[1859]: I0908 23:55:55.958371 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.958490 kubelet[1859]: I0908 23:55:55.958422 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.958490 kubelet[1859]: I0908 23:55:55.958450 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:55.960056 kubelet[1859]: I0908 23:55:55.959921 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:55.961566 kubelet[1859]: I0908 23:55:55.961517 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:55:55.961742 systemd[1]: var-lib-kubelet-pods-4f62415d\x2d3bb0\x2d41ac\x2dbb6e\x2d8b207f413368-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:55:55.963767 kubelet[1859]: I0908 23:55:55.963723 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f62415d-3bb0-41ac-bb6e-8b207f413368-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:55:55.963846 kubelet[1859]: I0908 23:55:55.963723 1859 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-kube-api-access-9mx7s" (OuterVolumeSpecName: "kube-api-access-9mx7s") pod "4f62415d-3bb0-41ac-bb6e-8b207f413368" (UID: "4f62415d-3bb0-41ac-bb6e-8b207f413368"). InnerVolumeSpecName "kube-api-access-9mx7s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:55.987032 kubelet[1859]: E0908 23:55:55.986931 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057287 1859 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9mx7s\" (UniqueName: \"kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-kube-api-access-9mx7s\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057331 1859 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f62415d-3bb0-41ac-bb6e-8b207f413368-cilium-config-path\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057341 1859 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f62415d-3bb0-41ac-bb6e-8b207f413368-clustermesh-secrets\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057352 1859 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-kernel\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057361 1859 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-etc-cni-netd\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057370 1859 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-host-proc-sys-net\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057378 1859 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f62415d-3bb0-41ac-bb6e-8b207f413368-bpf-maps\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.057364 kubelet[1859]: I0908 23:55:56.057386 1859 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f62415d-3bb0-41ac-bb6e-8b207f413368-hubble-tls\") on node \"10.0.0.69\" DevicePath \"\"" Sep 8 23:55:56.654402 systemd[1]: var-lib-kubelet-pods-4f62415d\x2d3bb0\x2d41ac\x2dbb6e\x2d8b207f413368-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9mx7s.mount: Deactivated successfully. Sep 8 23:55:56.654545 systemd[1]: var-lib-kubelet-pods-4f62415d\x2d3bb0\x2d41ac\x2dbb6e\x2d8b207f413368-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:55:56.694353 kubelet[1859]: I0908 23:55:56.694303 1859 scope.go:117] "RemoveContainer" containerID="3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e" Sep 8 23:55:56.698374 containerd[1507]: time="2025-09-08T23:55:56.698322637Z" level=info msg="RemoveContainer for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\"" Sep 8 23:55:56.701891 systemd[1]: Removed slice kubepods-burstable-pod4f62415d_3bb0_41ac_bb6e_8b207f413368.slice - libcontainer container kubepods-burstable-pod4f62415d_3bb0_41ac_bb6e_8b207f413368.slice. 
Sep 8 23:55:56.702016 systemd[1]: kubepods-burstable-pod4f62415d_3bb0_41ac_bb6e_8b207f413368.slice: Consumed 9.314s CPU time, 121.5M memory peak, 244K read from disk, 13.3M written to disk. Sep 8 23:55:56.702584 containerd[1507]: time="2025-09-08T23:55:56.702540698Z" level=info msg="RemoveContainer for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" returns successfully" Sep 8 23:55:56.702814 kubelet[1859]: I0908 23:55:56.702782 1859 scope.go:117] "RemoveContainer" containerID="58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123" Sep 8 23:55:56.703823 containerd[1507]: time="2025-09-08T23:55:56.703794563Z" level=info msg="RemoveContainer for \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\"" Sep 8 23:55:56.707862 containerd[1507]: time="2025-09-08T23:55:56.707822778Z" level=info msg="RemoveContainer for \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\" returns successfully" Sep 8 23:55:56.707991 kubelet[1859]: I0908 23:55:56.707959 1859 scope.go:117] "RemoveContainer" containerID="cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6" Sep 8 23:55:56.709049 containerd[1507]: time="2025-09-08T23:55:56.708826913Z" level=info msg="RemoveContainer for \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\"" Sep 8 23:55:56.712802 containerd[1507]: time="2025-09-08T23:55:56.712763226Z" level=info msg="RemoveContainer for \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\" returns successfully" Sep 8 23:55:56.712952 kubelet[1859]: I0908 23:55:56.712924 1859 scope.go:117] "RemoveContainer" containerID="d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7" Sep 8 23:55:56.714164 containerd[1507]: time="2025-09-08T23:55:56.714113622Z" level=info msg="RemoveContainer for \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\"" Sep 8 23:55:56.718430 containerd[1507]: time="2025-09-08T23:55:56.718395492Z" level=info msg="RemoveContainer for \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\" returns successfully" Sep 8 23:55:56.718633 kubelet[1859]: I0908 23:55:56.718604 1859 scope.go:117] "RemoveContainer" containerID="8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea" Sep 8 23:55:56.719647 containerd[1507]: time="2025-09-08T23:55:56.719590757Z" level=info msg="RemoveContainer for \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\"" Sep 8 23:55:56.723467 containerd[1507]: time="2025-09-08T23:55:56.723432040Z" level=info msg="RemoveContainer for \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\" returns successfully" Sep 8 23:55:56.723612 kubelet[1859]: I0908 23:55:56.723585 1859 scope.go:117] "RemoveContainer" containerID="3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e" Sep 8 23:55:56.723828 containerd[1507]: time="2025-09-08T23:55:56.723785444Z" level=error msg="ContainerStatus for \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\": not found" Sep 8 23:55:56.723959 kubelet[1859]: E0908 23:55:56.723934 1859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\": not found" containerID="3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e" Sep 
8 23:55:56.724075 kubelet[1859]: I0908 23:55:56.723967 1859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e"} err="failed to get container status \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b8c330f57bb7415fad92329e34cc2140c4be2f3647210ba36a327cfa7e4cb9e\": not found" Sep 8 23:55:56.724075 kubelet[1859]: I0908 23:55:56.724074 1859 scope.go:117] "RemoveContainer" containerID="58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123" Sep 8 23:55:56.724256 containerd[1507]: time="2025-09-08T23:55:56.724226262Z" level=error msg="ContainerStatus for \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\": not found" Sep 8 23:55:56.724382 kubelet[1859]: E0908 23:55:56.724360 1859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\": not found" containerID="58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123" Sep 8 23:55:56.724430 kubelet[1859]: I0908 23:55:56.724387 1859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123"} err="failed to get container status \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\": rpc error: code = NotFound desc = an error occurred when try to find container \"58fb38f328488d27dffd473bdcca5fd80407a64085ae59dec2b1c8821a03e123\": not found" Sep 8 23:55:56.724430 kubelet[1859]: I0908 23:55:56.724403 1859 scope.go:117] "RemoveContainer" containerID="cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6" Sep 8 23:55:56.724562 containerd[1507]: time="2025-09-08T23:55:56.724530854Z" level=error msg="ContainerStatus for \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\": not found" Sep 8 23:55:56.724672 kubelet[1859]: E0908 23:55:56.724644 1859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\": not found" containerID="cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6" Sep 8 23:55:56.724728 kubelet[1859]: I0908 23:55:56.724672 1859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6"} err="failed to get container status \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf8b4a325d30887761c67e2bd5cdf4cb15c4ec27bd53bc237742be9e43dc86b6\": not found" Sep 8 23:55:56.724728 kubelet[1859]: I0908 23:55:56.724694 1859 scope.go:117] "RemoveContainer" containerID="d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7" Sep 8 23:55:56.724917 containerd[1507]: time="2025-09-08T23:55:56.724875671Z" level=error 
msg="ContainerStatus for \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\": not found" Sep 8 23:55:56.725077 kubelet[1859]: E0908 23:55:56.725051 1859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\": not found" containerID="d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7" Sep 8 23:55:56.725151 kubelet[1859]: I0908 23:55:56.725080 1859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7"} err="failed to get container status \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d15d9e1553832761df9813e6dd88221a0ab45b5c64e92129c5e999c8574ea4a7\": not found" Sep 8 23:55:56.725151 kubelet[1859]: I0908 23:55:56.725102 1859 scope.go:117] "RemoveContainer" containerID="8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea" Sep 8 23:55:56.725327 containerd[1507]: time="2025-09-08T23:55:56.725288947Z" level=error msg="ContainerStatus for \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\": not found" Sep 8 23:55:56.725508 kubelet[1859]: E0908 23:55:56.725477 1859 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\": not found" containerID="8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea" Sep 8 23:55:56.725560 kubelet[1859]: I0908 23:55:56.725518 1859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea"} err="failed to get container status \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c0bc645de790fbdaf8902770263e9768c552561c53113a47790ca0b741498ea\": not found" Sep 8 23:55:56.987751 kubelet[1859]: E0908 23:55:56.987579 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:57.317723 kubelet[1859]: I0908 23:55:57.317667 1859 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f62415d-3bb0-41ac-bb6e-8b207f413368" path="/var/lib/kubelet/pods/4f62415d-3bb0-41ac-bb6e-8b207f413368/volumes" Sep 8 23:55:57.988141 kubelet[1859]: E0908 23:55:57.988057 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:55:58.354903 kubelet[1859]: E0908 23:55:58.354852 1859 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:55:58.988531 kubelet[1859]: E0908 23:55:58.988454 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 8 23:55:59.826221 kubelet[1859]: I0908 23:55:59.826154 1859 memory_manager.go:355] "RemoveStaleState removing state" podUID="4f62415d-3bb0-41ac-bb6e-8b207f413368" containerName="cilium-agent" Sep 8 23:55:59.834119 systemd[1]: Created slice kubepods-besteffort-pod1be49590_a997_40f9_8db2_1be8b783e0c0.slice - libcontainer container kubepods-besteffort-pod1be49590_a997_40f9_8db2_1be8b783e0c0.slice. Sep 8 23:55:59.862075 systemd[1]: Created slice kubepods-burstable-pod1d821f05_4444_4c8b_9514_cd2f386a662d.slice - libcontainer container kubepods-burstable-pod1d821f05_4444_4c8b_9514_cd2f386a662d.slice. Sep 8 23:55:59.978428 kubelet[1859]: I0908 23:55:59.978337 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-cilium-run\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978428 kubelet[1859]: I0908 23:55:59.978388 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-bpf-maps\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978428 kubelet[1859]: I0908 23:55:59.978413 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-hostproc\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978428 kubelet[1859]: I0908 23:55:59.978434 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1d821f05-4444-4c8b-9514-cd2f386a662d-cilium-ipsec-secrets\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978428 kubelet[1859]: I0908 23:55:59.978462 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-host-proc-sys-net\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978780 kubelet[1859]: I0908 23:55:59.978482 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-cilium-cgroup\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978780 kubelet[1859]: I0908 23:55:59.978505 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-cni-path\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978780 kubelet[1859]: I0908 23:55:59.978523 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d821f05-4444-4c8b-9514-cd2f386a662d-cilium-config-path\") pod \"cilium-xfpsm\" (UID: 
\"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978780 kubelet[1859]: I0908 23:55:59.978543 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-host-proc-sys-kernel\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978780 kubelet[1859]: I0908 23:55:59.978563 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g58q\" (UniqueName: \"kubernetes.io/projected/1be49590-a997-40f9-8db2-1be8b783e0c0-kube-api-access-6g58q\") pod \"cilium-operator-6c4d7847fc-ck78s\" (UID: \"1be49590-a997-40f9-8db2-1be8b783e0c0\") " pod="kube-system/cilium-operator-6c4d7847fc-ck78s" Sep 8 23:55:59.978939 kubelet[1859]: I0908 23:55:59.978643 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d821f05-4444-4c8b-9514-cd2f386a662d-clustermesh-secrets\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978939 kubelet[1859]: I0908 23:55:59.978696 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-etc-cni-netd\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978939 kubelet[1859]: I0908 23:55:59.978726 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-lib-modules\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978939 kubelet[1859]: I0908 23:55:59.978746 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d821f05-4444-4c8b-9514-cd2f386a662d-xtables-lock\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978939 kubelet[1859]: I0908 23:55:59.978769 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d821f05-4444-4c8b-9514-cd2f386a662d-hubble-tls\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.978939 kubelet[1859]: I0908 23:55:59.978810 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv589\" (UniqueName: \"kubernetes.io/projected/1d821f05-4444-4c8b-9514-cd2f386a662d-kube-api-access-wv589\") pod \"cilium-xfpsm\" (UID: \"1d821f05-4444-4c8b-9514-cd2f386a662d\") " pod="kube-system/cilium-xfpsm" Sep 8 23:55:59.979129 kubelet[1859]: I0908 23:55:59.978839 1859 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1be49590-a997-40f9-8db2-1be8b783e0c0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ck78s\" (UID: \"1be49590-a997-40f9-8db2-1be8b783e0c0\") " pod="kube-system/cilium-operator-6c4d7847fc-ck78s" 
Sep 8 23:55:59.988616 kubelet[1859]: E0908 23:55:59.988573 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:00.437352 kubelet[1859]: E0908 23:56:00.437268 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:00.438027 containerd[1507]: time="2025-09-08T23:56:00.437970364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ck78s,Uid:1be49590-a997-40f9-8db2-1be8b783e0c0,Namespace:kube-system,Attempt:0,}" Sep 8 23:56:00.474490 kubelet[1859]: E0908 23:56:00.474441 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:00.475116 containerd[1507]: time="2025-09-08T23:56:00.474953738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfpsm,Uid:1d821f05-4444-4c8b-9514-cd2f386a662d,Namespace:kube-system,Attempt:0,}" Sep 8 23:56:00.545845 containerd[1507]: time="2025-09-08T23:56:00.545538944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:56:00.545845 containerd[1507]: time="2025-09-08T23:56:00.545607963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:56:00.545845 containerd[1507]: time="2025-09-08T23:56:00.545621509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:56:00.545845 containerd[1507]: time="2025-09-08T23:56:00.545716487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:56:00.552614 containerd[1507]: time="2025-09-08T23:56:00.552399092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:56:00.552695 containerd[1507]: time="2025-09-08T23:56:00.552576236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:56:00.552759 containerd[1507]: time="2025-09-08T23:56:00.552720506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:56:00.553558 containerd[1507]: time="2025-09-08T23:56:00.553506722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:56:00.567173 systemd[1]: Started cri-containerd-4fcbe61f7861e8790ee9dff22640024643af14245a98f7a621c4beafc5075044.scope - libcontainer container 4fcbe61f7861e8790ee9dff22640024643af14245a98f7a621c4beafc5075044. Sep 8 23:56:00.572305 systemd[1]: Started cri-containerd-b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf.scope - libcontainer container b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf. 
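The "Nameserver limits exceeded" warnings above come from kubelet capping the resolv.conf it passes to pods at three nameservers; the node evidently lists more, and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive. Below is a small sketch of that truncation over resolv.conf-style input; the full input is hypothetical, since the log shows only the applied result.

```go
// Sketch of the behaviour behind the "Nameserver limits exceeded" warning:
// kubelet keeps at most three nameservers from the node's resolv.conf.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // kubelet's cap, matching the three survivors in the log

func main() {
	// Hypothetical resolv.conf content; the log only shows the applied line.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
search localdomain`

	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, omitting %d of %d\n", len(servers)-maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```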
Sep 8 23:56:00.600035 containerd[1507]: time="2025-09-08T23:56:00.599667575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfpsm,Uid:1d821f05-4444-4c8b-9514-cd2f386a662d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\"" Sep 8 23:56:00.600890 kubelet[1859]: E0908 23:56:00.600867 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:00.604288 containerd[1507]: time="2025-09-08T23:56:00.604254907Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:56:00.611134 containerd[1507]: time="2025-09-08T23:56:00.611081734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ck78s,Uid:1be49590-a997-40f9-8db2-1be8b783e0c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fcbe61f7861e8790ee9dff22640024643af14245a98f7a621c4beafc5075044\"" Sep 8 23:56:00.611879 kubelet[1859]: E0908 23:56:00.611813 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:00.612876 containerd[1507]: time="2025-09-08T23:56:00.612838312Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 8 23:56:00.622624 containerd[1507]: time="2025-09-08T23:56:00.622559272Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140\"" Sep 8 23:56:00.623160 containerd[1507]: time="2025-09-08T23:56:00.623131055Z" level=info msg="StartContainer for \"8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140\"" Sep 8 23:56:00.662437 systemd[1]: Started cri-containerd-8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140.scope - libcontainer container 8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140. Sep 8 23:56:00.695569 containerd[1507]: time="2025-09-08T23:56:00.695411305Z" level=info msg="StartContainer for \"8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140\" returns successfully" Sep 8 23:56:00.707459 kubelet[1859]: E0908 23:56:00.707397 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:00.712045 systemd[1]: cri-containerd-8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140.scope: Deactivated successfully. 
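The operator image above is pulled by tag plus digest (quay.io/cilium/operator-generic:v1.12.5@sha256:…); when a digest is present it pins the image, which is why the resulting image record later in this log carries an empty repo tag and only the repo digest. A rough sketch of splitting such a reference into repository, tag, and digest:

```go
// Splits an image reference of the form repo[:tag][@digest], like the
// cilium-operator reference pulled above. Plain string handling; a real parser
// would use a reference library, this is only a sketch.
package main

import (
	"fmt"
	"strings"
)

func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// The tag is after the last ":" unless that ":" belongs to a registry port,
	// i.e. appears before the last "/".
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println("repo:  ", repo)   // quay.io/cilium/operator-generic
	fmt.Println("tag:   ", tag)    // v1.12.5 (informational once a digest is given)
	fmt.Println("digest:", digest) // sha256:b296eb7f...
}
```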
Sep 8 23:56:00.749247 containerd[1507]: time="2025-09-08T23:56:00.749169831Z" level=info msg="shim disconnected" id=8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140 namespace=k8s.io Sep 8 23:56:00.749247 containerd[1507]: time="2025-09-08T23:56:00.749234443Z" level=warning msg="cleaning up after shim disconnected" id=8ebd0e51bf33e1af88a0b5c59580749756bb5464966898969adf7b5eb33f2140 namespace=k8s.io Sep 8 23:56:00.749247 containerd[1507]: time="2025-09-08T23:56:00.749243680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:56:00.989776 kubelet[1859]: E0908 23:56:00.989630 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:01.711723 kubelet[1859]: E0908 23:56:01.711677 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:01.713162 containerd[1507]: time="2025-09-08T23:56:01.713119072Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:56:01.729272 containerd[1507]: time="2025-09-08T23:56:01.729220199Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e\"" Sep 8 23:56:01.729752 containerd[1507]: time="2025-09-08T23:56:01.729721941Z" level=info msg="StartContainer for \"9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e\"" Sep 8 23:56:01.759157 systemd[1]: Started cri-containerd-9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e.scope - libcontainer container 9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e. Sep 8 23:56:01.788625 containerd[1507]: time="2025-09-08T23:56:01.788574003Z" level=info msg="StartContainer for \"9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e\" returns successfully" Sep 8 23:56:01.797294 systemd[1]: cri-containerd-9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e.scope: Deactivated successfully. Sep 8 23:56:01.823109 containerd[1507]: time="2025-09-08T23:56:01.823037642Z" level=info msg="shim disconnected" id=9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e namespace=k8s.io Sep 8 23:56:01.823109 containerd[1507]: time="2025-09-08T23:56:01.823097635Z" level=warning msg="cleaning up after shim disconnected" id=9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e namespace=k8s.io Sep 8 23:56:01.823109 containerd[1507]: time="2025-09-08T23:56:01.823106802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:56:01.990122 kubelet[1859]: E0908 23:56:01.989874 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:02.088824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9101dae8f6d80c25bfb9179eb12666050dfc7e4b25002e5e1c21015462d44c0e-rootfs.mount: Deactivated successfully. Sep 8 23:56:02.209175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610119762.mount: Deactivated successfully. 
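The "shim disconnected" and "cleaning up after shim disconnected" messages above are the normal teardown after the mount-cgroup init container exits, not a failure; the same pattern repeats for each init container below. The accompanying mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount3610119762.mount are systemd's escaped form of the mount point path ("/" becomes "-", a literal "-" becomes "\x2d"). A rough helper to turn such a unit name back into its path, similar in spirit to systemd-escape --unescape --path:

    import re

    def unit_to_path(unit: str) -> str:
        """Reverse systemd's path escaping for .mount unit names (rough sketch)."""
        name = unit.rsplit(".", 1)[0]                      # drop the ".mount" suffix
        name = name.replace("-", "/")                      # escaped '/' separators come back
        name = re.sub(r"\\x([0-9a-fA-F]{2})",              # decode \x2d and friends
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3610119762.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount3610119762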
Sep 8 23:56:02.481777 containerd[1507]: time="2025-09-08T23:56:02.481724877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:02.482545 containerd[1507]: time="2025-09-08T23:56:02.482508298Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 8 23:56:02.483446 containerd[1507]: time="2025-09-08T23:56:02.483419489Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:02.484660 containerd[1507]: time="2025-09-08T23:56:02.484618590Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.871737518s" Sep 8 23:56:02.484660 containerd[1507]: time="2025-09-08T23:56:02.484646172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 8 23:56:02.486538 containerd[1507]: time="2025-09-08T23:56:02.486489211Z" level=info msg="CreateContainer within sandbox \"4fcbe61f7861e8790ee9dff22640024643af14245a98f7a621c4beafc5075044\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 8 23:56:02.500586 containerd[1507]: time="2025-09-08T23:56:02.500537913Z" level=info msg="CreateContainer within sandbox \"4fcbe61f7861e8790ee9dff22640024643af14245a98f7a621c4beafc5075044\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a05d3f15a8da05d177c86c79a241bac8765f33d2bdec0579db7e7750158f9929\"" Sep 8 23:56:02.501120 containerd[1507]: time="2025-09-08T23:56:02.501080602Z" level=info msg="StartContainer for \"a05d3f15a8da05d177c86c79a241bac8765f33d2bdec0579db7e7750158f9929\"" Sep 8 23:56:02.530169 systemd[1]: Started cri-containerd-a05d3f15a8da05d177c86c79a241bac8765f33d2bdec0579db7e7750158f9929.scope - libcontainer container a05d3f15a8da05d177c86c79a241bac8765f33d2bdec0579db7e7750158f9929. 
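The "Pulled image" entry above reports both a size of 18897442 bytes and a duration of 1.871737518s for the operator image, which puts the effective pull rate at roughly 10 MB/s (the preceding "bytes read=18904197" is the amount actually fetched, so the network figure is essentially the same):

    size_bytes   = 18_897_442     # size from the "Pulled image" entry above
    pull_seconds = 1.871737518    # reported pull duration
    print(f"~{size_bytes / pull_seconds / 1e6:.1f} MB/s")  # ~10.1 MB/s effective pull rate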
Sep 8 23:56:02.560639 containerd[1507]: time="2025-09-08T23:56:02.560554357Z" level=info msg="StartContainer for \"a05d3f15a8da05d177c86c79a241bac8765f33d2bdec0579db7e7750158f9929\" returns successfully" Sep 8 23:56:02.714599 kubelet[1859]: E0908 23:56:02.714546 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:02.715998 kubelet[1859]: E0908 23:56:02.715977 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:02.717539 containerd[1507]: time="2025-09-08T23:56:02.717484475Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:56:02.725106 kubelet[1859]: I0908 23:56:02.725062 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ck78s" podStartSLOduration=1.852219507 podStartE2EDuration="3.725048865s" podCreationTimestamp="2025-09-08 23:55:59 +0000 UTC" firstStartedPulling="2025-09-08 23:56:00.612469389 +0000 UTC m=+68.345329315" lastFinishedPulling="2025-09-08 23:56:02.485298747 +0000 UTC m=+70.218158673" observedRunningTime="2025-09-08 23:56:02.724693919 +0000 UTC m=+70.457553845" watchObservedRunningTime="2025-09-08 23:56:02.725048865 +0000 UTC m=+70.457908801" Sep 8 23:56:02.836498 containerd[1507]: time="2025-09-08T23:56:02.836436612Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0\"" Sep 8 23:56:02.837033 containerd[1507]: time="2025-09-08T23:56:02.836964874Z" level=info msg="StartContainer for \"c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0\"" Sep 8 23:56:02.875308 systemd[1]: Started cri-containerd-c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0.scope - libcontainer container c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0. Sep 8 23:56:02.911442 containerd[1507]: time="2025-09-08T23:56:02.911401807Z" level=info msg="StartContainer for \"c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0\" returns successfully" Sep 8 23:56:02.917189 systemd[1]: cri-containerd-c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0.scope: Deactivated successfully. Sep 8 23:56:02.917516 systemd[1]: cri-containerd-c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0.scope: Consumed 31ms CPU time, 5.7M memory peak, 1M read from disk. 
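The pod_startup_latency_tracker entry above encodes a simple relationship: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so time spent pulling images does not count against the startup SLO. Reproducing its numbers:

    e2e        = 3.725048865   # podStartE2EDuration, seconds
    pull_start = 68.345329315  # firstStartedPulling, monotonic m=+ offset, seconds
    pull_end   = 70.218158673  # lastFinishedPulling, monotonic m=+ offset, seconds

    slo = e2e - (pull_end - pull_start)
    print(f"{slo:.9f}")        # 1.852219507, matching podStartSLOduration (up to float rounding)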
Sep 8 23:56:02.990946 kubelet[1859]: E0908 23:56:02.990826 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:03.084078 containerd[1507]: time="2025-09-08T23:56:03.083969434Z" level=info msg="shim disconnected" id=c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0 namespace=k8s.io Sep 8 23:56:03.084078 containerd[1507]: time="2025-09-08T23:56:03.084063020Z" level=warning msg="cleaning up after shim disconnected" id=c781b4305cb25a69ac382fcfa63f938ecf380cc26a78da97e47c360cfc3e92a0 namespace=k8s.io Sep 8 23:56:03.084078 containerd[1507]: time="2025-09-08T23:56:03.084074752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:56:03.355650 kubelet[1859]: E0908 23:56:03.355604 1859 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:56:03.720681 kubelet[1859]: E0908 23:56:03.720266 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:03.720681 kubelet[1859]: E0908 23:56:03.720538 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:03.722418 containerd[1507]: time="2025-09-08T23:56:03.722375740Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:56:03.739258 containerd[1507]: time="2025-09-08T23:56:03.739176378Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0\"" Sep 8 23:56:03.739991 containerd[1507]: time="2025-09-08T23:56:03.739938959Z" level=info msg="StartContainer for \"55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0\"" Sep 8 23:56:03.771230 systemd[1]: Started cri-containerd-55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0.scope - libcontainer container 55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0. Sep 8 23:56:03.796897 systemd[1]: cri-containerd-55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0.scope: Deactivated successfully. 
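The recurring "Container runtime network not ready ... cni plugin not initialized" message is what keeps this node's Ready condition False until the cilium-agent container started below installs its CNI configuration. A hedged way to watch that condition from outside the node, assuming the kubernetes Python client is installed and a kubeconfig is available:

    from kubernetes import client, config  # assumption: pip install kubernetes, kubeconfig present

    config.load_kube_config()
    for node in client.CoreV1Api().list_node().items:
        ready = next(c for c in node.status.conditions if c.type == "Ready")
        print(node.metadata.name, ready.status, ready.reason, ready.message)
        # e.g. 10.0.0.69 False KubeletNotReady container runtime network not ready: ...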
Sep 8 23:56:03.798744 containerd[1507]: time="2025-09-08T23:56:03.798688709Z" level=info msg="StartContainer for \"55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0\" returns successfully" Sep 8 23:56:03.822564 containerd[1507]: time="2025-09-08T23:56:03.822497600Z" level=info msg="shim disconnected" id=55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0 namespace=k8s.io Sep 8 23:56:03.822564 containerd[1507]: time="2025-09-08T23:56:03.822558695Z" level=warning msg="cleaning up after shim disconnected" id=55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0 namespace=k8s.io Sep 8 23:56:03.822564 containerd[1507]: time="2025-09-08T23:56:03.822570687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:56:03.991531 kubelet[1859]: E0908 23:56:03.991402 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:04.086332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55f9aefd3c7ec8644a4b9f15f1365c8c0d597fc71955725fd069a8fbe22921e0-rootfs.mount: Deactivated successfully. Sep 8 23:56:04.725399 kubelet[1859]: E0908 23:56:04.725353 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:04.727174 containerd[1507]: time="2025-09-08T23:56:04.727130961Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:56:04.745349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754276743.mount: Deactivated successfully. Sep 8 23:56:04.745838 containerd[1507]: time="2025-09-08T23:56:04.745722076Z" level=info msg="CreateContainer within sandbox \"b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71ee53448f99b1844c744ca6053a8066dd9940232a2a5c361f9216dd8074cada\"" Sep 8 23:56:04.746384 containerd[1507]: time="2025-09-08T23:56:04.746330689Z" level=info msg="StartContainer for \"71ee53448f99b1844c744ca6053a8066dd9940232a2a5c361f9216dd8074cada\"" Sep 8 23:56:04.785303 systemd[1]: Started cri-containerd-71ee53448f99b1844c744ca6053a8066dd9940232a2a5c361f9216dd8074cada.scope - libcontainer container 71ee53448f99b1844c744ca6053a8066dd9940232a2a5c361f9216dd8074cada. 
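At this point the cilium-xfpsm sandbox has run its init containers strictly in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each one exiting before the next CreateContainer appears, and the long-running cilium-agent container is created. A rough way to recover that ordering from a journal dump like this one, assuming one journal entry per line as journalctl prints them:

    import re, sys

    # Ordered container names that containerd creates inside a given sandbox.
    CREATE = re.compile(r'CreateContainer within sandbox \\?"(?P<sb>[0-9a-f]{64})\\?" '
                        r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),')

    def container_sequence(journal: str, sandbox_id: str) -> list[str]:
        return [m["name"] for m in CREATE.finditer(journal) if m["sb"] == sandbox_id]

    print(container_sequence(sys.stdin.read(),
          "b5738d678898abcf322511726e0dbe2b1029eb3f965715ae95863b9738fc1baf"))
    # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state', 'cilium-agent']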
Sep 8 23:56:04.825217 containerd[1507]: time="2025-09-08T23:56:04.825152729Z" level=info msg="StartContainer for \"71ee53448f99b1844c744ca6053a8066dd9940232a2a5c361f9216dd8074cada\" returns successfully" Sep 8 23:56:04.992422 kubelet[1859]: E0908 23:56:04.992231 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:05.101651 kubelet[1859]: I0908 23:56:05.101574 1859 setters.go:602] "Node became not ready" node="10.0.0.69" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-08T23:56:05Z","lastTransitionTime":"2025-09-08T23:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 8 23:56:05.296048 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 8 23:56:05.730061 kubelet[1859]: E0908 23:56:05.730024 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:05.745124 kubelet[1859]: I0908 23:56:05.745052 1859 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xfpsm" podStartSLOduration=6.745030718 podStartE2EDuration="6.745030718s" podCreationTimestamp="2025-09-08 23:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:56:05.744465417 +0000 UTC m=+73.477325353" watchObservedRunningTime="2025-09-08 23:56:05.745030718 +0000 UTC m=+73.477890644" Sep 8 23:56:05.993556 kubelet[1859]: E0908 23:56:05.993387 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:06.731917 kubelet[1859]: E0908 23:56:06.731876 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:06.994415 kubelet[1859]: E0908 23:56:06.994277 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:07.994608 kubelet[1859]: E0908 23:56:07.994565 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:08.505989 systemd-networkd[1435]: lxc_health: Link UP Sep 8 23:56:08.511260 systemd-networkd[1435]: lxc_health: Gained carrier Sep 8 23:56:08.588763 systemd[1]: run-containerd-runc-k8s.io-71ee53448f99b1844c744ca6053a8066dd9940232a2a5c361f9216dd8074cada-runc.sehMhG.mount: Deactivated successfully. 
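Two things worth noting above. lxc_health is the virtual interface Cilium brings up for its own datapath health checks, an early sign the agent is running. The startup-latency entry for cilium-xfpsm shows firstStartedPulling and lastFinishedPulling as Go's zero time ("0001-01-01 00:00:00 +0000 UTC"), presumably because every image was already present; with a zero pull window, podStartSLOduration simply equals podStartE2EDuration (6.745030718s), the degenerate case of the calculation shown earlier:

    from datetime import datetime, timezone

    GO_ZERO = datetime(1, 1, 1, tzinfo=timezone.utc)  # "0001-01-01 00:00:00 +0000 UTC" in the log

    def pull_window_seconds(first: datetime, last: datetime) -> float:
        """Image-pull time to exclude from the startup SLO; zero when nothing was pulled."""
        if first == GO_ZERO or last == GO_ZERO:
            return 0.0
        return (last - first).total_seconds()

    e2e = 6.745030718                                   # podStartE2EDuration for cilium-xfpsm
    print(e2e - pull_window_seconds(GO_ZERO, GO_ZERO))  # 6.745030718, equal to podStartSLOduration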
Sep 8 23:56:08.995232 kubelet[1859]: E0908 23:56:08.995160 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:09.853179 systemd-networkd[1435]: lxc_health: Gained IPv6LL Sep 8 23:56:09.996036 kubelet[1859]: E0908 23:56:09.995971 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:10.476233 kubelet[1859]: E0908 23:56:10.476145 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:10.739940 kubelet[1859]: E0908 23:56:10.739541 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:10.996534 kubelet[1859]: E0908 23:56:10.996379 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:11.750595 kubelet[1859]: E0908 23:56:11.749940 1859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:56:11.997202 kubelet[1859]: E0908 23:56:11.997115 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:12.859121 kubelet[1859]: E0908 23:56:12.858983 1859 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:13.004863 kubelet[1859]: E0908 23:56:13.004766 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:14.005416 kubelet[1859]: E0908 23:56:14.005272 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:15.006679 kubelet[1859]: E0908 23:56:15.006238 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:16.007121 kubelet[1859]: E0908 23:56:16.006831 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:56:17.007981 kubelet[1859]: E0908 23:56:17.007896 1859 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
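For the rest of the excerpt the kubelet keeps logging "Unable to read config path" for /etc/kubernetes/manifests about once per second: the static-pod file source re-checks a staticPodPath that does not exist on this node and reports each miss, which is harmless noise unless static pods are expected here. A quick way to confirm the repeat rate from a journal dump, again assuming one entry per line:

    import re, sys

    # Count the repeated static-pod path warnings and show the span they cover.
    TS = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)")
    hits = [m.group(1) for line in sys.stdin
            if "Unable to read config path" in line and (m := TS.match(line))]
    if hits:
        print(f"{len(hits)} warnings between {hits[0]} and {hits[-1]}")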