Oct 13 00:09:24.020019 kernel: Linux version 6.6.110-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:36:11 -00 2025 Oct 13 00:09:24.020052 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f22c322725201bc05beb6be7a3cc1733cdde87d870355f876093fa075b62debf Oct 13 00:09:24.020068 kernel: BIOS-provided physical RAM map: Oct 13 00:09:24.020077 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 13 00:09:24.020086 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 13 00:09:24.020095 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 13 00:09:24.020106 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 13 00:09:24.020115 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 13 00:09:24.020124 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 13 00:09:24.020133 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 13 00:09:24.020142 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Oct 13 00:09:24.020155 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 13 00:09:24.020169 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 13 00:09:24.020178 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 13 00:09:24.020192 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 13 00:09:24.020203 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 13 00:09:24.020216 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Oct 13 00:09:24.020226 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Oct 13 00:09:24.020236 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Oct 13 00:09:24.020245 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Oct 13 00:09:24.020255 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 13 00:09:24.020265 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 13 00:09:24.020274 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 13 00:09:24.020284 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 00:09:24.020294 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 13 00:09:24.020303 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 00:09:24.020313 kernel: NX (Execute Disable) protection: active Oct 13 00:09:24.020326 kernel: APIC: Static calls initialized Oct 13 00:09:24.020336 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Oct 13 00:09:24.020346 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Oct 13 00:09:24.020356 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Oct 13 00:09:24.020366 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Oct 13 00:09:24.020375 kernel: extended physical RAM map: Oct 13 00:09:24.020385 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 13 00:09:24.020395 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Oct 13 00:09:24.020404 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 13 00:09:24.020414 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 13 00:09:24.020424 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 13 00:09:24.020434 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Oct 13 00:09:24.020447 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Oct 13 00:09:24.020463 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Oct 13 00:09:24.020473 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Oct 13 00:09:24.020483 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Oct 13 00:09:24.020493 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Oct 13 00:09:24.020504 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Oct 13 00:09:24.020522 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Oct 13 00:09:24.020532 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Oct 13 00:09:24.020543 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Oct 13 00:09:24.020553 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Oct 13 00:09:24.020564 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 13 00:09:24.020574 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Oct 13 00:09:24.020585 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Oct 13 00:09:24.020595 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Oct 13 00:09:24.020605 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Oct 13 00:09:24.020619 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Oct 13 00:09:24.020631 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 13 00:09:24.020642 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 13 00:09:24.020655 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 00:09:24.020669 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Oct 13 00:09:24.020679 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 00:09:24.020689 kernel: efi: EFI v2.7 by EDK II Oct 13 00:09:24.020700 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Oct 13 00:09:24.020709 kernel: random: crng init done Oct 13 00:09:24.020749 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Oct 13 00:09:24.020760 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Oct 13 00:09:24.020775 kernel: secureboot: Secure boot disabled Oct 13 00:09:24.020789 kernel: SMBIOS 2.8 present. 
Oct 13 00:09:24.020799 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 13 00:09:24.020809 kernel: Hypervisor detected: KVM Oct 13 00:09:24.020820 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 00:09:24.020854 kernel: kvm-clock: using sched offset of 4621605576 cycles Oct 13 00:09:24.020866 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 00:09:24.020876 kernel: tsc: Detected 2794.746 MHz processor Oct 13 00:09:24.020887 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 00:09:24.020898 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 00:09:24.020909 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Oct 13 00:09:24.020925 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 13 00:09:24.020935 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 00:09:24.020946 kernel: Using GB pages for direct mapping Oct 13 00:09:24.020965 kernel: ACPI: Early table checksum verification disabled Oct 13 00:09:24.020976 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 13 00:09:24.020986 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 13 00:09:24.020998 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:09:24.021008 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:09:24.021019 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 13 00:09:24.021034 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:09:24.021045 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:09:24.021055 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:09:24.021066 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 00:09:24.021077 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 13 00:09:24.021087 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 13 00:09:24.021098 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Oct 13 00:09:24.021109 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 13 00:09:24.021119 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 13 00:09:24.021133 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 13 00:09:24.021144 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 13 00:09:24.021155 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 13 00:09:24.021165 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 13 00:09:24.021176 kernel: No NUMA configuration found Oct 13 00:09:24.021186 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Oct 13 00:09:24.021196 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Oct 13 00:09:24.021207 kernel: Zone ranges: Oct 13 00:09:24.021218 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 00:09:24.021232 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Oct 13 00:09:24.021242 kernel: Normal empty Oct 13 00:09:24.021256 kernel: Movable zone start for each node Oct 13 00:09:24.021267 kernel: Early memory node ranges Oct 13 00:09:24.021277 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Oct 13 00:09:24.021288 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 13 00:09:24.021298 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 13 00:09:24.021309 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Oct 13 00:09:24.021319 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Oct 13 00:09:24.021330 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Oct 13 00:09:24.021344 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Oct 13 00:09:24.021355 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Oct 13 00:09:24.021364 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Oct 13 00:09:24.021375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 00:09:24.021386 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 13 00:09:24.021407 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 13 00:09:24.021421 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 00:09:24.021432 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Oct 13 00:09:24.021444 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Oct 13 00:09:24.021454 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 13 00:09:24.021469 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 13 00:09:24.021484 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Oct 13 00:09:24.021495 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 13 00:09:24.021506 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 00:09:24.021517 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 00:09:24.021528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 13 00:09:24.021543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 00:09:24.021553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 00:09:24.021565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 00:09:24.021576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 00:09:24.021587 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 00:09:24.021599 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 13 00:09:24.021610 kernel: TSC deadline timer available Oct 13 00:09:24.021621 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 13 00:09:24.021632 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 00:09:24.021647 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 13 00:09:24.021657 kernel: kvm-guest: setup PV sched yield Oct 13 00:09:24.021668 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 13 00:09:24.021679 kernel: Booting paravirtualized kernel on KVM Oct 13 00:09:24.021690 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 00:09:24.021702 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 13 00:09:24.021712 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Oct 13 00:09:24.021722 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Oct 13 00:09:24.021733 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 13 00:09:24.021749 kernel: kvm-guest: PV spinlocks enabled Oct 13 00:09:24.021760 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 00:09:24.021772 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f22c322725201bc05beb6be7a3cc1733cdde87d870355f876093fa075b62debf Oct 13 00:09:24.021783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 00:09:24.021793 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 00:09:24.021808 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 00:09:24.021818 kernel: Fallback order for Node 0: 0 Oct 13 00:09:24.021844 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Oct 13 00:09:24.021860 kernel: Policy zone: DMA32 Oct 13 00:09:24.021871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 00:09:24.021882 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2288K rwdata, 22872K rodata, 43512K init, 1568K bss, 177824K reserved, 0K cma-reserved) Oct 13 00:09:24.021893 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 00:09:24.021917 kernel: ftrace: allocating 37951 entries in 149 pages Oct 13 00:09:24.021926 kernel: ftrace: allocated 149 pages with 4 groups Oct 13 00:09:24.021936 kernel: Dynamic Preempt: voluntary Oct 13 00:09:24.021945 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 00:09:24.021965 kernel: rcu: RCU event tracing is enabled. Oct 13 00:09:24.021980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 13 00:09:24.021989 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 00:09:24.021999 kernel: Rude variant of Tasks RCU enabled. Oct 13 00:09:24.022009 kernel: Tracing variant of Tasks RCU enabled. Oct 13 00:09:24.022018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 00:09:24.022028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 00:09:24.022037 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 13 00:09:24.022046 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 00:09:24.022056 kernel: Console: colour dummy device 80x25 Oct 13 00:09:24.022065 kernel: printk: console [ttyS0] enabled Oct 13 00:09:24.022077 kernel: ACPI: Core revision 20230628 Oct 13 00:09:24.022087 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 13 00:09:24.022097 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 00:09:24.022106 kernel: x2apic enabled Oct 13 00:09:24.022116 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 00:09:24.022128 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 13 00:09:24.022138 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 13 00:09:24.022147 kernel: kvm-guest: setup PV IPIs Oct 13 00:09:24.022157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 13 00:09:24.022169 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 13 00:09:24.022179 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Oct 13 00:09:24.022188 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 00:09:24.022198 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 13 00:09:24.022207 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 13 00:09:24.022217 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 00:09:24.022226 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 00:09:24.022236 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 00:09:24.022245 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 13 00:09:24.022257 kernel: active return thunk: retbleed_return_thunk Oct 13 00:09:24.022267 kernel: RETBleed: Mitigation: untrained return thunk Oct 13 00:09:24.022277 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 13 00:09:24.022286 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 13 00:09:24.022296 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 13 00:09:24.022306 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 13 00:09:24.022318 kernel: active return thunk: srso_return_thunk Oct 13 00:09:24.022339 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 13 00:09:24.022381 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 00:09:24.022392 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 00:09:24.022402 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 00:09:24.022413 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 00:09:24.022423 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 13 00:09:24.022434 kernel: Freeing SMP alternatives memory: 32K Oct 13 00:09:24.022444 kernel: pid_max: default: 32768 minimum: 301 Oct 13 00:09:24.022455 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 13 00:09:24.022465 kernel: landlock: Up and running. Oct 13 00:09:24.022479 kernel: SELinux: Initializing. Oct 13 00:09:24.022490 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 00:09:24.022500 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 00:09:24.022511 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 13 00:09:24.022522 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 00:09:24.022532 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 00:09:24.022543 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 00:09:24.022553 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 13 00:09:24.022563 kernel: ... version: 0 Oct 13 00:09:24.022577 kernel: ... bit width: 48 Oct 13 00:09:24.022587 kernel: ... generic registers: 6 Oct 13 00:09:24.022598 kernel: ... value mask: 0000ffffffffffff Oct 13 00:09:24.022608 kernel: ... max period: 00007fffffffffff Oct 13 00:09:24.022618 kernel: ... fixed-purpose events: 0 Oct 13 00:09:24.022631 kernel: ... 
event mask: 000000000000003f Oct 13 00:09:24.022641 kernel: signal: max sigframe size: 1776 Oct 13 00:09:24.022653 kernel: rcu: Hierarchical SRCU implementation. Oct 13 00:09:24.022664 kernel: rcu: Max phase no-delay instances is 400. Oct 13 00:09:24.022679 kernel: smp: Bringing up secondary CPUs ... Oct 13 00:09:24.022690 kernel: smpboot: x86: Booting SMP configuration: Oct 13 00:09:24.022701 kernel: .... node #0, CPUs: #1 #2 #3 Oct 13 00:09:24.022713 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 00:09:24.022724 kernel: smpboot: Max logical packages: 1 Oct 13 00:09:24.022735 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Oct 13 00:09:24.022747 kernel: devtmpfs: initialized Oct 13 00:09:24.022758 kernel: x86/mm: Memory block size: 128MB Oct 13 00:09:24.022769 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 13 00:09:24.022783 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 13 00:09:24.022801 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Oct 13 00:09:24.022812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 13 00:09:24.022824 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Oct 13 00:09:24.022850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 13 00:09:24.022862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 00:09:24.022873 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 00:09:24.022885 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 00:09:24.022896 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 00:09:24.022912 kernel: audit: initializing netlink subsys (disabled) Oct 13 00:09:24.022923 kernel: audit: type=2000 audit(1760314162.917:1): state=initialized audit_enabled=0 res=1 Oct 13 00:09:24.022935 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 00:09:24.022946 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 00:09:24.022965 kernel: cpuidle: using governor menu Oct 13 00:09:24.022976 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 00:09:24.022987 kernel: dca service started, version 1.12.1 Oct 13 00:09:24.022999 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Oct 13 00:09:24.023010 kernel: PCI: Using configuration type 1 for base access Oct 13 00:09:24.023025 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 13 00:09:24.023036 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 00:09:24.023047 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 00:09:24.023058 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 00:09:24.023070 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 00:09:24.023081 kernel: ACPI: Added _OSI(Module Device) Oct 13 00:09:24.023092 kernel: ACPI: Added _OSI(Processor Device) Oct 13 00:09:24.023103 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 00:09:24.023114 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 00:09:24.023129 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 13 00:09:24.023140 kernel: ACPI: Interpreter enabled Oct 13 00:09:24.023151 kernel: ACPI: PM: (supports S0 S3 S5) Oct 13 00:09:24.023162 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 00:09:24.023173 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 00:09:24.023185 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 00:09:24.023196 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 13 00:09:24.023207 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 00:09:24.023484 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 00:09:24.023666 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 13 00:09:24.023823 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 13 00:09:24.023853 kernel: PCI host bridge to bus 0000:00 Oct 13 00:09:24.024068 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 00:09:24.024215 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 00:09:24.024360 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 13 00:09:24.024509 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 13 00:09:24.024652 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 13 00:09:24.024825 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 13 00:09:24.025017 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 00:09:24.025211 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 13 00:09:24.025391 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 13 00:09:24.025555 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 13 00:09:24.025716 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 13 00:09:24.025893 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 13 00:09:24.026066 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 13 00:09:24.026223 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 00:09:24.026400 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 13 00:09:24.026559 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 13 00:09:24.026723 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 13 00:09:24.026902 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Oct 13 00:09:24.027123 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 13 00:09:24.027288 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 13 00:09:24.027448 kernel: pci 0000:00:03.0: reg 0x14: [mem 
0xc1042000-0xc1042fff] Oct 13 00:09:24.027605 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Oct 13 00:09:24.027786 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 13 00:09:24.027984 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 13 00:09:24.028152 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 13 00:09:24.028313 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 13 00:09:24.028474 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 13 00:09:24.028656 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 13 00:09:24.028820 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 13 00:09:24.029067 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 13 00:09:24.029236 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 13 00:09:24.029392 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 13 00:09:24.029571 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 13 00:09:24.029728 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 13 00:09:24.029742 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 00:09:24.029753 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 00:09:24.029763 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 00:09:24.029779 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 00:09:24.029789 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 13 00:09:24.029800 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 13 00:09:24.029810 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 13 00:09:24.029821 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 13 00:09:24.029852 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 13 00:09:24.029864 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 13 00:09:24.029874 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 13 00:09:24.029885 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 13 00:09:24.029900 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 13 00:09:24.029910 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 13 00:09:24.029921 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 13 00:09:24.029932 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 13 00:09:24.029942 kernel: iommu: Default domain type: Translated Oct 13 00:09:24.029961 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 00:09:24.029972 kernel: efivars: Registered efivars operations Oct 13 00:09:24.029982 kernel: PCI: Using ACPI for IRQ routing Oct 13 00:09:24.029993 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 00:09:24.030007 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 13 00:09:24.030018 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Oct 13 00:09:24.030028 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Oct 13 00:09:24.030038 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Oct 13 00:09:24.030049 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Oct 13 00:09:24.030059 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Oct 13 00:09:24.030069 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Oct 13 00:09:24.030080 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Oct 
13 00:09:24.030245 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 13 00:09:24.030407 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 13 00:09:24.030561 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 00:09:24.030574 kernel: vgaarb: loaded Oct 13 00:09:24.030585 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 13 00:09:24.030596 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 13 00:09:24.030606 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 00:09:24.030617 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 00:09:24.030628 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 00:09:24.030642 kernel: pnp: PnP ACPI init Oct 13 00:09:24.030850 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 13 00:09:24.030867 kernel: pnp: PnP ACPI: found 6 devices Oct 13 00:09:24.030878 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 00:09:24.030889 kernel: NET: Registered PF_INET protocol family Oct 13 00:09:24.030925 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 00:09:24.030939 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 00:09:24.030959 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 00:09:24.030973 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 00:09:24.030984 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 00:09:24.030995 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 00:09:24.031006 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 00:09:24.031017 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 00:09:24.031027 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 00:09:24.031038 kernel: NET: Registered PF_XDP protocol family Oct 13 00:09:24.031201 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 13 00:09:24.031356 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 13 00:09:24.031503 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 00:09:24.031646 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 00:09:24.031786 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 00:09:24.031970 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 13 00:09:24.032115 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 13 00:09:24.032255 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 13 00:09:24.032268 kernel: PCI: CLS 0 bytes, default 64 Oct 13 00:09:24.032279 kernel: Initialise system trusted keyrings Oct 13 00:09:24.032296 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 00:09:24.032307 kernel: Key type asymmetric registered Oct 13 00:09:24.032317 kernel: Asymmetric key parser 'x509' registered Oct 13 00:09:24.032328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 13 00:09:24.032339 kernel: io scheduler mq-deadline registered Oct 13 00:09:24.032350 kernel: io scheduler kyber registered Oct 13 00:09:24.032360 kernel: io scheduler bfq registered Oct 13 00:09:24.032371 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Oct 13 00:09:24.032383 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 13 00:09:24.032397 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 13 00:09:24.032411 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 13 00:09:24.032422 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 00:09:24.032433 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 00:09:24.032444 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 00:09:24.032455 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 00:09:24.032469 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 00:09:24.032639 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 13 00:09:24.032655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 13 00:09:24.032800 kernel: rtc_cmos 00:04: registered as rtc0 Oct 13 00:09:24.033069 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T00:09:23 UTC (1760314163) Oct 13 00:09:24.033215 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 13 00:09:24.033229 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 13 00:09:24.033245 kernel: efifb: probing for efifb Oct 13 00:09:24.033259 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 13 00:09:24.033270 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 13 00:09:24.033281 kernel: efifb: scrolling: redraw Oct 13 00:09:24.033292 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 13 00:09:24.033303 kernel: Console: switching to colour frame buffer device 160x50 Oct 13 00:09:24.033314 kernel: fb0: EFI VGA frame buffer device Oct 13 00:09:24.033325 kernel: pstore: Using crash dump compression: deflate Oct 13 00:09:24.033336 kernel: pstore: Registered efi_pstore as persistent store backend Oct 13 00:09:24.033347 kernel: NET: Registered PF_INET6 protocol family Oct 13 00:09:24.033361 kernel: Segment Routing with IPv6 Oct 13 00:09:24.033372 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 00:09:24.033383 kernel: NET: Registered PF_PACKET protocol family Oct 13 00:09:24.033394 kernel: Key type dns_resolver registered Oct 13 00:09:24.033404 kernel: IPI shorthand broadcast: enabled Oct 13 00:09:24.033415 kernel: sched_clock: Marking stable (1385002955, 294237540)->(1985364256, -306123761) Oct 13 00:09:24.033427 kernel: registered taskstats version 1 Oct 13 00:09:24.033438 kernel: Loading compiled-in X.509 certificates Oct 13 00:09:24.033449 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.110-flatcar: 50d5efdc867bacb346c7c22eb5069c0bfc15416d' Oct 13 00:09:24.033462 kernel: Key type .fscrypt registered Oct 13 00:09:24.033473 kernel: Key type fscrypt-provisioning registered Oct 13 00:09:24.033485 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 13 00:09:24.033495 kernel: ima: Allocated hash algorithm: sha1 Oct 13 00:09:24.033507 kernel: ima: No architecture policies found Oct 13 00:09:24.033517 kernel: clk: Disabling unused clocks Oct 13 00:09:24.033528 kernel: Freeing unused kernel image (initmem) memory: 43512K Oct 13 00:09:24.033539 kernel: Write protecting the kernel read-only data: 38912k Oct 13 00:09:24.033554 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Oct 13 00:09:24.033565 kernel: Run /init as init process Oct 13 00:09:24.033575 kernel: with arguments: Oct 13 00:09:24.033586 kernel: /init Oct 13 00:09:24.033597 kernel: with environment: Oct 13 00:09:24.033608 kernel: HOME=/ Oct 13 00:09:24.033618 kernel: TERM=linux Oct 13 00:09:24.033629 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 00:09:24.033641 systemd[1]: Successfully made /usr/ read-only. Oct 13 00:09:24.033659 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 00:09:24.033672 systemd[1]: Detected virtualization kvm. Oct 13 00:09:24.033683 systemd[1]: Detected architecture x86-64. Oct 13 00:09:24.033695 systemd[1]: Running in initrd. Oct 13 00:09:24.033706 systemd[1]: No hostname configured, using default hostname. Oct 13 00:09:24.033718 systemd[1]: Hostname set to . Oct 13 00:09:24.033730 systemd[1]: Initializing machine ID from VM UUID. Oct 13 00:09:24.033741 systemd[1]: Queued start job for default target initrd.target. Oct 13 00:09:24.033756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:09:24.033768 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:09:24.033780 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 00:09:24.033792 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 00:09:24.033804 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 00:09:24.033817 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 00:09:24.033848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 13 00:09:24.033874 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 13 00:09:24.033886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:09:24.033897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:09:24.033909 systemd[1]: Reached target paths.target - Path Units. Oct 13 00:09:24.033921 systemd[1]: Reached target slices.target - Slice Units. Oct 13 00:09:24.033932 systemd[1]: Reached target swap.target - Swaps. Oct 13 00:09:24.033944 systemd[1]: Reached target timers.target - Timer Units. Oct 13 00:09:24.033966 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 00:09:24.033982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 00:09:24.033994 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Oct 13 00:09:24.034006 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 00:09:24.034018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:09:24.034029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 00:09:24.034041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:09:24.034053 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:09:24.034065 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 00:09:24.034079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 00:09:24.034091 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 00:09:24.034102 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 00:09:24.034114 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 00:09:24.034126 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 00:09:24.034137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:09:24.034149 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 00:09:24.034161 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:09:24.034176 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 00:09:24.034221 systemd-journald[194]: Collecting audit messages is disabled. Oct 13 00:09:24.034252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 00:09:24.034264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:09:24.034277 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 00:09:24.034289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 00:09:24.034300 systemd-journald[194]: Journal started Oct 13 00:09:24.034328 systemd-journald[194]: Runtime Journal (/run/log/journal/6cf9039a06e948d68ac952c384a89c0a) is 6M, max 48.2M, 42.2M free. Oct 13 00:09:24.010573 systemd-modules-load[195]: Inserted module 'overlay' Oct 13 00:09:24.041422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 00:09:24.041448 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 00:09:24.047885 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 00:09:24.048615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 00:09:24.052749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:09:24.054098 kernel: Bridge firewalling registered Oct 13 00:09:24.053598 systemd-modules-load[195]: Inserted module 'br_netfilter' Oct 13 00:09:24.056530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 00:09:24.060474 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:09:24.073972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 00:09:24.074752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 00:09:24.078867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 13 00:09:24.091888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:09:24.100207 dracut-cmdline[224]: dracut-dracut-053 Oct 13 00:09:24.102962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 00:09:24.107515 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f22c322725201bc05beb6be7a3cc1733cdde87d870355f876093fa075b62debf Oct 13 00:09:24.146060 systemd-resolved[237]: Positive Trust Anchors: Oct 13 00:09:24.146074 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 00:09:24.146104 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 00:09:24.148658 systemd-resolved[237]: Defaulting to hostname 'linux'. Oct 13 00:09:24.149912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 00:09:24.183236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:09:24.253870 kernel: SCSI subsystem initialized Oct 13 00:09:24.263851 kernel: Loading iSCSI transport class v2.0-870. Oct 13 00:09:24.275861 kernel: iscsi: registered transport (tcp) Oct 13 00:09:24.297885 kernel: iscsi: registered transport (qla4xxx) Oct 13 00:09:24.297950 kernel: QLogic iSCSI HBA Driver Oct 13 00:09:24.353497 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 00:09:24.366995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 00:09:24.392308 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 00:09:24.392340 kernel: device-mapper: uevent: version 1.0.3 Oct 13 00:09:24.393886 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 13 00:09:24.448865 kernel: raid6: avx2x4 gen() 30641 MB/s Oct 13 00:09:24.465857 kernel: raid6: avx2x2 gen() 31330 MB/s Oct 13 00:09:24.483618 kernel: raid6: avx2x1 gen() 25350 MB/s Oct 13 00:09:24.483641 kernel: raid6: using algorithm avx2x2 gen() 31330 MB/s Oct 13 00:09:24.501621 kernel: raid6: .... xor() 19965 MB/s, rmw enabled Oct 13 00:09:24.501643 kernel: raid6: using avx2x2 recovery algorithm Oct 13 00:09:24.524875 kernel: xor: automatically using best checksumming function avx Oct 13 00:09:24.702914 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 00:09:24.721398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 00:09:24.729026 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:09:24.751723 systemd-udevd[417]: Using default interface naming scheme 'v255'. Oct 13 00:09:24.760872 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Oct 13 00:09:24.770609 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 00:09:24.788293 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Oct 13 00:09:24.833245 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 00:09:24.845133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 00:09:24.919118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:09:24.930290 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 00:09:24.942932 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 00:09:24.947487 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 00:09:24.952218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:09:24.956194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 00:09:24.963998 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 00:09:24.969874 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 00:09:24.973565 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 00:09:24.973590 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 13 00:09:24.979997 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 00:09:24.980030 kernel: GPT:9289727 != 19775487 Oct 13 00:09:24.980041 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 00:09:24.980052 kernel: GPT:9289727 != 19775487 Oct 13 00:09:24.980763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 00:09:24.985311 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 00:09:24.985334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:09:25.001494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 00:09:25.003549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:09:25.010449 kernel: libata version 3.00 loaded. Oct 13 00:09:25.010479 kernel: AVX2 version of gcm_enc/dec engaged. Oct 13 00:09:25.010490 kernel: AES CTR mode by8 optimization enabled Oct 13 00:09:25.010535 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 00:09:25.088946 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 00:09:25.089291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:09:25.098361 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 00:09:25.098561 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 00:09:25.100106 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:09:25.110235 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (463) Oct 13 00:09:25.110269 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 13 00:09:25.110464 kernel: BTRFS: device fsid 1b3281fd-66ec-42df-bcbd-268fe4ae17be devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (470) Oct 13 00:09:25.110476 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 00:09:25.114328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 00:09:25.120527 kernel: scsi host0: ahci Oct 13 00:09:25.116827 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Oct 13 00:09:25.123624 kernel: scsi host1: ahci Oct 13 00:09:25.125466 kernel: scsi host2: ahci Oct 13 00:09:25.125659 kernel: scsi host3: ahci Oct 13 00:09:25.127241 kernel: scsi host4: ahci Oct 13 00:09:25.127427 kernel: scsi host5: ahci Oct 13 00:09:25.130865 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Oct 13 00:09:25.130895 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Oct 13 00:09:25.130907 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Oct 13 00:09:25.132816 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Oct 13 00:09:25.132850 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Oct 13 00:09:25.132861 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Oct 13 00:09:25.138391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 00:09:25.142670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:09:25.166777 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 00:09:25.184886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 00:09:25.193795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 00:09:25.193903 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 13 00:09:25.207990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 00:09:25.210625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 00:09:25.228754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:09:25.444057 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 00:09:25.444150 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 00:09:25.444855 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 00:09:25.446610 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 00:09:25.446865 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 00:09:25.447872 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 00:09:25.449591 kernel: ata3.00: applying bridge limits Oct 13 00:09:25.449867 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 00:09:25.451865 kernel: ata3.00: configured for UDMA/100 Oct 13 00:09:25.453857 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 00:09:25.501350 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 00:09:25.501672 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 00:09:25.513868 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 00:09:25.647443 disk-uuid[570]: Primary Header is updated. Oct 13 00:09:25.647443 disk-uuid[570]: Secondary Entries is updated. Oct 13 00:09:25.647443 disk-uuid[570]: Secondary Header is updated. Oct 13 00:09:25.654105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:09:25.656868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:09:26.657868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 00:09:26.658463 disk-uuid[582]: The operation has completed successfully. Oct 13 00:09:26.694089 systemd[1]: disk-uuid.service: Deactivated successfully. 
Oct 13 00:09:26.694231 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 00:09:26.740205 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 00:09:26.743352 sh[595]: Success Oct 13 00:09:26.756861 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 13 00:09:26.792871 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 00:09:26.813963 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 00:09:26.817338 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 13 00:09:26.828680 kernel: BTRFS info (device dm-0): first mount of filesystem 1b3281fd-66ec-42df-bcbd-268fe4ae17be Oct 13 00:09:26.828712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 00:09:26.828723 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 13 00:09:26.830325 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 00:09:26.831505 kernel: BTRFS info (device dm-0): using free space tree Oct 13 00:09:26.837057 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 00:09:26.840575 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 00:09:26.854200 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 00:09:26.858154 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 00:09:26.879976 kernel: BTRFS info (device vda6): first mount of filesystem 2fdf6342-252c-4019-93cf-d9e28c1a91b4 Oct 13 00:09:26.880047 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 00:09:26.880059 kernel: BTRFS info (device vda6): using free space tree Oct 13 00:09:26.883871 kernel: BTRFS info (device vda6): auto enabling async discard Oct 13 00:09:26.889881 kernel: BTRFS info (device vda6): last unmount of filesystem 2fdf6342-252c-4019-93cf-d9e28c1a91b4 Oct 13 00:09:26.894217 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 00:09:26.906131 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 00:09:26.962733 ignition[690]: Ignition 2.20.0 Oct 13 00:09:26.962749 ignition[690]: Stage: fetch-offline Oct 13 00:09:26.962807 ignition[690]: no configs at "/usr/lib/ignition/base.d" Oct 13 00:09:26.962821 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:09:26.962961 ignition[690]: parsed url from cmdline: "" Oct 13 00:09:26.962966 ignition[690]: no config URL provided Oct 13 00:09:26.962973 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 00:09:26.962987 ignition[690]: no config at "/usr/lib/ignition/user.ign" Oct 13 00:09:26.963015 ignition[690]: op(1): [started] loading QEMU firmware config module Oct 13 00:09:26.963021 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 00:09:26.969535 ignition[690]: op(1): [finished] loading QEMU firmware config module Oct 13 00:09:26.992183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 00:09:27.001996 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 13 00:09:27.032209 systemd-networkd[781]: lo: Link UP Oct 13 00:09:27.032220 systemd-networkd[781]: lo: Gained carrier Oct 13 00:09:27.034049 systemd-networkd[781]: Enumeration completed Oct 13 00:09:27.034447 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:09:27.034452 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 00:09:27.034695 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 00:09:27.036166 systemd-networkd[781]: eth0: Link UP Oct 13 00:09:27.036170 systemd-networkd[781]: eth0: Gained carrier Oct 13 00:09:27.036178 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:09:27.039906 systemd[1]: Reached target network.target - Network. Oct 13 00:09:27.062917 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 00:09:27.083077 ignition[690]: parsing config with SHA512: de992593f1cea77ca252318ff6b0a9c4f13253903a0eb8e4a78873b7e28f61a775e509cda94ff2861beee2dfd4764767f6aa1c7e30e25f30ebb3cf14b4a1efb5 Oct 13 00:09:27.088428 unknown[690]: fetched base config from "system" Oct 13 00:09:27.088443 unknown[690]: fetched user config from "qemu" Oct 13 00:09:27.091314 ignition[690]: fetch-offline: fetch-offline passed Oct 13 00:09:27.092656 ignition[690]: Ignition finished successfully Oct 13 00:09:27.095269 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 00:09:27.099906 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 00:09:27.109088 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 00:09:27.127223 ignition[786]: Ignition 2.20.0 Oct 13 00:09:27.127236 ignition[786]: Stage: kargs Oct 13 00:09:27.127410 ignition[786]: no configs at "/usr/lib/ignition/base.d" Oct 13 00:09:27.127423 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:09:27.128263 ignition[786]: kargs: kargs passed Oct 13 00:09:27.128309 ignition[786]: Ignition finished successfully Oct 13 00:09:27.135124 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 00:09:27.148083 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 00:09:27.162594 ignition[795]: Ignition 2.20.0 Oct 13 00:09:27.162607 ignition[795]: Stage: disks Oct 13 00:09:27.162786 ignition[795]: no configs at "/usr/lib/ignition/base.d" Oct 13 00:09:27.162800 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:09:27.163751 ignition[795]: disks: disks passed Oct 13 00:09:27.163804 ignition[795]: Ignition finished successfully Oct 13 00:09:27.170180 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 00:09:27.174014 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 00:09:27.177549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 00:09:27.181439 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 00:09:27.184710 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 00:09:27.188203 systemd[1]: Reached target basic.target - Basic System. 
Oct 13 00:09:27.200973 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 00:09:27.218460 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 13 00:09:27.268234 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 00:09:27.282948 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 00:09:27.398890 kernel: EXT4-fs (vda9): mounted filesystem 02b6903b-203a-4032-98c5-29ee940136f6 r/w with ordered data mode. Quota mode: none. Oct 13 00:09:27.399690 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 00:09:27.402960 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 00:09:27.417926 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 00:09:27.421797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 00:09:27.425151 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 00:09:27.429668 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (814) Oct 13 00:09:27.425202 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 00:09:27.425225 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 00:09:27.434139 kernel: BTRFS info (device vda6): first mount of filesystem 2fdf6342-252c-4019-93cf-d9e28c1a91b4 Oct 13 00:09:27.434156 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 00:09:27.434167 kernel: BTRFS info (device vda6): using free space tree Oct 13 00:09:27.438852 kernel: BTRFS info (device vda6): auto enabling async discard Oct 13 00:09:27.444439 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 00:09:27.444635 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 00:09:27.450600 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 00:09:27.488905 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 00:09:27.494890 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Oct 13 00:09:27.499583 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 00:09:27.504040 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 00:09:27.597660 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 00:09:27.613019 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 00:09:27.615353 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 00:09:27.623879 kernel: BTRFS info (device vda6): last unmount of filesystem 2fdf6342-252c-4019-93cf-d9e28c1a91b4 Oct 13 00:09:27.653087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 00:09:27.768881 ignition[931]: INFO : Ignition 2.20.0 Oct 13 00:09:27.768881 ignition[931]: INFO : Stage: mount Oct 13 00:09:27.771790 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:09:27.771790 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:09:27.771790 ignition[931]: INFO : mount: mount passed Oct 13 00:09:27.771790 ignition[931]: INFO : Ignition finished successfully Oct 13 00:09:27.772200 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Oct 13 00:09:27.783023 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 00:09:27.827242 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 00:09:27.836161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 00:09:27.844936 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (941) Oct 13 00:09:27.850021 kernel: BTRFS info (device vda6): first mount of filesystem 2fdf6342-252c-4019-93cf-d9e28c1a91b4 Oct 13 00:09:27.850048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 00:09:27.850059 kernel: BTRFS info (device vda6): using free space tree Oct 13 00:09:27.854861 kernel: BTRFS info (device vda6): auto enabling async discard Oct 13 00:09:27.856243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 00:09:27.889044 ignition[958]: INFO : Ignition 2.20.0 Oct 13 00:09:27.889044 ignition[958]: INFO : Stage: files Oct 13 00:09:27.891747 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:09:27.891747 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:09:27.895886 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Oct 13 00:09:27.898665 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 00:09:27.898665 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 00:09:27.903450 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 00:09:27.905925 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 00:09:27.905925 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 00:09:27.905925 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 00:09:27.905925 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 00:09:27.904067 unknown[958]: wrote ssh authorized keys file for user: core Oct 13 00:09:27.942597 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 00:09:28.002707 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 00:09:28.006219 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 00:09:28.006219 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 13 00:09:28.089058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 13 00:09:28.228018 systemd-networkd[781]: eth0: Gained IPv6LL Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 00:09:28.230574 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 00:09:28.230574 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 00:09:28.257186 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 00:09:28.257186 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 00:09:28.257186 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 00:09:28.257186 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 00:09:28.257186 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 00:09:28.257186 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 13 00:09:28.448235 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 13 00:09:28.837790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 00:09:28.837790 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 13 00:09:28.843574 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 00:09:28.847147 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 00:09:28.847147 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 13 00:09:28.847147 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 13 00:09:28.854557 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 00:09:28.857784 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 00:09:28.857784 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 13 00:09:28.863064 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 00:09:28.882158 
ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 00:09:28.887440 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 00:09:28.890421 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 00:09:28.890421 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 13 00:09:28.895606 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 00:09:28.898248 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 00:09:28.901353 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 00:09:28.904324 ignition[958]: INFO : files: files passed Oct 13 00:09:28.905649 ignition[958]: INFO : Ignition finished successfully Oct 13 00:09:28.906502 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 00:09:28.917097 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 00:09:28.920471 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 00:09:28.929346 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 00:09:28.929483 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 00:09:28.936231 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 00:09:28.941386 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 00:09:28.941386 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 00:09:28.946808 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 00:09:28.951080 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 00:09:28.951442 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 00:09:28.963125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 00:09:28.988759 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 00:09:28.988985 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 00:09:28.991316 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 00:09:28.994431 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 00:09:28.997647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 00:09:29.003152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 00:09:29.025459 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 00:09:29.029349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 00:09:29.043651 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:09:29.047470 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:09:29.049565 systemd[1]: Stopped target timers.target - Timer Units. 
Oct 13 00:09:29.052902 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 00:09:29.053025 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 00:09:29.057119 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 00:09:29.059624 systemd[1]: Stopped target basic.target - Basic System. Oct 13 00:09:29.063118 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 00:09:29.066310 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 00:09:29.069582 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 00:09:29.073194 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 00:09:29.076615 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 00:09:29.080384 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 00:09:29.083646 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 00:09:29.087200 systemd[1]: Stopped target swap.target - Swaps. Oct 13 00:09:29.090122 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 00:09:29.090282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 00:09:29.094153 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:09:29.096471 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:09:29.099926 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 00:09:29.100066 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:09:29.103629 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 00:09:29.103786 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 00:09:29.107510 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 00:09:29.107632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 00:09:29.110972 systemd[1]: Stopped target paths.target - Path Units. Oct 13 00:09:29.113897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 00:09:29.117894 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 00:09:29.121417 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 00:09:29.124530 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 00:09:29.127920 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 00:09:29.128030 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 00:09:29.130889 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 00:09:29.130976 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 00:09:29.134216 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 00:09:29.134343 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 00:09:29.138578 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 00:09:29.138696 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 00:09:29.151015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 00:09:29.154376 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 00:09:29.154558 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 13 00:09:29.169805 ignition[1012]: INFO : Ignition 2.20.0 Oct 13 00:09:29.169805 ignition[1012]: INFO : Stage: umount Oct 13 00:09:29.169805 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 00:09:29.169805 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 00:09:29.169805 ignition[1012]: INFO : umount: umount passed Oct 13 00:09:29.169805 ignition[1012]: INFO : Ignition finished successfully Oct 13 00:09:29.158967 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 00:09:29.160858 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 00:09:29.160986 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:09:29.164641 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 00:09:29.164873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 00:09:29.171227 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 00:09:29.171408 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 00:09:29.176889 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 00:09:29.177033 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 00:09:29.183180 systemd[1]: Stopped target network.target - Network. Oct 13 00:09:29.185125 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 00:09:29.185195 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 00:09:29.188405 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 00:09:29.188460 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 00:09:29.191924 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 00:09:29.192005 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 00:09:29.195271 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 00:09:29.195326 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 00:09:29.199081 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 00:09:29.202480 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 00:09:29.207082 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 00:09:29.207778 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 00:09:29.207956 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 00:09:29.214416 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 13 00:09:29.215348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 00:09:29.215474 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:09:29.220782 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 13 00:09:29.221126 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 00:09:29.221281 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 00:09:29.227073 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 13 00:09:29.227921 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 00:09:29.228030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:09:29.241995 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Oct 13 00:09:29.244884 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 00:09:29.244962 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 00:09:29.248594 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 00:09:29.248653 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:09:29.252528 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 00:09:29.252593 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 00:09:29.256426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:09:29.260964 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 13 00:09:29.270718 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 00:09:29.270889 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 00:09:29.278699 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 00:09:29.278912 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:09:29.281216 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 00:09:29.281267 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 00:09:29.284400 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 00:09:29.284443 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:09:29.288204 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 00:09:29.288260 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 00:09:29.291958 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 00:09:29.292014 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 00:09:29.295111 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 00:09:29.295163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 00:09:29.308958 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 00:09:29.312067 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 00:09:29.312129 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:09:29.315775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 00:09:29.315872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:09:29.319616 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 00:09:29.319728 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 00:09:29.500539 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 00:09:29.500796 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 00:09:29.503574 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 00:09:29.507760 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 00:09:29.507910 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 00:09:29.528177 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 00:09:29.539682 systemd[1]: Switching root. Oct 13 00:09:29.573995 systemd-journald[194]: Journal stopped Oct 13 00:09:31.183486 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Oct 13 00:09:31.183566 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 00:09:31.183593 kernel: SELinux: policy capability open_perms=1 Oct 13 00:09:31.183608 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 00:09:31.183622 kernel: SELinux: policy capability always_check_network=0 Oct 13 00:09:31.183633 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 00:09:31.183652 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 00:09:31.183663 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 00:09:31.183675 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 00:09:31.183686 kernel: audit: type=1403 audit(1760314170.242:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 00:09:31.183699 systemd[1]: Successfully loaded SELinux policy in 54.527ms. Oct 13 00:09:31.183721 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.422ms. Oct 13 00:09:31.183735 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 00:09:31.183748 systemd[1]: Detected virtualization kvm. Oct 13 00:09:31.185504 systemd[1]: Detected architecture x86-64. Oct 13 00:09:31.185535 systemd[1]: Detected first boot. Oct 13 00:09:31.185549 systemd[1]: Initializing machine ID from VM UUID. Oct 13 00:09:31.185562 zram_generator::config[1058]: No configuration found. Oct 13 00:09:31.185578 kernel: Guest personality initialized and is inactive Oct 13 00:09:31.185590 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 00:09:31.185606 kernel: Initialized host personality Oct 13 00:09:31.185618 kernel: NET: Registered PF_VSOCK protocol family Oct 13 00:09:31.185630 systemd[1]: Populated /etc with preset unit settings. Oct 13 00:09:31.185644 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 13 00:09:31.185657 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 00:09:31.185669 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 00:09:31.185682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 00:09:31.185695 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 00:09:31.185710 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 00:09:31.185722 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 00:09:31.185735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 00:09:31.185747 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 00:09:31.185770 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 00:09:31.185785 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 00:09:31.185797 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 00:09:31.185809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 00:09:31.185822 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 13 00:09:31.185854 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 00:09:31.185867 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 00:09:31.185880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 00:09:31.185894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 00:09:31.185907 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 00:09:31.185919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 00:09:31.185932 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 00:09:31.185944 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 00:09:31.185959 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 00:09:31.185971 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 00:09:31.185984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 00:09:31.186002 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 00:09:31.186014 systemd[1]: Reached target slices.target - Slice Units. Oct 13 00:09:31.186027 systemd[1]: Reached target swap.target - Swaps. Oct 13 00:09:31.186039 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 00:09:31.186052 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 00:09:31.186064 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 00:09:31.186079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 00:09:31.186092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 00:09:31.186104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 00:09:31.186117 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 00:09:31.186129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 00:09:31.186142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 00:09:31.186154 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 00:09:31.186168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 00:09:31.186186 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 00:09:31.186201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 00:09:31.186213 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 00:09:31.186226 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 00:09:31.186239 systemd[1]: Reached target machines.target - Containers. Oct 13 00:09:31.186252 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 00:09:31.186264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:09:31.186277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Oct 13 00:09:31.186290 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 00:09:31.186305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:09:31.186318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:09:31.186335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:09:31.186348 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 00:09:31.186360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:09:31.186373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 00:09:31.186385 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 00:09:31.186398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 00:09:31.186413 kernel: fuse: init (API version 7.39) Oct 13 00:09:31.186424 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 00:09:31.186437 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 00:09:31.186451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:09:31.186464 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 00:09:31.186476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 00:09:31.186489 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 00:09:31.186501 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 00:09:31.186514 kernel: ACPI: bus type drm_connector registered Oct 13 00:09:31.186527 kernel: loop: module loaded Oct 13 00:09:31.186539 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 00:09:31.186574 systemd-journald[1143]: Collecting audit messages is disabled. Oct 13 00:09:31.186597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 00:09:31.186617 systemd[1]: verity-setup.service: Deactivated successfully. Oct 13 00:09:31.186630 systemd-journald[1143]: Journal started Oct 13 00:09:31.186653 systemd-journald[1143]: Runtime Journal (/run/log/journal/6cf9039a06e948d68ac952c384a89c0a) is 6M, max 48.2M, 42.2M free. Oct 13 00:09:30.865235 systemd[1]: Queued start job for default target multi-user.target. Oct 13 00:09:31.186940 systemd[1]: Stopped verity-setup.service. Oct 13 00:09:30.879085 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 00:09:30.879608 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 00:09:31.192865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 00:09:31.200888 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 00:09:31.202443 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 00:09:31.204246 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 00:09:31.206214 systemd[1]: Mounted media.mount - External Media Directory. 
Oct 13 00:09:31.207930 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 00:09:31.209803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 00:09:31.211753 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 00:09:31.213683 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 00:09:31.216128 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 00:09:31.218445 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 00:09:31.218687 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 00:09:31.220961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:09:31.221188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:09:31.223347 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:09:31.223578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:09:31.225608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:09:31.225983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:09:31.228248 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 00:09:31.228474 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 00:09:31.230510 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:09:31.230736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:09:31.232824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 00:09:31.234973 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 00:09:31.237328 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 00:09:31.239665 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 00:09:31.258087 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 00:09:31.268000 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 00:09:31.273195 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 00:09:31.275589 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 00:09:31.275654 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 00:09:31.279231 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 00:09:31.283373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 00:09:31.287049 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 00:09:31.289291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:09:31.291610 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 00:09:31.296450 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 00:09:31.298822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:09:31.304323 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Oct 13 00:09:31.306254 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:09:31.308412 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 00:09:31.314213 systemd-journald[1143]: Time spent on flushing to /var/log/journal/6cf9039a06e948d68ac952c384a89c0a is 23.598ms for 1053 entries. Oct 13 00:09:31.314213 systemd-journald[1143]: System Journal (/var/log/journal/6cf9039a06e948d68ac952c384a89c0a) is 8M, max 195.6M, 187.6M free. Oct 13 00:09:31.431111 systemd-journald[1143]: Received client request to flush runtime journal. Oct 13 00:09:31.312055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 00:09:31.325000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 00:09:31.329382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 00:09:31.332909 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 00:09:31.336714 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 00:09:31.339142 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 00:09:31.347429 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 13 00:09:31.432867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 00:09:31.466988 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 00:09:31.472460 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 00:09:31.477092 kernel: loop0: detected capacity change from 0 to 147912 Oct 13 00:09:31.483011 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 00:09:31.492540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:09:31.495586 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 13 00:09:31.577865 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 00:09:31.581764 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 00:09:31.591254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 00:09:31.612705 kernel: loop1: detected capacity change from 0 to 219144 Oct 13 00:09:31.621620 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Oct 13 00:09:31.621643 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Oct 13 00:09:31.643899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 00:09:31.735177 kernel: loop2: detected capacity change from 0 to 138176 Oct 13 00:09:31.777869 kernel: loop3: detected capacity change from 0 to 147912 Oct 13 00:09:31.877867 kernel: loop4: detected capacity change from 0 to 219144 Oct 13 00:09:31.894861 kernel: loop5: detected capacity change from 0 to 138176 Oct 13 00:09:32.111888 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 00:09:32.114285 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 13 00:09:32.115097 (sd-merge)[1201]: Merged extensions into '/usr'. Oct 13 00:09:32.119846 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Oct 13 00:09:32.136885 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 00:09:32.137665 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 00:09:32.141997 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 00:09:32.142022 systemd[1]: Reloading... Oct 13 00:09:32.202416 zram_generator::config[1230]: No configuration found. Oct 13 00:09:32.323061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 13 00:09:32.387760 systemd[1]: Reloading finished in 245 ms. Oct 13 00:09:32.405761 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 00:09:32.408528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 00:09:32.429612 systemd[1]: Starting ensure-sysext.service... Oct 13 00:09:32.432423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 00:09:32.435985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 00:09:32.448793 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)... Oct 13 00:09:32.448808 systemd[1]: Reloading... Oct 13 00:09:32.459864 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 00:09:32.460176 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 00:09:32.461242 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 00:09:32.461520 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Oct 13 00:09:32.461610 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Oct 13 00:09:32.466019 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 00:09:32.466034 systemd-tmpfiles[1268]: Skipping /boot Oct 13 00:09:32.470918 systemd-udevd[1269]: Using default interface naming scheme 'v255'. Oct 13 00:09:32.480094 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 00:09:32.480107 systemd-tmpfiles[1268]: Skipping /boot Oct 13 00:09:32.536860 zram_generator::config[1318]: No configuration found. Oct 13 00:09:32.583891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1306) Oct 13 00:09:32.630872 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 13 00:09:32.635874 kernel: ACPI: button: Power Button [PWRF] Oct 13 00:09:32.643387 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 13 00:09:32.643709 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 00:09:32.644015 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 13 00:09:32.646257 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 00:09:32.669856 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 13 00:09:32.699683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 13 00:09:32.744866 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 00:09:32.828824 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 00:09:32.829259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 00:09:32.832034 systemd[1]: Reloading finished in 382 ms. Oct 13 00:09:32.856511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 00:09:32.859644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 00:09:32.866819 kernel: kvm_amd: TSC scaling supported Oct 13 00:09:32.866892 kernel: kvm_amd: Nested Virtualization enabled Oct 13 00:09:32.866915 kernel: kvm_amd: Nested Paging enabled Oct 13 00:09:32.867874 kernel: kvm_amd: LBR virtualization supported Oct 13 00:09:32.869013 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 00:09:32.870200 kernel: kvm_amd: Virtual GIF supported Oct 13 00:09:32.923872 kernel: EDAC MC: Ver: 3.0.0 Oct 13 00:09:32.945180 systemd[1]: Finished ensure-sysext.service. Oct 13 00:09:32.960502 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 13 00:09:32.985078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 00:09:32.999186 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:09:33.004186 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 00:09:33.007438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:09:33.008933 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 13 00:09:33.015110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:09:33.023045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:09:33.028037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:09:33.032567 lvm[1371]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 13 00:09:33.033937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:09:33.036295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:09:33.038296 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 00:09:33.040681 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:09:33.044090 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 00:09:33.050369 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 00:09:33.057168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 00:09:33.072446 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 00:09:33.082376 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 00:09:33.086785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 00:09:33.090699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 00:09:33.092459 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 13 00:09:33.096486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:09:33.096947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:09:33.101256 augenrules[1404]: No rules Oct 13 00:09:33.103944 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:09:33.104288 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:09:33.106694 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:09:33.107063 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:09:33.109483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:09:33.109737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:09:33.110149 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:09:33.110362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:09:33.110827 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 00:09:33.111583 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 00:09:33.123941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 00:09:33.130275 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 13 00:09:33.133283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:09:33.133526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:09:33.135342 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 00:09:33.137269 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 13 00:09:33.143351 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 00:09:33.152442 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 00:09:33.156038 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 00:09:33.158794 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 00:09:33.164119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:09:33.168605 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 00:09:33.172606 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 13 00:09:33.203142 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 00:09:33.309308 systemd-networkd[1384]: lo: Link UP Oct 13 00:09:33.312274 systemd-networkd[1384]: lo: Gained carrier Oct 13 00:09:33.316866 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 00:09:33.320017 systemd[1]: Reached target time-set.target - System Time Set. 
Oct 13 00:09:33.324081 systemd-networkd[1384]: Enumeration completed Oct 13 00:09:33.324500 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:09:33.324505 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 00:09:33.325303 systemd-networkd[1384]: eth0: Link UP Oct 13 00:09:33.325309 systemd-networkd[1384]: eth0: Gained carrier Oct 13 00:09:33.325325 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:09:33.325693 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 00:09:33.328369 systemd-resolved[1385]: Positive Trust Anchors: Oct 13 00:09:33.328382 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 00:09:33.328413 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 00:09:33.334159 systemd-resolved[1385]: Defaulting to hostname 'linux'. Oct 13 00:09:33.336920 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 00:09:33.337216 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 00:09:33.337568 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Oct 13 00:09:34.388144 systemd-resolved[1385]: Clock change detected. Flushing caches. Oct 13 00:09:34.388342 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 00:09:34.388416 systemd-timesyncd[1389]: Initial clock synchronization to Mon 2025-10-13 00:09:34.388102 UTC. Oct 13 00:09:34.390533 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 00:09:34.392993 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 00:09:34.395407 systemd[1]: Reached target network.target - Network. Oct 13 00:09:34.398791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:09:34.401041 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 00:09:34.403737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 00:09:34.406813 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 00:09:34.409422 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 00:09:34.416465 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 00:09:34.418680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 00:09:34.418790 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Oct 13 00:09:34.418831 systemd[1]: Reached target paths.target - Path Units. Oct 13 00:09:34.419107 systemd[1]: Reached target timers.target - Timer Units. Oct 13 00:09:34.420124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 00:09:34.422887 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 00:09:34.426882 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 00:09:34.427591 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 00:09:34.432989 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 00:09:34.441483 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 00:09:34.443916 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 00:09:34.447201 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 00:09:34.449696 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 00:09:34.452705 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:09:34.457662 systemd[1]: Reached target basic.target - Basic System. Oct 13 00:09:34.459564 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:09:34.459604 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:09:34.473923 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 00:09:34.477152 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 00:09:34.479854 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 00:09:34.483783 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 00:09:34.486010 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 00:09:34.488224 jq[1446]: false Oct 13 00:09:34.488959 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 00:09:34.494403 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 00:09:34.499814 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 00:09:34.502986 dbus-daemon[1445]: [system] SELinux support is enabled Oct 13 00:09:34.505046 extend-filesystems[1447]: Found loop3 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found loop4 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found loop5 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found sr0 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda1 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda2 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda3 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found usr Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda4 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda6 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda7 Oct 13 00:09:34.505046 extend-filesystems[1447]: Found vda9 Oct 13 00:09:34.505046 extend-filesystems[1447]: Checking size of /dev/vda9 Oct 13 00:09:34.519037 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 13 00:09:34.525493 extend-filesystems[1447]: Resized partition /dev/vda9 Oct 13 00:09:34.533973 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Oct 13 00:09:34.549130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1319) Oct 13 00:09:34.549172 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 00:09:34.563075 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 00:09:34.566681 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 00:09:34.567519 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 00:09:34.571343 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 00:09:34.578158 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 00:09:34.581650 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 00:09:34.596323 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 00:09:34.596692 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 00:09:34.597182 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 00:09:34.597531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 00:09:34.597808 update_engine[1468]: I20251013 00:09:34.597707 1468 main.cc:92] Flatcar Update Engine starting Oct 13 00:09:34.599595 update_engine[1468]: I20251013 00:09:34.599550 1468 update_check_scheduler.cc:74] Next update check in 7m51s Oct 13 00:09:34.602318 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 00:09:34.602654 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 00:09:34.605000 jq[1469]: true Oct 13 00:09:34.612572 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 00:09:34.623702 jq[1472]: true Oct 13 00:09:34.626313 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 00:09:34.641314 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 00:09:34.641314 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 00:09:34.641314 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 00:09:34.649840 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Oct 13 00:09:34.645172 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 00:09:34.645654 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 00:09:34.662935 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button) Oct 13 00:09:34.663781 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 00:09:34.665945 systemd[1]: Started update-engine.service - Update Engine. Oct 13 00:09:34.666307 tar[1471]: linux-amd64/LICENSE Oct 13 00:09:34.673391 tar[1471]: linux-amd64/helm Oct 13 00:09:34.666321 systemd-logind[1463]: New seat seat0. Oct 13 00:09:34.670239 systemd[1]: Started systemd-logind.service - User Login Management. 
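Note: the extend-filesystems/resize2fs step above is an online grow of the ext4 root: 553472 blocks of 4 KiB is roughly 2.1 GiB, and 1864699 blocks of 4 KiB (1864699 × 4096 ≈ 7.64 GB) is roughly 7.1 GiB, so the filesystem is enlarged to fill its partition. A rough manual equivalent, using the device name from the log (Flatcar runs this automatically, so doing it by hand is normally unnecessary):

    # Grow a mounted ext4 filesystem to fill its partition; ext4 supports online growth.
    resize2fs /dev/vda9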
Oct 13 00:09:34.673527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 00:09:34.673691 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 00:09:34.676260 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 00:09:34.676410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 00:09:34.688095 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 00:09:34.704814 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Oct 13 00:09:34.705409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 00:09:34.709600 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 00:09:34.754415 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 00:09:34.774824 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 00:09:34.805301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 00:09:34.815239 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 00:09:34.822535 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 00:09:34.822830 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 00:09:34.828080 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 00:09:34.845743 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 00:09:34.854113 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 00:09:34.858037 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 00:09:34.860512 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 00:09:34.880795 containerd[1473]: time="2025-10-13T00:09:34.879897210Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Oct 13 00:09:34.905317 containerd[1473]: time="2025-10-13T00:09:34.905174392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907298 containerd[1473]: time="2025-10-13T00:09:34.907261347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.110-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907298 containerd[1473]: time="2025-10-13T00:09:34.907287947Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 13 00:09:34.907364 containerd[1473]: time="2025-10-13T00:09:34.907302264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 13 00:09:34.907515 containerd[1473]: time="2025-10-13T00:09:34.907486149Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 13 00:09:34.907515 containerd[1473]: time="2025-10-13T00:09:34.907508010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907592 containerd[1473]: time="2025-10-13T00:09:34.907575056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907592 containerd[1473]: time="2025-10-13T00:09:34.907590104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907927 containerd[1473]: time="2025-10-13T00:09:34.907901268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907957 containerd[1473]: time="2025-10-13T00:09:34.907925714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 13 00:09:34.907957 containerd[1473]: time="2025-10-13T00:09:34.907942385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 13 00:09:34.908001 containerd[1473]: time="2025-10-13T00:09:34.907955389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 13 00:09:34.908093 containerd[1473]: time="2025-10-13T00:09:34.908072479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 13 00:09:34.908396 containerd[1473]: time="2025-10-13T00:09:34.908366240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 13 00:09:34.908630 containerd[1473]: time="2025-10-13T00:09:34.908569261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 13 00:09:34.908630 containerd[1473]: time="2025-10-13T00:09:34.908592575Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 13 00:09:34.908746 containerd[1473]: time="2025-10-13T00:09:34.908726836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 13 00:09:34.908834 containerd[1473]: time="2025-10-13T00:09:34.908816495Z" level=info msg="metadata content store policy set" policy=shared Oct 13 00:09:34.915147 containerd[1473]: time="2025-10-13T00:09:34.915118498Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 13 00:09:34.915197 containerd[1473]: time="2025-10-13T00:09:34.915167680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 13 00:09:34.915197 containerd[1473]: time="2025-10-13T00:09:34.915187988Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 13 00:09:34.915197 containerd[1473]: time="2025-10-13T00:09:34.915206914Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 13 00:09:34.915270 containerd[1473]: time="2025-10-13T00:09:34.915223194Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Oct 13 00:09:34.915475 containerd[1473]: time="2025-10-13T00:09:34.915439059Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 13 00:09:34.915817 containerd[1473]: time="2025-10-13T00:09:34.915795588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 13 00:09:34.915966 containerd[1473]: time="2025-10-13T00:09:34.915946311Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 13 00:09:34.915993 containerd[1473]: time="2025-10-13T00:09:34.915971007Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 13 00:09:34.916012 containerd[1473]: time="2025-10-13T00:09:34.915988410Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 13 00:09:34.916012 containerd[1473]: time="2025-10-13T00:09:34.916003578Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916012 containerd[1473]: time="2025-10-13T00:09:34.916019057Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916085 containerd[1473]: time="2025-10-13T00:09:34.916030298Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916085 containerd[1473]: time="2025-10-13T00:09:34.916042902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916085 containerd[1473]: time="2025-10-13T00:09:34.916055846Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916085 containerd[1473]: time="2025-10-13T00:09:34.916067578Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916085 containerd[1473]: time="2025-10-13T00:09:34.916078980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916089209Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916106712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916118534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916129745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916142940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916153870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916166 containerd[1473]: time="2025-10-13T00:09:34.916165923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916176382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916188325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916202191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916219233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916235704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916250582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916266291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916299 containerd[1473]: time="2025-10-13T00:09:34.916283383Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 13 00:09:34.916437 containerd[1473]: time="2025-10-13T00:09:34.916319190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916437 containerd[1473]: time="2025-10-13T00:09:34.916332876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.916437 containerd[1473]: time="2025-10-13T00:09:34.916343245Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 13 00:09:34.917088 containerd[1473]: time="2025-10-13T00:09:34.917068867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 13 00:09:34.917118 containerd[1473]: time="2025-10-13T00:09:34.917092711Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 13 00:09:34.917205 containerd[1473]: time="2025-10-13T00:09:34.917185636Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 13 00:09:34.918797 containerd[1473]: time="2025-10-13T00:09:34.918251806Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 13 00:09:34.918797 containerd[1473]: time="2025-10-13T00:09:34.918410163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 13 00:09:34.918797 containerd[1473]: time="2025-10-13T00:09:34.918429069Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 13 00:09:34.918797 containerd[1473]: time="2025-10-13T00:09:34.918451420Z" level=info msg="NRI interface is disabled by configuration." Oct 13 00:09:34.918797 containerd[1473]: time="2025-10-13T00:09:34.918461059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 13 00:09:34.918929 containerd[1473]: time="2025-10-13T00:09:34.918818058Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 13 00:09:34.918929 containerd[1473]: time="2025-10-13T00:09:34.918876027Z" level=info msg="Connect containerd service" Oct 13 00:09:34.918929 containerd[1473]: time="2025-10-13T00:09:34.918910772Z" level=info msg="using legacy CRI server" Oct 13 00:09:34.918929 containerd[1473]: time="2025-10-13T00:09:34.918919729Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 00:09:34.919110 containerd[1473]: time="2025-10-13T00:09:34.919020067Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 13 00:09:34.919948 containerd[1473]: time="2025-10-13T00:09:34.919926548Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:09:34.920094 
containerd[1473]: time="2025-10-13T00:09:34.920059958Z" level=info msg="Start subscribing containerd event" Oct 13 00:09:34.920120 containerd[1473]: time="2025-10-13T00:09:34.920110874Z" level=info msg="Start recovering state" Oct 13 00:09:34.920189 containerd[1473]: time="2025-10-13T00:09:34.920175214Z" level=info msg="Start event monitor" Oct 13 00:09:34.920214 containerd[1473]: time="2025-10-13T00:09:34.920207285Z" level=info msg="Start snapshots syncer" Oct 13 00:09:34.920242 containerd[1473]: time="2025-10-13T00:09:34.920217384Z" level=info msg="Start cni network conf syncer for default" Oct 13 00:09:34.920242 containerd[1473]: time="2025-10-13T00:09:34.920226300Z" level=info msg="Start streaming server" Oct 13 00:09:34.920677 containerd[1473]: time="2025-10-13T00:09:34.920654664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 00:09:34.920748 containerd[1473]: time="2025-10-13T00:09:34.920725166Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 00:09:34.920823 containerd[1473]: time="2025-10-13T00:09:34.920806970Z" level=info msg="containerd successfully booted in 0.042215s" Oct 13 00:09:34.920918 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 00:09:35.128166 tar[1471]: linux-amd64/README.md Oct 13 00:09:35.145247 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 00:09:35.869103 systemd-networkd[1384]: eth0: Gained IPv6LL Oct 13 00:09:35.873254 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 00:09:35.877173 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 00:09:35.893274 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 00:09:35.897625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:09:35.901505 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 00:09:35.928437 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 00:09:35.929003 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 00:09:35.932526 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 00:09:35.933554 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 00:09:37.934747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:09:37.937735 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 00:09:37.939702 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:09:37.941900 systemd[1]: Startup finished in 1.565s (kernel) + 6.464s (initrd) + 6.701s (userspace) = 14.732s. Oct 13 00:09:38.209001 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 00:09:38.211005 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:40574.service - OpenSSH per-connection server daemon (10.0.0.1:40574). Oct 13 00:09:38.276196 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 40574 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:38.279195 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:38.288287 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 00:09:38.305116 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
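Note: during the containerd start above, the CRI plugin logged "no network config found in /etc/cni/net.d: cni plugin not initialized". That is expected at this stage; pod networking is usually supplied later, for example by a CNI add-on deployed onto the cluster. For orientation, a hedged example of the kind of conflist the CRI plugin looks for under /etc/cni/net.d (the name, bridge, and subnet are illustrative values, not this host's configuration):

    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.85.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

With NetworkPluginMaxConfNum:1 (visible in the CRI config dump above), only one conflist from that directory is loaded.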
Oct 13 00:09:38.313182 systemd-logind[1463]: New session 1 of user core. Oct 13 00:09:38.326247 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 00:09:38.338104 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 00:09:38.347325 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 00:09:38.351183 systemd-logind[1463]: New session c1 of user core. Oct 13 00:09:38.524177 systemd[1573]: Queued start job for default target default.target. Oct 13 00:09:38.534028 kubelet[1558]: E1013 00:09:38.532447 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:09:38.535229 systemd[1573]: Created slice app.slice - User Application Slice. Oct 13 00:09:38.535257 systemd[1573]: Reached target paths.target - Paths. Oct 13 00:09:38.535304 systemd[1573]: Reached target timers.target - Timers. Oct 13 00:09:38.535624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:09:38.535861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:09:38.536631 systemd[1]: kubelet.service: Consumed 2.425s CPU time, 258.6M memory peak. Oct 13 00:09:38.537297 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 00:09:38.549710 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 00:09:38.549894 systemd[1573]: Reached target sockets.target - Sockets. Oct 13 00:09:38.549951 systemd[1573]: Reached target basic.target - Basic System. Oct 13 00:09:38.550010 systemd[1573]: Reached target default.target - Main User Target. Oct 13 00:09:38.550053 systemd[1573]: Startup finished in 189ms. Oct 13 00:09:38.550410 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 00:09:38.571213 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 00:09:38.646149 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:40584.service - OpenSSH per-connection server daemon (10.0.0.1:40584). Oct 13 00:09:38.693194 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 40584 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:38.698012 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:38.704061 systemd-logind[1463]: New session 2 of user core. Oct 13 00:09:38.718914 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 00:09:38.775791 sshd[1588]: Connection closed by 10.0.0.1 port 40584 Oct 13 00:09:38.776088 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Oct 13 00:09:38.788720 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:40584.service: Deactivated successfully. Oct 13 00:09:38.790814 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 00:09:38.792492 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Oct 13 00:09:38.801096 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:40592.service - OpenSSH per-connection server daemon (10.0.0.1:40592). Oct 13 00:09:38.802209 systemd-logind[1463]: Removed session 2. 
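Note: the kubelet exit above (run.go:72 "command failed") is caused by /var/lib/kubelet/config.yaml not existing yet; on a kubeadm-managed node that file is written during kubeadm init/join, so this failure and the scheduled restarts that follow are expected until the node is bootstrapped. For reference, a minimal sketch of the KubeletConfiguration that normally ends up at that path (field values are illustrative, not this host's):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Matches the SystemdCgroup:true runc option shown in the containerd CRI config above.
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests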
Oct 13 00:09:38.841606 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 40592 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:38.843120 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:38.847639 systemd-logind[1463]: New session 3 of user core. Oct 13 00:09:38.855965 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 00:09:38.906282 sshd[1596]: Connection closed by 10.0.0.1 port 40592 Oct 13 00:09:38.906678 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Oct 13 00:09:38.919701 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:40592.service: Deactivated successfully. Oct 13 00:09:38.921721 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 00:09:38.923223 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Oct 13 00:09:38.937096 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:40598.service - OpenSSH per-connection server daemon (10.0.0.1:40598). Oct 13 00:09:38.938317 systemd-logind[1463]: Removed session 3. Oct 13 00:09:38.976246 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 40598 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:38.977966 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:38.982451 systemd-logind[1463]: New session 4 of user core. Oct 13 00:09:38.991924 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 00:09:39.046147 sshd[1604]: Connection closed by 10.0.0.1 port 40598 Oct 13 00:09:39.046927 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Oct 13 00:09:39.065384 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:40598.service: Deactivated successfully. Oct 13 00:09:39.067301 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 00:09:39.068019 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Oct 13 00:09:39.080329 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:40602.service - OpenSSH per-connection server daemon (10.0.0.1:40602). Oct 13 00:09:39.081671 systemd-logind[1463]: Removed session 4. Oct 13 00:09:39.118176 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 40602 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:39.119723 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:39.124527 systemd-logind[1463]: New session 5 of user core. Oct 13 00:09:39.137916 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 00:09:39.198564 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 00:09:39.198995 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:09:39.214395 sudo[1613]: pam_unix(sudo:session): session closed for user root Oct 13 00:09:39.216353 sshd[1612]: Connection closed by 10.0.0.1 port 40602 Oct 13 00:09:39.218014 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Oct 13 00:09:39.232064 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:40602.service: Deactivated successfully. Oct 13 00:09:39.234446 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 00:09:39.236379 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Oct 13 00:09:39.243351 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:40610.service - OpenSSH per-connection server daemon (10.0.0.1:40610). 
Oct 13 00:09:39.244678 systemd-logind[1463]: Removed session 5. Oct 13 00:09:39.283045 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 40610 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:39.284812 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:39.289794 systemd-logind[1463]: New session 6 of user core. Oct 13 00:09:39.299944 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 00:09:39.357495 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 00:09:39.357915 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:09:39.362027 sudo[1623]: pam_unix(sudo:session): session closed for user root Oct 13 00:09:39.368842 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 00:09:39.369194 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:09:39.389163 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 00:09:39.424139 augenrules[1645]: No rules Oct 13 00:09:39.426512 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:09:39.427152 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 00:09:39.428700 sudo[1622]: pam_unix(sudo:session): session closed for user root Oct 13 00:09:39.430651 sshd[1621]: Connection closed by 10.0.0.1 port 40610 Oct 13 00:09:39.431070 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Oct 13 00:09:39.447177 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:40610.service: Deactivated successfully. Oct 13 00:09:39.449014 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 00:09:39.450564 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Oct 13 00:09:39.460111 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:40626.service - OpenSSH per-connection server daemon (10.0.0.1:40626). Oct 13 00:09:39.461292 systemd-logind[1463]: Removed session 6. Oct 13 00:09:39.500836 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 40626 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:09:39.502308 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:09:39.506744 systemd-logind[1463]: New session 7 of user core. Oct 13 00:09:39.516937 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 00:09:39.571898 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 00:09:39.572295 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:09:40.356285 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 00:09:40.356329 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 00:09:41.409399 dockerd[1676]: time="2025-10-13T00:09:41.409280439Z" level=info msg="Starting up" Oct 13 00:09:42.170094 dockerd[1676]: time="2025-10-13T00:09:42.170019119Z" level=info msg="Loading containers: start." 
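Note: the sudo commands above delete the shipped fragments under /etc/audit/rules.d/ and reload the audit rules, after which augenrules again reports "No rules". For context, a hedged example of the kind of watch rule such a fragment can contain (the watched path and key are illustrative):

    # Record writes and attribute changes to sshd_config, tagged with the key "sshd-config".
    -w /etc/ssh/sshd_config -p wa -k sshd-config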
Oct 13 00:09:42.395795 kernel: Initializing XFRM netlink socket Oct 13 00:09:42.497483 systemd-networkd[1384]: docker0: Link UP Oct 13 00:09:42.542662 dockerd[1676]: time="2025-10-13T00:09:42.542584027Z" level=info msg="Loading containers: done." Oct 13 00:09:42.583060 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3661894270-merged.mount: Deactivated successfully. Oct 13 00:09:42.588083 dockerd[1676]: time="2025-10-13T00:09:42.588032973Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 00:09:42.588166 dockerd[1676]: time="2025-10-13T00:09:42.588155092Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Oct 13 00:09:42.588306 dockerd[1676]: time="2025-10-13T00:09:42.588284254Z" level=info msg="Daemon has completed initialization" Oct 13 00:09:42.640924 dockerd[1676]: time="2025-10-13T00:09:42.640831998Z" level=info msg="API listen on /run/docker.sock" Oct 13 00:09:42.641055 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 00:09:43.289457 containerd[1473]: time="2025-10-13T00:09:43.289413443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 00:09:44.416109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183368806.mount: Deactivated successfully. Oct 13 00:09:48.549700 containerd[1473]: time="2025-10-13T00:09:48.549608855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:48.550504 containerd[1473]: time="2025-10-13T00:09:48.550415789Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 13 00:09:48.551936 containerd[1473]: time="2025-10-13T00:09:48.551859507Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:48.557725 containerd[1473]: time="2025-10-13T00:09:48.557671862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:48.559993 containerd[1473]: time="2025-10-13T00:09:48.559943353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 5.270484936s" Oct 13 00:09:48.559993 containerd[1473]: time="2025-10-13T00:09:48.559992916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 13 00:09:48.561544 containerd[1473]: time="2025-10-13T00:09:48.561495665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 00:09:48.786652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 00:09:48.803956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
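Note: the pull above reads 27,065,392 bytes for kube-apiserver:v1.34.1 in about 5.27 s, i.e. roughly 5 MB/s from the registry. These PullImage messages come from containerd's CRI plugin; assuming crictl is installed and configured for the containerd socket shown earlier (/run/containerd/containerd.sock), the same image could be pulled or inspected by hand:

    # Pull and then list the image via the CRI runtime endpoint.
    crictl pull registry.k8s.io/kube-apiserver:v1.34.1
    crictl images | grep kube-apiserver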
Oct 13 00:09:49.058416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:09:49.063560 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:09:49.124802 kubelet[1936]: E1013 00:09:49.124714 1936 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:09:49.131993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:09:49.132225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:09:49.132672 systemd[1]: kubelet.service: Consumed 333ms CPU time, 112.4M memory peak. Oct 13 00:09:50.562258 containerd[1473]: time="2025-10-13T00:09:50.562132242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:50.564392 containerd[1473]: time="2025-10-13T00:09:50.564328312Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 13 00:09:50.568609 containerd[1473]: time="2025-10-13T00:09:50.568548179Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:50.572127 containerd[1473]: time="2025-10-13T00:09:50.572077390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:50.573264 containerd[1473]: time="2025-10-13T00:09:50.573198824Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.011665709s" Oct 13 00:09:50.573264 containerd[1473]: time="2025-10-13T00:09:50.573239811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 13 00:09:50.573794 containerd[1473]: time="2025-10-13T00:09:50.573745850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 00:09:51.605932 containerd[1473]: time="2025-10-13T00:09:51.605863707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:51.606756 containerd[1473]: time="2025-10-13T00:09:51.606683064Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 13 00:09:51.607985 containerd[1473]: time="2025-10-13T00:09:51.607942938Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:51.610720 containerd[1473]: time="2025-10-13T00:09:51.610687256Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:51.611813 containerd[1473]: time="2025-10-13T00:09:51.611756583Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.037959717s" Oct 13 00:09:51.611813 containerd[1473]: time="2025-10-13T00:09:51.611808089Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 13 00:09:51.612497 containerd[1473]: time="2025-10-13T00:09:51.612328516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 00:09:53.385535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849475749.mount: Deactivated successfully. Oct 13 00:09:53.705271 containerd[1473]: time="2025-10-13T00:09:53.705092731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:53.705846 containerd[1473]: time="2025-10-13T00:09:53.705791421Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 13 00:09:53.706947 containerd[1473]: time="2025-10-13T00:09:53.706917494Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:53.708820 containerd[1473]: time="2025-10-13T00:09:53.708785509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:53.709388 containerd[1473]: time="2025-10-13T00:09:53.709357061Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.096997867s" Oct 13 00:09:53.709425 containerd[1473]: time="2025-10-13T00:09:53.709393339Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 13 00:09:53.710010 containerd[1473]: time="2025-10-13T00:09:53.709853542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 00:09:54.326662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3659269571.mount: Deactivated successfully. 
Oct 13 00:09:55.900218 containerd[1473]: time="2025-10-13T00:09:55.900108367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:55.934010 containerd[1473]: time="2025-10-13T00:09:55.933938935Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 13 00:09:56.051194 containerd[1473]: time="2025-10-13T00:09:56.051112722Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:56.117460 containerd[1473]: time="2025-10-13T00:09:56.117377309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:56.119080 containerd[1473]: time="2025-10-13T00:09:56.118992729Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.409111996s" Oct 13 00:09:56.119080 containerd[1473]: time="2025-10-13T00:09:56.119031181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 13 00:09:56.122163 containerd[1473]: time="2025-10-13T00:09:56.121329243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 00:09:56.975842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472964481.mount: Deactivated successfully. 
Oct 13 00:09:56.985970 containerd[1473]: time="2025-10-13T00:09:56.985895120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:56.987302 containerd[1473]: time="2025-10-13T00:09:56.987238701Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 13 00:09:56.988681 containerd[1473]: time="2025-10-13T00:09:56.988635010Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:56.991475 containerd[1473]: time="2025-10-13T00:09:56.991425005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:09:56.992368 containerd[1473]: time="2025-10-13T00:09:56.992315726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 870.940367ms" Oct 13 00:09:56.992368 containerd[1473]: time="2025-10-13T00:09:56.992357544Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 13 00:09:56.993474 containerd[1473]: time="2025-10-13T00:09:56.993420338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 00:09:59.224264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 00:09:59.231073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:09:59.831693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:09:59.836302 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:10:00.483786 kubelet[2065]: E1013 00:10:00.483690 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:10:00.488307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:10:00.488584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:10:00.489033 systemd[1]: kubelet.service: Consumed 344ms CPU time, 115.2M memory peak. 
Oct 13 00:10:00.600233 containerd[1473]: time="2025-10-13T00:10:00.600169096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:00.602451 containerd[1473]: time="2025-10-13T00:10:00.602400913Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 13 00:10:00.603783 containerd[1473]: time="2025-10-13T00:10:00.603742109Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:00.606826 containerd[1473]: time="2025-10-13T00:10:00.606782644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:00.608102 containerd[1473]: time="2025-10-13T00:10:00.608071411Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.614617771s" Oct 13 00:10:00.608152 containerd[1473]: time="2025-10-13T00:10:00.608104764Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 13 00:10:04.575512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:10:04.575685 systemd[1]: kubelet.service: Consumed 344ms CPU time, 115.2M memory peak. Oct 13 00:10:04.586990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:10:04.619018 systemd[1]: Reload requested from client PID 2106 ('systemctl') (unit session-7.scope)... Oct 13 00:10:04.619041 systemd[1]: Reloading... Oct 13 00:10:04.708797 zram_generator::config[2150]: No configuration found. Oct 13 00:10:05.054005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 13 00:10:05.159420 systemd[1]: Reloading finished in 539 ms. Oct 13 00:10:05.204141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:10:05.207926 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:10:05.208615 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 00:10:05.209015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:10:05.209065 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.1M memory peak. Oct 13 00:10:05.211118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:10:05.428880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:10:05.434326 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 00:10:05.649246 kubelet[2200]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
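Note: during the reload above, systemd warns that line 6 of /usr/lib/systemd/system/docker.socket still points ListenStream= at the legacy /var/run/ directory and transparently rewrites it to /run/docker.sock. Updating the unit (or adding a drop-in) silences the warning; a sketch of the relevant stanza, where the ownership settings are the common upstream defaults rather than values read from this unit:

    [Socket]
    # /var/run is a symlink to /run, so point the socket at the canonical path directly.
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker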
Oct 13 00:10:05.649246 kubelet[2200]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 00:10:05.649246 kubelet[2200]: I1013 00:10:05.648326 2200 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 00:10:06.823009 kubelet[2200]: I1013 00:10:06.822956 2200 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 00:10:06.823009 kubelet[2200]: I1013 00:10:06.822991 2200 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 00:10:06.824947 kubelet[2200]: I1013 00:10:06.824922 2200 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 00:10:06.824947 kubelet[2200]: I1013 00:10:06.824938 2200 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 00:10:06.825181 kubelet[2200]: I1013 00:10:06.825161 2200 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 00:10:06.912841 kubelet[2200]: I1013 00:10:06.912279 2200 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 00:10:06.912841 kubelet[2200]: E1013 00:10:06.912304 2200 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 00:10:06.915634 kubelet[2200]: E1013 00:10:06.915603 2200 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 13 00:10:06.915732 kubelet[2200]: I1013 00:10:06.915658 2200 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 13 00:10:06.922361 kubelet[2200]: I1013 00:10:06.922333 2200 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 00:10:06.923151 kubelet[2200]: I1013 00:10:06.923112 2200 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 00:10:06.923370 kubelet[2200]: I1013 00:10:06.923139 2200 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 00:10:06.923487 kubelet[2200]: I1013 00:10:06.923379 2200 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 00:10:06.923487 kubelet[2200]: I1013 00:10:06.923389 2200 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 00:10:06.923537 kubelet[2200]: I1013 00:10:06.923521 2200 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 00:10:06.958134 kubelet[2200]: I1013 00:10:06.958107 2200 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:10:06.959452 kubelet[2200]: I1013 00:10:06.959414 2200 kubelet.go:475] "Attempting to sync node with API server" Oct 13 00:10:06.959452 kubelet[2200]: I1013 00:10:06.959463 2200 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 00:10:06.959624 kubelet[2200]: I1013 00:10:06.959511 2200 kubelet.go:387] "Adding apiserver pod source" Oct 13 00:10:06.959624 kubelet[2200]: I1013 00:10:06.959541 2200 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 00:10:06.960738 kubelet[2200]: E1013 00:10:06.960687 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 00:10:06.960738 kubelet[2200]: E1013 00:10:06.960694 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 00:10:06.961579 kubelet[2200]: I1013 00:10:06.961539 2200 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 13 00:10:06.962244 kubelet[2200]: I1013 00:10:06.962211 2200 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 00:10:06.962287 kubelet[2200]: I1013 00:10:06.962247 2200 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 00:10:06.962344 kubelet[2200]: W1013 00:10:06.962328 2200 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 00:10:06.965807 kubelet[2200]: I1013 00:10:06.965689 2200 server.go:1262] "Started kubelet" Oct 13 00:10:06.966702 kubelet[2200]: I1013 00:10:06.966224 2200 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 00:10:06.966702 kubelet[2200]: I1013 00:10:06.966296 2200 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 00:10:06.969170 kubelet[2200]: I1013 00:10:06.968875 2200 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 00:10:06.969170 kubelet[2200]: I1013 00:10:06.969039 2200 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 00:10:06.969170 kubelet[2200]: I1013 00:10:06.969121 2200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 00:10:06.971085 kubelet[2200]: E1013 00:10:06.970112 2200 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186de471d742cc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 00:10:06.96564848 +0000 UTC m=+1.358078954,LastTimestamp:2025-10-13 00:10:06.96564848 +0000 UTC m=+1.358078954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 00:10:06.972272 kubelet[2200]: I1013 00:10:06.972233 2200 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 00:10:06.972553 kubelet[2200]: I1013 00:10:06.972524 2200 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 00:10:06.972731 kubelet[2200]: I1013 00:10:06.972625 2200 reconciler.go:29] "Reconciler: start to sync state" Oct 13 00:10:06.973335 kubelet[2200]: E1013 00:10:06.973310 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 00:10:06.975461 kubelet[2200]: I1013 00:10:06.973723 2200 factory.go:223] Registration of the systemd container factory 
successfully Oct 13 00:10:06.975461 kubelet[2200]: I1013 00:10:06.973956 2200 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 00:10:06.975694 kubelet[2200]: E1013 00:10:06.975673 2200 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:10:06.975930 kubelet[2200]: E1013 00:10:06.975846 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Oct 13 00:10:06.975930 kubelet[2200]: I1013 00:10:06.975922 2200 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 00:10:06.977535 kubelet[2200]: I1013 00:10:06.977506 2200 server.go:310] "Adding debug handlers to kubelet server" Oct 13 00:10:06.977833 kubelet[2200]: I1013 00:10:06.977749 2200 factory.go:223] Registration of the containerd container factory successfully Oct 13 00:10:06.978584 kubelet[2200]: E1013 00:10:06.978562 2200 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 00:10:06.992085 kubelet[2200]: I1013 00:10:06.992040 2200 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 00:10:06.992085 kubelet[2200]: I1013 00:10:06.992063 2200 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 00:10:06.992085 kubelet[2200]: I1013 00:10:06.992088 2200 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:10:07.029284 kubelet[2200]: I1013 00:10:07.029225 2200 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 00:10:07.030730 kubelet[2200]: I1013 00:10:07.030682 2200 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 00:10:07.031215 kubelet[2200]: I1013 00:10:07.030734 2200 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 00:10:07.031215 kubelet[2200]: I1013 00:10:07.030816 2200 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 00:10:07.031215 kubelet[2200]: E1013 00:10:07.030868 2200 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:10:07.031480 kubelet[2200]: E1013 00:10:07.031448 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 00:10:07.045979 kubelet[2200]: I1013 00:10:07.045943 2200 policy_none.go:49] "None policy: Start" Oct 13 00:10:07.045979 kubelet[2200]: I1013 00:10:07.045981 2200 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 00:10:07.046065 kubelet[2200]: I1013 00:10:07.045999 2200 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 00:10:07.076298 kubelet[2200]: E1013 00:10:07.076122 2200 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:10:07.131687 kubelet[2200]: E1013 00:10:07.131594 2200 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 00:10:07.141857 kubelet[2200]: I1013 00:10:07.141801 2200 policy_none.go:47] "Start" Oct 13 00:10:07.147183 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 00:10:07.162896 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 00:10:07.166690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 00:10:07.176688 kubelet[2200]: E1013 00:10:07.176638 2200 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 00:10:07.177016 kubelet[2200]: E1013 00:10:07.176979 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Oct 13 00:10:07.182980 kubelet[2200]: E1013 00:10:07.182915 2200 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 00:10:07.183313 kubelet[2200]: I1013 00:10:07.183236 2200 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:10:07.183313 kubelet[2200]: I1013 00:10:07.183257 2200 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:10:07.183825 kubelet[2200]: I1013 00:10:07.183659 2200 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:10:07.184842 kubelet[2200]: E1013 00:10:07.184799 2200 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 00:10:07.184908 kubelet[2200]: E1013 00:10:07.184892 2200 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 00:10:07.285709 kubelet[2200]: I1013 00:10:07.285643 2200 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:10:07.286197 kubelet[2200]: E1013 00:10:07.286157 2200 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Oct 13 00:10:07.344876 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 00:10:07.365167 kubelet[2200]: E1013 00:10:07.365124 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:07.368238 systemd[1]: Created slice kubepods-burstable-poddb0386a6d9b47f0a6dc4fcdd682da24f.slice - libcontainer container kubepods-burstable-poddb0386a6d9b47f0a6dc4fcdd682da24f.slice. Oct 13 00:10:07.370215 kubelet[2200]: E1013 00:10:07.370181 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:07.372072 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 13 00:10:07.374583 kubelet[2200]: E1013 00:10:07.374549 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:07.374642 kubelet[2200]: I1013 00:10:07.374598 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:07.374673 kubelet[2200]: I1013 00:10:07.374653 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:07.374729 kubelet[2200]: I1013 00:10:07.374690 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:07.374790 kubelet[2200]: I1013 00:10:07.374728 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:07.374790 kubelet[2200]: 
I1013 00:10:07.374757 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:07.374841 kubelet[2200]: I1013 00:10:07.374801 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0386a6d9b47f0a6dc4fcdd682da24f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db0386a6d9b47f0a6dc4fcdd682da24f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:07.374841 kubelet[2200]: I1013 00:10:07.374819 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0386a6d9b47f0a6dc4fcdd682da24f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db0386a6d9b47f0a6dc4fcdd682da24f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:07.374893 kubelet[2200]: I1013 00:10:07.374851 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0386a6d9b47f0a6dc4fcdd682da24f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db0386a6d9b47f0a6dc4fcdd682da24f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:07.374893 kubelet[2200]: I1013 00:10:07.374866 2200 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:07.487789 kubelet[2200]: I1013 00:10:07.487721 2200 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:10:07.488163 kubelet[2200]: E1013 00:10:07.488130 2200 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Oct 13 00:10:07.578428 kubelet[2200]: E1013 00:10:07.578358 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Oct 13 00:10:07.670489 containerd[1473]: time="2025-10-13T00:10:07.670332416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:07.674138 containerd[1473]: time="2025-10-13T00:10:07.674101990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db0386a6d9b47f0a6dc4fcdd682da24f,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:07.677884 containerd[1473]: time="2025-10-13T00:10:07.677824844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:07.792147 kubelet[2200]: E1013 00:10:07.792102 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 00:10:07.805385 kubelet[2200]: E1013 00:10:07.805335 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 00:10:07.889524 kubelet[2200]: I1013 00:10:07.889475 2200 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:10:07.890092 kubelet[2200]: E1013 00:10:07.889869 2200 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Oct 13 00:10:07.953926 kubelet[2200]: E1013 00:10:07.953702 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 00:10:08.204206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63850001.mount: Deactivated successfully. Oct 13 00:10:08.211366 containerd[1473]: time="2025-10-13T00:10:08.211315866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:10:08.214698 containerd[1473]: time="2025-10-13T00:10:08.214648392Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 13 00:10:08.215659 containerd[1473]: time="2025-10-13T00:10:08.215626597Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:10:08.217535 containerd[1473]: time="2025-10-13T00:10:08.217497582Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:10:08.218496 containerd[1473]: time="2025-10-13T00:10:08.218436011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 13 00:10:08.219343 containerd[1473]: time="2025-10-13T00:10:08.219316137Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 00:10:08.219974 containerd[1473]: time="2025-10-13T00:10:08.219942888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 13 00:10:08.220926 containerd[1473]: time="2025-10-13T00:10:08.220900733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 
00:10:08.222505 containerd[1473]: time="2025-10-13T00:10:08.222467626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.013065ms" Oct 13 00:10:08.223461 containerd[1473]: time="2025-10-13T00:10:08.223421364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.500556ms" Oct 13 00:10:08.227972 containerd[1473]: time="2025-10-13T00:10:08.227938650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.754402ms" Oct 13 00:10:08.340251 containerd[1473]: time="2025-10-13T00:10:08.340135428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:08.340373 containerd[1473]: time="2025-10-13T00:10:08.340264505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:08.340373 containerd[1473]: time="2025-10-13T00:10:08.340299221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:08.341048 containerd[1473]: time="2025-10-13T00:10:08.339625511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:08.341155 containerd[1473]: time="2025-10-13T00:10:08.341103512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:08.342419 containerd[1473]: time="2025-10-13T00:10:08.341090558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:08.343778 containerd[1473]: time="2025-10-13T00:10:08.342480321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:08.343778 containerd[1473]: time="2025-10-13T00:10:08.342587656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:08.343778 containerd[1473]: time="2025-10-13T00:10:08.343059280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:08.343778 containerd[1473]: time="2025-10-13T00:10:08.343106581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:08.343778 containerd[1473]: time="2025-10-13T00:10:08.343121039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:08.343778 containerd[1473]: time="2025-10-13T00:10:08.343193107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:08.368952 systemd[1]: Started cri-containerd-1ee73c6ffc11247f1fb6277fa35412193c3c794e1a29e8fe481f02c64a4e24a2.scope - libcontainer container 1ee73c6ffc11247f1fb6277fa35412193c3c794e1a29e8fe481f02c64a4e24a2. Oct 13 00:10:08.373999 systemd[1]: Started cri-containerd-167a367e6397bf1c0a2c87b8a3675dcfdee3320f466efa985e9f1ee651d5bfb0.scope - libcontainer container 167a367e6397bf1c0a2c87b8a3675dcfdee3320f466efa985e9f1ee651d5bfb0. Oct 13 00:10:08.376779 systemd[1]: Started cri-containerd-a1e6a4dcc4c494faa693ec2ed49a7cbfcc015315cb06632d6b41ea806f8f799e.scope - libcontainer container a1e6a4dcc4c494faa693ec2ed49a7cbfcc015315cb06632d6b41ea806f8f799e. Oct 13 00:10:08.379171 kubelet[2200]: E1013 00:10:08.379124 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" Oct 13 00:10:08.412471 containerd[1473]: time="2025-10-13T00:10:08.412411126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db0386a6d9b47f0a6dc4fcdd682da24f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ee73c6ffc11247f1fb6277fa35412193c3c794e1a29e8fe481f02c64a4e24a2\"" Oct 13 00:10:08.420287 containerd[1473]: time="2025-10-13T00:10:08.420249908Z" level=info msg="CreateContainer within sandbox \"1ee73c6ffc11247f1fb6277fa35412193c3c794e1a29e8fe481f02c64a4e24a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 00:10:08.422152 containerd[1473]: time="2025-10-13T00:10:08.422115933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"167a367e6397bf1c0a2c87b8a3675dcfdee3320f466efa985e9f1ee651d5bfb0\"" Oct 13 00:10:08.431623 containerd[1473]: time="2025-10-13T00:10:08.431588054Z" level=info msg="CreateContainer within sandbox \"167a367e6397bf1c0a2c87b8a3675dcfdee3320f466efa985e9f1ee651d5bfb0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 00:10:08.433276 containerd[1473]: time="2025-10-13T00:10:08.433233207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e6a4dcc4c494faa693ec2ed49a7cbfcc015315cb06632d6b41ea806f8f799e\"" Oct 13 00:10:08.438831 containerd[1473]: time="2025-10-13T00:10:08.438801026Z" level=info msg="CreateContainer within sandbox \"a1e6a4dcc4c494faa693ec2ed49a7cbfcc015315cb06632d6b41ea806f8f799e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 00:10:08.445055 containerd[1473]: time="2025-10-13T00:10:08.445020093Z" level=info msg="CreateContainer within sandbox \"1ee73c6ffc11247f1fb6277fa35412193c3c794e1a29e8fe481f02c64a4e24a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"493964429fb7178c14a6a55f2ac650d163613c16a910cc4028a49ff5cceb7880\"" Oct 13 00:10:08.445485 containerd[1473]: time="2025-10-13T00:10:08.445454025Z" level=info msg="StartContainer for \"493964429fb7178c14a6a55f2ac650d163613c16a910cc4028a49ff5cceb7880\"" Oct 13 
00:10:08.461175 containerd[1473]: time="2025-10-13T00:10:08.460964357Z" level=info msg="CreateContainer within sandbox \"167a367e6397bf1c0a2c87b8a3675dcfdee3320f466efa985e9f1ee651d5bfb0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ba2cadf3e6533f23b38ceb2e12663990ae3aec7993e79e1c8f1605da4c19ae55\"" Oct 13 00:10:08.461985 containerd[1473]: time="2025-10-13T00:10:08.461952591Z" level=info msg="StartContainer for \"ba2cadf3e6533f23b38ceb2e12663990ae3aec7993e79e1c8f1605da4c19ae55\"" Oct 13 00:10:08.465923 containerd[1473]: time="2025-10-13T00:10:08.465878484Z" level=info msg="CreateContainer within sandbox \"a1e6a4dcc4c494faa693ec2ed49a7cbfcc015315cb06632d6b41ea806f8f799e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10114d0b25311af3693c3e6f479a417de86b8adcdb91100208b1f97e87e7e47b\"" Oct 13 00:10:08.466511 containerd[1473]: time="2025-10-13T00:10:08.466447123Z" level=info msg="StartContainer for \"10114d0b25311af3693c3e6f479a417de86b8adcdb91100208b1f97e87e7e47b\"" Oct 13 00:10:08.480923 systemd[1]: Started cri-containerd-493964429fb7178c14a6a55f2ac650d163613c16a910cc4028a49ff5cceb7880.scope - libcontainer container 493964429fb7178c14a6a55f2ac650d163613c16a910cc4028a49ff5cceb7880. Oct 13 00:10:08.489382 kubelet[2200]: E1013 00:10:08.489341 2200 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 00:10:08.495916 systemd[1]: Started cri-containerd-ba2cadf3e6533f23b38ceb2e12663990ae3aec7993e79e1c8f1605da4c19ae55.scope - libcontainer container ba2cadf3e6533f23b38ceb2e12663990ae3aec7993e79e1c8f1605da4c19ae55. Oct 13 00:10:08.500042 systemd[1]: Started cri-containerd-10114d0b25311af3693c3e6f479a417de86b8adcdb91100208b1f97e87e7e47b.scope - libcontainer container 10114d0b25311af3693c3e6f479a417de86b8adcdb91100208b1f97e87e7e47b. 
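
The sandbox and container lifecycle above ("RunPodSandbox ... returns sandbox id", "CreateContainer within sandbox", "StartContainer") is the kubelet driving containerd over the CRI. A rough sketch of that call sequence against the CRI v1 gRPC API follows; the socket path, pod UID and kube-apiserver image tag are illustrative and not taken from the log. In the log the three static-pod sandboxes are brought up concurrently; the sketch shows a single sequential pass only.

// cri_startpod.go: sketch of RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Minimal sandbox config; a real kubelet fills in log dirs, linux options, etc.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-localhost",
			Namespace: "kube-system",
			Uid:       "example-uid", // illustrative, not the UID from the log
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.34.1"}, // illustrative tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
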
Oct 13 00:10:08.536956 containerd[1473]: time="2025-10-13T00:10:08.536454846Z" level=info msg="StartContainer for \"493964429fb7178c14a6a55f2ac650d163613c16a910cc4028a49ff5cceb7880\" returns successfully" Oct 13 00:10:08.549531 containerd[1473]: time="2025-10-13T00:10:08.549085790Z" level=info msg="StartContainer for \"ba2cadf3e6533f23b38ceb2e12663990ae3aec7993e79e1c8f1605da4c19ae55\" returns successfully" Oct 13 00:10:08.555262 containerd[1473]: time="2025-10-13T00:10:08.555222520Z" level=info msg="StartContainer for \"10114d0b25311af3693c3e6f479a417de86b8adcdb91100208b1f97e87e7e47b\" returns successfully" Oct 13 00:10:08.692247 kubelet[2200]: I1013 00:10:08.692215 2200 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:10:09.037273 kubelet[2200]: E1013 00:10:09.037240 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:09.039278 kubelet[2200]: E1013 00:10:09.039259 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:09.041363 kubelet[2200]: E1013 00:10:09.041344 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:10.058181 kubelet[2200]: E1013 00:10:10.057976 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:10.059133 kubelet[2200]: E1013 00:10:10.059037 2200 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 00:10:10.842073 kubelet[2200]: E1013 00:10:10.841940 2200 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 00:10:10.892497 kubelet[2200]: E1013 00:10:10.892356 2200 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186de471d742cc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 00:10:06.96564848 +0000 UTC m=+1.358078954,LastTimestamp:2025-10-13 00:10:06.96564848 +0000 UTC m=+1.358078954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 00:10:10.966068 kubelet[2200]: I1013 00:10:10.966008 2200 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 00:10:10.966068 kubelet[2200]: E1013 00:10:10.966056 2200 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 00:10:10.976834 kubelet[2200]: I1013 00:10:10.976776 2200 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:10.993785 kubelet[2200]: E1013 00:10:10.993711 2200 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:10.993785 kubelet[2200]: I1013 00:10:10.993746 2200 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:10.997048 kubelet[2200]: E1013 00:10:10.996736 2200 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:10.997048 kubelet[2200]: I1013 00:10:10.996756 2200 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:10.998302 kubelet[2200]: E1013 00:10:10.998252 2200 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:11.054832 kubelet[2200]: I1013 00:10:11.054780 2200 apiserver.go:52] "Watching apiserver" Oct 13 00:10:11.072682 kubelet[2200]: I1013 00:10:11.072639 2200 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 00:10:12.689652 kubelet[2200]: I1013 00:10:12.689607 2200 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:12.888041 systemd[1]: Reload requested from client PID 2489 ('systemctl') (unit session-7.scope)... Oct 13 00:10:12.888063 systemd[1]: Reloading... Oct 13 00:10:12.974893 zram_generator::config[2533]: No configuration found. Oct 13 00:10:13.107001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 13 00:10:13.228138 systemd[1]: Reloading finished in 339 ms. Oct 13 00:10:13.260556 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:10:13.273400 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 00:10:13.273797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:10:13.273871 systemd[1]: kubelet.service: Consumed 1.414s CPU time, 130.1M memory peak. Oct 13 00:10:13.284988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:10:13.498747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:10:13.504055 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 00:10:13.557298 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 00:10:13.557298 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
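
Both kubelet starts warn that --volume-plugin-dir should move into the file passed to --config. A minimal sketch of such a KubeletConfiguration built with the v1beta1 types and printed as YAML: staticPodPath, cgroupDriver and the flexvolume directory match values visible elsewhere in this log, everything else is left at its default. Pointing the kubelet at the resulting file via --config removes the need for the deprecated flag.

// kubelet_config.go: sketch of a config file replacing --volume-plugin-dir.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			Kind:       "KubeletConfiguration",
			APIVersion: "kubelet.config.k8s.io/v1beta1",
		},
		StaticPodPath: "/etc/kubernetes/manifests", // "Adding static pod path" above
		CgroupDriver:  "systemd",                   // CgroupDriver from the node config dump above
		// Replaces the deprecated --volume-plugin-dir flag; path from the probe.go warning above.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
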
Oct 13 00:10:13.557723 kubelet[2579]: I1013 00:10:13.557376 2579 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 00:10:13.565811 kubelet[2579]: I1013 00:10:13.564877 2579 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 00:10:13.565811 kubelet[2579]: I1013 00:10:13.564904 2579 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 00:10:13.565811 kubelet[2579]: I1013 00:10:13.564937 2579 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 00:10:13.565811 kubelet[2579]: I1013 00:10:13.564944 2579 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 00:10:13.565811 kubelet[2579]: I1013 00:10:13.565133 2579 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 00:10:13.566533 kubelet[2579]: I1013 00:10:13.566497 2579 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 00:10:13.568507 kubelet[2579]: I1013 00:10:13.568487 2579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 00:10:13.573997 kubelet[2579]: E1013 00:10:13.573961 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 13 00:10:13.574129 kubelet[2579]: I1013 00:10:13.574027 2579 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 13 00:10:13.579457 kubelet[2579]: I1013 00:10:13.579422 2579 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 00:10:13.579714 kubelet[2579]: I1013 00:10:13.579664 2579 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 00:10:13.579925 kubelet[2579]: I1013 00:10:13.579698 2579 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 00:10:13.580036 kubelet[2579]: I1013 00:10:13.579935 2579 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 00:10:13.580036 kubelet[2579]: I1013 00:10:13.579945 2579 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 00:10:13.580036 kubelet[2579]: I1013 00:10:13.579978 2579 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 00:10:13.580697 kubelet[2579]: I1013 00:10:13.580679 2579 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:10:13.581668 kubelet[2579]: I1013 00:10:13.580941 2579 kubelet.go:475] "Attempting to sync node with API server" Oct 13 00:10:13.581668 kubelet[2579]: I1013 00:10:13.580969 2579 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 00:10:13.581668 kubelet[2579]: I1013 00:10:13.581008 2579 kubelet.go:387] "Adding apiserver pod source" Oct 13 00:10:13.581668 kubelet[2579]: I1013 00:10:13.581037 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 00:10:13.582073 kubelet[2579]: I1013 00:10:13.582053 2579 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 13 00:10:13.582656 kubelet[2579]: I1013 00:10:13.582639 2579 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 00:10:13.582734 kubelet[2579]: I1013 00:10:13.582723 2579 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 00:10:13.586037 
kubelet[2579]: I1013 00:10:13.586023 2579 server.go:1262] "Started kubelet" Oct 13 00:10:13.587195 kubelet[2579]: I1013 00:10:13.587148 2579 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 00:10:13.590053 kubelet[2579]: I1013 00:10:13.590035 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 00:10:13.590485 kubelet[2579]: I1013 00:10:13.590467 2579 server.go:310] "Adding debug handlers to kubelet server" Oct 13 00:10:13.599914 kubelet[2579]: I1013 00:10:13.599883 2579 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 00:10:13.600215 kubelet[2579]: I1013 00:10:13.600194 2579 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 00:10:13.603467 kubelet[2579]: I1013 00:10:13.603363 2579 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 00:10:13.607274 kubelet[2579]: I1013 00:10:13.607237 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 00:10:13.610604 kubelet[2579]: I1013 00:10:13.610575 2579 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 00:10:13.613906 kubelet[2579]: I1013 00:10:13.613753 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 00:10:13.614574 kubelet[2579]: I1013 00:10:13.614556 2579 reconciler.go:29] "Reconciler: start to sync state" Oct 13 00:10:13.615802 kubelet[2579]: I1013 00:10:13.614794 2579 factory.go:223] Registration of the systemd container factory successfully Oct 13 00:10:13.615802 kubelet[2579]: I1013 00:10:13.614941 2579 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 00:10:13.617021 kubelet[2579]: I1013 00:10:13.616976 2579 factory.go:223] Registration of the containerd container factory successfully Oct 13 00:10:13.617237 kubelet[2579]: E1013 00:10:13.617209 2579 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 00:10:13.623712 kubelet[2579]: I1013 00:10:13.623587 2579 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 00:10:13.625660 kubelet[2579]: I1013 00:10:13.625638 2579 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 00:10:13.625703 kubelet[2579]: I1013 00:10:13.625667 2579 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 00:10:13.625741 kubelet[2579]: I1013 00:10:13.625707 2579 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 00:10:13.626141 kubelet[2579]: E1013 00:10:13.625756 2579 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:10:13.655892 kubelet[2579]: I1013 00:10:13.655856 2579 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 00:10:13.656113 kubelet[2579]: I1013 00:10:13.656087 2579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 00:10:13.656113 kubelet[2579]: I1013 00:10:13.656112 2579 state_mem.go:36] "Initialized new in-memory state store" Oct 13 00:10:13.656253 kubelet[2579]: I1013 00:10:13.656238 2579 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 00:10:13.656287 kubelet[2579]: I1013 00:10:13.656251 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 00:10:13.656287 kubelet[2579]: I1013 00:10:13.656268 2579 policy_none.go:49] "None policy: Start" Oct 13 00:10:13.656287 kubelet[2579]: I1013 00:10:13.656278 2579 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 00:10:13.656359 kubelet[2579]: I1013 00:10:13.656287 2579 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 00:10:13.656436 kubelet[2579]: I1013 00:10:13.656404 2579 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 00:10:13.656436 kubelet[2579]: I1013 00:10:13.656419 2579 policy_none.go:47] "Start" Oct 13 00:10:13.664049 kubelet[2579]: E1013 00:10:13.663497 2579 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 00:10:13.664049 kubelet[2579]: I1013 00:10:13.663686 2579 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:10:13.664049 kubelet[2579]: I1013 00:10:13.663698 2579 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:10:13.664049 kubelet[2579]: I1013 00:10:13.663945 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:10:13.665612 kubelet[2579]: E1013 00:10:13.665456 2579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 00:10:13.727692 kubelet[2579]: I1013 00:10:13.727642 2579 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:13.727692 kubelet[2579]: I1013 00:10:13.727691 2579 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:13.727927 kubelet[2579]: I1013 00:10:13.727868 2579 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:13.767385 kubelet[2579]: I1013 00:10:13.767253 2579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 00:10:13.815779 kubelet[2579]: I1013 00:10:13.815721 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:13.815870 kubelet[2579]: I1013 00:10:13.815790 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:13.815870 kubelet[2579]: I1013 00:10:13.815819 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:13.815870 kubelet[2579]: I1013 00:10:13.815852 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:13.815870 kubelet[2579]: I1013 00:10:13.815867 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0386a6d9b47f0a6dc4fcdd682da24f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db0386a6d9b47f0a6dc4fcdd682da24f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:13.815982 kubelet[2579]: I1013 00:10:13.815886 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:13.815982 kubelet[2579]: I1013 00:10:13.815903 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:13.815982 
kubelet[2579]: I1013 00:10:13.815916 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0386a6d9b47f0a6dc4fcdd682da24f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db0386a6d9b47f0a6dc4fcdd682da24f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:13.815982 kubelet[2579]: I1013 00:10:13.815931 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0386a6d9b47f0a6dc4fcdd682da24f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db0386a6d9b47f0a6dc4fcdd682da24f\") " pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:13.920589 kubelet[2579]: E1013 00:10:13.919732 2579 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 00:10:13.922212 kubelet[2579]: I1013 00:10:13.922160 2579 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 00:10:13.922348 kubelet[2579]: I1013 00:10:13.922254 2579 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 00:10:13.949704 sudo[2617]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 13 00:10:13.950238 sudo[2617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 13 00:10:14.476949 sudo[2617]: pam_unix(sudo:session): session closed for user root Oct 13 00:10:14.582744 kubelet[2579]: I1013 00:10:14.582695 2579 apiserver.go:52] "Watching apiserver" Oct 13 00:10:14.614183 kubelet[2579]: I1013 00:10:14.614158 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 00:10:14.641582 kubelet[2579]: I1013 00:10:14.641549 2579 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:14.642180 kubelet[2579]: I1013 00:10:14.642161 2579 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:14.648602 kubelet[2579]: E1013 00:10:14.648567 2579 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 00:10:14.649328 kubelet[2579]: E1013 00:10:14.649292 2579 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 00:10:14.661035 kubelet[2579]: I1013 00:10:14.660924 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.660907057 podStartE2EDuration="2.660907057s" podCreationTimestamp="2025-10-13 00:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:10:14.660579043 +0000 UTC m=+1.147884209" watchObservedRunningTime="2025-10-13 00:10:14.660907057 +0000 UTC m=+1.148212223" Oct 13 00:10:14.668531 kubelet[2579]: I1013 00:10:14.668463 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.668443892 podStartE2EDuration="1.668443892s" podCreationTimestamp="2025-10-13 00:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:10:14.667800137 +0000 UTC m=+1.155105313" watchObservedRunningTime="2025-10-13 00:10:14.668443892 +0000 UTC m=+1.155749058" Oct 13 00:10:14.676788 kubelet[2579]: I1013 00:10:14.676632 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.676614304 podStartE2EDuration="1.676614304s" podCreationTimestamp="2025-10-13 00:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:10:14.676516918 +0000 UTC m=+1.163822084" watchObservedRunningTime="2025-10-13 00:10:14.676614304 +0000 UTC m=+1.163919460" Oct 13 00:10:15.755153 sudo[1657]: pam_unix(sudo:session): session closed for user root Oct 13 00:10:15.756692 sshd[1656]: Connection closed by 10.0.0.1 port 40626 Oct 13 00:10:15.757336 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Oct 13 00:10:15.762029 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:40626.service: Deactivated successfully. Oct 13 00:10:15.764821 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 00:10:15.765091 systemd[1]: session-7.scope: Consumed 6.036s CPU time, 259.4M memory peak. Oct 13 00:10:15.766317 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Oct 13 00:10:15.767229 systemd-logind[1463]: Removed session 7. Oct 13 00:10:18.256604 kubelet[2579]: I1013 00:10:18.256562 2579 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 00:10:18.257106 containerd[1473]: time="2025-10-13T00:10:18.256919736Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 00:10:18.258043 kubelet[2579]: I1013 00:10:18.258019 2579 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 00:10:19.170228 systemd[1]: Created slice kubepods-besteffort-podeb0d5687_0b26_4db7_b026_564e66211a58.slice - libcontainer container kubepods-besteffort-podeb0d5687_0b26_4db7_b026_564e66211a58.slice. Oct 13 00:10:19.189582 systemd[1]: Created slice kubepods-burstable-pod00da2c07_26f7_49a6_afc4_85356af3886a.slice - libcontainer container kubepods-burstable-pod00da2c07_26f7_49a6_afc4_85356af3886a.slice. 
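
The "Updating runtime config through cri with podcidr" entry above is the kubelet handing the node's pod CIDR to the runtime via the CRI UpdateRuntimeConfig call; the runtime then waits for a CNI plugin (cilium here) to drop its own config, hence "No cni config template is specified". A sketch of that single call, with the socket path assumed rather than taken from the log:

// cri_podcidr.go: sketch of pushing the pod CIDR to the runtime over CRI.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			// CIDR as reported in the kubelet_network.go entry above.
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
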
Oct 13 00:10:19.247542 kubelet[2579]: I1013 00:10:19.247485 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-etc-cni-netd\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247542 kubelet[2579]: I1013 00:10:19.247532 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-lib-modules\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247542 kubelet[2579]: I1013 00:10:19.247555 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb0d5687-0b26-4db7-b026-564e66211a58-lib-modules\") pod \"kube-proxy-bbxpk\" (UID: \"eb0d5687-0b26-4db7-b026-564e66211a58\") " pod="kube-system/kube-proxy-bbxpk" Oct 13 00:10:19.247782 kubelet[2579]: I1013 00:10:19.247621 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-hostproc\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247782 kubelet[2579]: I1013 00:10:19.247661 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00da2c07-26f7-49a6-afc4-85356af3886a-clustermesh-secrets\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247782 kubelet[2579]: I1013 00:10:19.247681 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-config-path\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247782 kubelet[2579]: I1013 00:10:19.247699 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-net\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247891 kubelet[2579]: I1013 00:10:19.247756 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-kernel\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247891 kubelet[2579]: I1013 00:10:19.247839 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-hubble-tls\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247891 kubelet[2579]: I1013 00:10:19.247874 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-cgroup\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247953 kubelet[2579]: I1013 00:10:19.247911 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-xtables-lock\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247953 kubelet[2579]: I1013 00:10:19.247935 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb0d5687-0b26-4db7-b026-564e66211a58-kube-proxy\") pod \"kube-proxy-bbxpk\" (UID: \"eb0d5687-0b26-4db7-b026-564e66211a58\") " pod="kube-system/kube-proxy-bbxpk" Oct 13 00:10:19.247996 kubelet[2579]: I1013 00:10:19.247953 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-run\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247996 kubelet[2579]: I1013 00:10:19.247975 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpc6x\" (UniqueName: \"kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-kube-api-access-mpc6x\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.247996 kubelet[2579]: I1013 00:10:19.247992 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb0d5687-0b26-4db7-b026-564e66211a58-xtables-lock\") pod \"kube-proxy-bbxpk\" (UID: \"eb0d5687-0b26-4db7-b026-564e66211a58\") " pod="kube-system/kube-proxy-bbxpk" Oct 13 00:10:19.248052 kubelet[2579]: I1013 00:10:19.248007 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9xn\" (UniqueName: \"kubernetes.io/projected/eb0d5687-0b26-4db7-b026-564e66211a58-kube-api-access-mz9xn\") pod \"kube-proxy-bbxpk\" (UID: \"eb0d5687-0b26-4db7-b026-564e66211a58\") " pod="kube-system/kube-proxy-bbxpk" Oct 13 00:10:19.248052 kubelet[2579]: I1013 00:10:19.248022 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-bpf-maps\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.248052 kubelet[2579]: I1013 00:10:19.248034 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cni-path\") pod \"cilium-wcftm\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " pod="kube-system/cilium-wcftm" Oct 13 00:10:19.416268 systemd[1]: Created slice kubepods-besteffort-pod782b5634_85a7_444f_b03d_28a690560c56.slice - libcontainer container kubepods-besteffort-pod782b5634_85a7_444f_b03d_28a690560c56.slice. 
Oct 13 00:10:19.449366 kubelet[2579]: I1013 00:10:19.449174 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/782b5634-85a7-444f-b03d-28a690560c56-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-dlq2v\" (UID: \"782b5634-85a7-444f-b03d-28a690560c56\") " pod="kube-system/cilium-operator-6f9c7c5859-dlq2v" Oct 13 00:10:19.449366 kubelet[2579]: I1013 00:10:19.449230 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmt96\" (UniqueName: \"kubernetes.io/projected/782b5634-85a7-444f-b03d-28a690560c56-kube-api-access-dmt96\") pod \"cilium-operator-6f9c7c5859-dlq2v\" (UID: \"782b5634-85a7-444f-b03d-28a690560c56\") " pod="kube-system/cilium-operator-6f9c7c5859-dlq2v" Oct 13 00:10:19.483062 containerd[1473]: time="2025-10-13T00:10:19.482997460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbxpk,Uid:eb0d5687-0b26-4db7-b026-564e66211a58,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:19.496214 containerd[1473]: time="2025-10-13T00:10:19.496176365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wcftm,Uid:00da2c07-26f7-49a6-afc4-85356af3886a,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:19.516001 containerd[1473]: time="2025-10-13T00:10:19.515245236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:19.516001 containerd[1473]: time="2025-10-13T00:10:19.515312214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:19.516001 containerd[1473]: time="2025-10-13T00:10:19.515332602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:19.516001 containerd[1473]: time="2025-10-13T00:10:19.515440116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:19.529735 containerd[1473]: time="2025-10-13T00:10:19.529454024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:19.529735 containerd[1473]: time="2025-10-13T00:10:19.529535919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:19.529735 containerd[1473]: time="2025-10-13T00:10:19.529548142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:19.529735 containerd[1473]: time="2025-10-13T00:10:19.529638263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:19.538945 systemd[1]: Started cri-containerd-b3862a713d4dab584866e3c1b7e5145daca983034487e79084ee880c20c9841c.scope - libcontainer container b3862a713d4dab584866e3c1b7e5145daca983034487e79084ee880c20c9841c. Oct 13 00:10:19.545519 systemd[1]: Started cri-containerd-2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54.scope - libcontainer container 2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54. 
Oct 13 00:10:19.577803 containerd[1473]: time="2025-10-13T00:10:19.576057456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbxpk,Uid:eb0d5687-0b26-4db7-b026-564e66211a58,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3862a713d4dab584866e3c1b7e5145daca983034487e79084ee880c20c9841c\"" Oct 13 00:10:19.583826 containerd[1473]: time="2025-10-13T00:10:19.583752575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wcftm,Uid:00da2c07-26f7-49a6-afc4-85356af3886a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\"" Oct 13 00:10:19.585695 containerd[1473]: time="2025-10-13T00:10:19.585661223Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 13 00:10:19.588697 containerd[1473]: time="2025-10-13T00:10:19.588666371Z" level=info msg="CreateContainer within sandbox \"b3862a713d4dab584866e3c1b7e5145daca983034487e79084ee880c20c9841c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 00:10:19.609632 containerd[1473]: time="2025-10-13T00:10:19.609560522Z" level=info msg="CreateContainer within sandbox \"b3862a713d4dab584866e3c1b7e5145daca983034487e79084ee880c20c9841c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a5d01a23e1500deed1b56c15acba286f55e9542b1f941d5eb8451e83fb65206\"" Oct 13 00:10:19.610233 containerd[1473]: time="2025-10-13T00:10:19.610194604Z" level=info msg="StartContainer for \"0a5d01a23e1500deed1b56c15acba286f55e9542b1f941d5eb8451e83fb65206\"" Oct 13 00:10:19.644019 systemd[1]: Started cri-containerd-0a5d01a23e1500deed1b56c15acba286f55e9542b1f941d5eb8451e83fb65206.scope - libcontainer container 0a5d01a23e1500deed1b56c15acba286f55e9542b1f941d5eb8451e83fb65206. Oct 13 00:10:19.683082 containerd[1473]: time="2025-10-13T00:10:19.683035178Z" level=info msg="StartContainer for \"0a5d01a23e1500deed1b56c15acba286f55e9542b1f941d5eb8451e83fb65206\" returns successfully" Oct 13 00:10:19.723054 containerd[1473]: time="2025-10-13T00:10:19.722932196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-dlq2v,Uid:782b5634-85a7-444f-b03d-28a690560c56,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:19.750549 containerd[1473]: time="2025-10-13T00:10:19.750213306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:19.750549 containerd[1473]: time="2025-10-13T00:10:19.750312264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:19.750549 containerd[1473]: time="2025-10-13T00:10:19.750327803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:19.750549 containerd[1473]: time="2025-10-13T00:10:19.750436248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:19.775937 systemd[1]: Started cri-containerd-ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6.scope - libcontainer container ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6. 
Oct 13 00:10:19.813410 containerd[1473]: time="2025-10-13T00:10:19.813079489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-dlq2v,Uid:782b5634-85a7-444f-b03d-28a690560c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6\"" Oct 13 00:10:20.210285 update_engine[1468]: I20251013 00:10:20.210151 1468 update_attempter.cc:509] Updating boot flags... Oct 13 00:10:20.240206 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2962) Oct 13 00:10:20.294901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2965) Oct 13 00:10:20.680041 kubelet[2579]: I1013 00:10:20.679436 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bbxpk" podStartSLOduration=2.679420156 podStartE2EDuration="2.679420156s" podCreationTimestamp="2025-10-13 00:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:10:20.679199598 +0000 UTC m=+7.166504774" watchObservedRunningTime="2025-10-13 00:10:20.679420156 +0000 UTC m=+7.166725322" Oct 13 00:10:28.299468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096084877.mount: Deactivated successfully. Oct 13 00:10:30.892460 containerd[1473]: time="2025-10-13T00:10:30.892394934Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:30.893151 containerd[1473]: time="2025-10-13T00:10:30.893103289Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Oct 13 00:10:30.894264 containerd[1473]: time="2025-10-13T00:10:30.894232428Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:30.895914 containerd[1473]: time="2025-10-13T00:10:30.895887360Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.309999496s" Oct 13 00:10:30.895980 containerd[1473]: time="2025-10-13T00:10:30.895916314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 13 00:10:30.896941 containerd[1473]: time="2025-10-13T00:10:30.896912953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 13 00:10:30.901271 containerd[1473]: time="2025-10-13T00:10:30.901234173Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 00:10:30.916082 containerd[1473]: time="2025-10-13T00:10:30.916034803Z" level=info msg="CreateContainer within 
sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\"" Oct 13 00:10:30.916587 containerd[1473]: time="2025-10-13T00:10:30.916531810Z" level=info msg="StartContainer for \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\"" Oct 13 00:10:30.951931 systemd[1]: Started cri-containerd-61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2.scope - libcontainer container 61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2. Oct 13 00:10:30.988307 containerd[1473]: time="2025-10-13T00:10:30.988258241Z" level=info msg="StartContainer for \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\" returns successfully" Oct 13 00:10:30.999408 systemd[1]: cri-containerd-61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2.scope: Deactivated successfully. Oct 13 00:10:31.648576 containerd[1473]: time="2025-10-13T00:10:31.648475487Z" level=info msg="shim disconnected" id=61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2 namespace=k8s.io Oct 13 00:10:31.648576 containerd[1473]: time="2025-10-13T00:10:31.648547673Z" level=warning msg="cleaning up after shim disconnected" id=61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2 namespace=k8s.io Oct 13 00:10:31.648576 containerd[1473]: time="2025-10-13T00:10:31.648559345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:10:31.666448 containerd[1473]: time="2025-10-13T00:10:31.666363719Z" level=warning msg="cleanup warnings time=\"2025-10-13T00:10:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 13 00:10:31.693533 containerd[1473]: time="2025-10-13T00:10:31.693460762Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 00:10:31.710590 containerd[1473]: time="2025-10-13T00:10:31.710508610Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\"" Oct 13 00:10:31.711319 containerd[1473]: time="2025-10-13T00:10:31.711282850Z" level=info msg="StartContainer for \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\"" Oct 13 00:10:31.745979 systemd[1]: Started cri-containerd-fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265.scope - libcontainer container fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265. Oct 13 00:10:31.777865 containerd[1473]: time="2025-10-13T00:10:31.777798043Z" level=info msg="StartContainer for \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\" returns successfully" Oct 13 00:10:31.794940 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 00:10:31.795192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:10:31.795490 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 13 00:10:31.805147 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 13 00:10:31.807997 systemd[1]: cri-containerd-fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265.scope: Deactivated successfully. Oct 13 00:10:31.824486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 00:10:31.827142 containerd[1473]: time="2025-10-13T00:10:31.827082423Z" level=info msg="shim disconnected" id=fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265 namespace=k8s.io Oct 13 00:10:31.827142 containerd[1473]: time="2025-10-13T00:10:31.827136444Z" level=warning msg="cleaning up after shim disconnected" id=fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265 namespace=k8s.io Oct 13 00:10:31.827142 containerd[1473]: time="2025-10-13T00:10:31.827145952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:10:31.912380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2-rootfs.mount: Deactivated successfully. Oct 13 00:10:32.602128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488490552.mount: Deactivated successfully. Oct 13 00:10:32.699676 containerd[1473]: time="2025-10-13T00:10:32.699612931Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 00:10:32.718526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113115357.mount: Deactivated successfully. Oct 13 00:10:32.720119 containerd[1473]: time="2025-10-13T00:10:32.720075555Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\"" Oct 13 00:10:32.723504 containerd[1473]: time="2025-10-13T00:10:32.723462829Z" level=info msg="StartContainer for \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\"" Oct 13 00:10:32.757998 systemd[1]: Started cri-containerd-b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5.scope - libcontainer container b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5. Oct 13 00:10:32.795338 containerd[1473]: time="2025-10-13T00:10:32.795289494Z" level=info msg="StartContainer for \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\" returns successfully" Oct 13 00:10:32.796597 systemd[1]: cri-containerd-b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5.scope: Deactivated successfully. 
Oct 13 00:10:32.897507 containerd[1473]: time="2025-10-13T00:10:32.897177262Z" level=info msg="shim disconnected" id=b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5 namespace=k8s.io Oct 13 00:10:32.897507 containerd[1473]: time="2025-10-13T00:10:32.897232036Z" level=warning msg="cleaning up after shim disconnected" id=b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5 namespace=k8s.io Oct 13 00:10:32.897507 containerd[1473]: time="2025-10-13T00:10:32.897240321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:10:33.217404 containerd[1473]: time="2025-10-13T00:10:33.217280697Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:33.218403 containerd[1473]: time="2025-10-13T00:10:33.218371613Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Oct 13 00:10:33.219568 containerd[1473]: time="2025-10-13T00:10:33.219544973Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:10:33.220952 containerd[1473]: time="2025-10-13T00:10:33.220894035Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.323939784s" Oct 13 00:10:33.220952 containerd[1473]: time="2025-10-13T00:10:33.220949911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 13 00:10:33.226116 containerd[1473]: time="2025-10-13T00:10:33.226088672Z" level=info msg="CreateContainer within sandbox \"ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 13 00:10:33.238240 containerd[1473]: time="2025-10-13T00:10:33.238212544Z" level=info msg="CreateContainer within sandbox \"ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\"" Oct 13 00:10:33.238709 containerd[1473]: time="2025-10-13T00:10:33.238670688Z" level=info msg="StartContainer for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\"" Oct 13 00:10:33.276934 systemd[1]: Started cri-containerd-fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8.scope - libcontainer container fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8. 
Oct 13 00:10:33.305253 containerd[1473]: time="2025-10-13T00:10:33.305214661Z" level=info msg="StartContainer for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" returns successfully" Oct 13 00:10:33.715311 containerd[1473]: time="2025-10-13T00:10:33.715190417Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 13 00:10:33.739793 containerd[1473]: time="2025-10-13T00:10:33.738081065Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\"" Oct 13 00:10:33.742049 containerd[1473]: time="2025-10-13T00:10:33.741992133Z" level=info msg="StartContainer for \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\"" Oct 13 00:10:33.793892 systemd[1]: Started cri-containerd-0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c.scope - libcontainer container 0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c. Oct 13 00:10:33.834963 systemd[1]: cri-containerd-0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c.scope: Deactivated successfully. Oct 13 00:10:33.841285 containerd[1473]: time="2025-10-13T00:10:33.841125278Z" level=info msg="StartContainer for \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\" returns successfully" Oct 13 00:10:34.373107 containerd[1473]: time="2025-10-13T00:10:34.373022586Z" level=info msg="shim disconnected" id=0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c namespace=k8s.io Oct 13 00:10:34.373107 containerd[1473]: time="2025-10-13T00:10:34.373080254Z" level=warning msg="cleaning up after shim disconnected" id=0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c namespace=k8s.io Oct 13 00:10:34.373107 containerd[1473]: time="2025-10-13T00:10:34.373088560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:10:34.725163 containerd[1473]: time="2025-10-13T00:10:34.725110300Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 13 00:10:34.736790 kubelet[2579]: I1013 00:10:34.736653 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-dlq2v" podStartSLOduration=2.329242833 podStartE2EDuration="15.736626983s" podCreationTimestamp="2025-10-13 00:10:19 +0000 UTC" firstStartedPulling="2025-10-13 00:10:19.814338146 +0000 UTC m=+6.301643312" lastFinishedPulling="2025-10-13 00:10:33.221722296 +0000 UTC m=+19.709027462" observedRunningTime="2025-10-13 00:10:33.765903685 +0000 UTC m=+20.253208861" watchObservedRunningTime="2025-10-13 00:10:34.736626983 +0000 UTC m=+21.223932159" Oct 13 00:10:34.748161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157436666.mount: Deactivated successfully. 
Oct 13 00:10:34.749255 containerd[1473]: time="2025-10-13T00:10:34.749174096Z" level=info msg="CreateContainer within sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\"" Oct 13 00:10:34.751622 containerd[1473]: time="2025-10-13T00:10:34.749912708Z" level=info msg="StartContainer for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\"" Oct 13 00:10:34.797447 systemd[1]: Started cri-containerd-7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b.scope - libcontainer container 7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b. Oct 13 00:10:34.845838 containerd[1473]: time="2025-10-13T00:10:34.845725027Z" level=info msg="StartContainer for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" returns successfully" Oct 13 00:10:35.064899 kubelet[2579]: I1013 00:10:35.064663 2579 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 00:10:35.134372 systemd[1]: Created slice kubepods-burstable-pod372a32be_866f_4ef6_85e3_1b09b2a8e6c2.slice - libcontainer container kubepods-burstable-pod372a32be_866f_4ef6_85e3_1b09b2a8e6c2.slice. Oct 13 00:10:35.151574 systemd[1]: Created slice kubepods-burstable-podd67e86ee_34fa_43b1_8490_edc5c0ec3df3.slice - libcontainer container kubepods-burstable-podd67e86ee_34fa_43b1_8490_edc5c0ec3df3.slice. Oct 13 00:10:35.156516 kubelet[2579]: I1013 00:10:35.156418 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d67e86ee-34fa-43b1-8490-edc5c0ec3df3-config-volume\") pod \"coredns-66bc5c9577-252mw\" (UID: \"d67e86ee-34fa-43b1-8490-edc5c0ec3df3\") " pod="kube-system/coredns-66bc5c9577-252mw" Oct 13 00:10:35.156516 kubelet[2579]: I1013 00:10:35.156488 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372a32be-866f-4ef6-85e3-1b09b2a8e6c2-config-volume\") pod \"coredns-66bc5c9577-mvsnm\" (UID: \"372a32be-866f-4ef6-85e3-1b09b2a8e6c2\") " pod="kube-system/coredns-66bc5c9577-mvsnm" Oct 13 00:10:35.156858 kubelet[2579]: I1013 00:10:35.156522 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g46q6\" (UniqueName: \"kubernetes.io/projected/d67e86ee-34fa-43b1-8490-edc5c0ec3df3-kube-api-access-g46q6\") pod \"coredns-66bc5c9577-252mw\" (UID: \"d67e86ee-34fa-43b1-8490-edc5c0ec3df3\") " pod="kube-system/coredns-66bc5c9577-252mw" Oct 13 00:10:35.156858 kubelet[2579]: I1013 00:10:35.156576 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c48jn\" (UniqueName: \"kubernetes.io/projected/372a32be-866f-4ef6-85e3-1b09b2a8e6c2-kube-api-access-c48jn\") pod \"coredns-66bc5c9577-mvsnm\" (UID: \"372a32be-866f-4ef6-85e3-1b09b2a8e6c2\") " pod="kube-system/coredns-66bc5c9577-mvsnm" Oct 13 00:10:35.449309 containerd[1473]: time="2025-10-13T00:10:35.448792449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mvsnm,Uid:372a32be-866f-4ef6-85e3-1b09b2a8e6c2,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:35.459408 containerd[1473]: time="2025-10-13T00:10:35.459336676Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-252mw,Uid:d67e86ee-34fa-43b1-8490-edc5c0ec3df3,Namespace:kube-system,Attempt:0,}" Oct 13 00:10:35.797447 kubelet[2579]: I1013 00:10:35.797232 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wcftm" podStartSLOduration=5.485716938 podStartE2EDuration="16.797200159s" podCreationTimestamp="2025-10-13 00:10:19 +0000 UTC" firstStartedPulling="2025-10-13 00:10:19.585296361 +0000 UTC m=+6.072601517" lastFinishedPulling="2025-10-13 00:10:30.896779572 +0000 UTC m=+17.384084738" observedRunningTime="2025-10-13 00:10:35.7942003 +0000 UTC m=+22.281505476" watchObservedRunningTime="2025-10-13 00:10:35.797200159 +0000 UTC m=+22.284505325" Oct 13 00:10:37.639903 systemd-networkd[1384]: cilium_host: Link UP Oct 13 00:10:37.640083 systemd-networkd[1384]: cilium_net: Link UP Oct 13 00:10:37.640270 systemd-networkd[1384]: cilium_net: Gained carrier Oct 13 00:10:37.640455 systemd-networkd[1384]: cilium_host: Gained carrier Oct 13 00:10:37.753369 systemd-networkd[1384]: cilium_vxlan: Link UP Oct 13 00:10:37.753379 systemd-networkd[1384]: cilium_vxlan: Gained carrier Oct 13 00:10:37.973790 kernel: NET: Registered PF_ALG protocol family Oct 13 00:10:38.268951 systemd-networkd[1384]: cilium_host: Gained IPv6LL Oct 13 00:10:38.525938 systemd-networkd[1384]: cilium_net: Gained IPv6LL Oct 13 00:10:38.659143 systemd-networkd[1384]: lxc_health: Link UP Oct 13 00:10:38.664932 systemd-networkd[1384]: lxc_health: Gained carrier Oct 13 00:10:39.034908 kernel: eth0: renamed from tmp1a0b1 Oct 13 00:10:39.034054 systemd-networkd[1384]: lxc2d0a0613e94d: Link UP Oct 13 00:10:39.046822 systemd-networkd[1384]: lxcd0d0edf893e8: Link UP Oct 13 00:10:39.053306 systemd-networkd[1384]: lxc2d0a0613e94d: Gained carrier Oct 13 00:10:39.055456 kernel: eth0: renamed from tmp164b4 Oct 13 00:10:39.061603 systemd-networkd[1384]: lxcd0d0edf893e8: Gained carrier Oct 13 00:10:39.741849 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL Oct 13 00:10:39.749407 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:58630.service - OpenSSH per-connection server daemon (10.0.0.1:58630). Oct 13 00:10:39.801326 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 58630 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:10:39.803831 sshd-session[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:10:39.809022 systemd-logind[1463]: New session 8 of user core. Oct 13 00:10:39.813906 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 00:10:39.950866 sshd[3818]: Connection closed by 10.0.0.1 port 58630 Oct 13 00:10:39.951230 sshd-session[3816]: pam_unix(sshd:session): session closed for user core Oct 13 00:10:39.955751 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:58630.service: Deactivated successfully. Oct 13 00:10:39.958112 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 00:10:39.958827 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Oct 13 00:10:39.959802 systemd-logind[1463]: Removed session 8. Oct 13 00:10:40.252984 systemd-networkd[1384]: lxcd0d0edf893e8: Gained IPv6LL Oct 13 00:10:40.381035 systemd-networkd[1384]: lxc_health: Gained IPv6LL Oct 13 00:10:41.021042 systemd-networkd[1384]: lxc2d0a0613e94d: Gained IPv6LL Oct 13 00:10:42.867320 containerd[1473]: time="2025-10-13T00:10:42.867212319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:42.867320 containerd[1473]: time="2025-10-13T00:10:42.867270419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:42.867320 containerd[1473]: time="2025-10-13T00:10:42.867282171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:42.867953 containerd[1473]: time="2025-10-13T00:10:42.867362472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:42.868046 containerd[1473]: time="2025-10-13T00:10:42.867930421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:10:42.868046 containerd[1473]: time="2025-10-13T00:10:42.867991245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:10:42.868046 containerd[1473]: time="2025-10-13T00:10:42.868001985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:42.868145 containerd[1473]: time="2025-10-13T00:10:42.868088517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:10:42.900953 systemd[1]: Started cri-containerd-164b491b4ac5730ecd203ede4b463935700874fc8b332571d7cec2695518e15c.scope - libcontainer container 164b491b4ac5730ecd203ede4b463935700874fc8b332571d7cec2695518e15c. Oct 13 00:10:42.902610 systemd[1]: Started cri-containerd-1a0b161a8d477d8f885dd514ba4bd814d64c3331f43d613d4819120a6b04d549.scope - libcontainer container 1a0b161a8d477d8f885dd514ba4bd814d64c3331f43d613d4819120a6b04d549. 
Oct 13 00:10:42.915695 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:10:42.919313 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 00:10:42.942683 containerd[1473]: time="2025-10-13T00:10:42.942563933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mvsnm,Uid:372a32be-866f-4ef6-85e3-1b09b2a8e6c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"164b491b4ac5730ecd203ede4b463935700874fc8b332571d7cec2695518e15c\"" Oct 13 00:10:42.949607 containerd[1473]: time="2025-10-13T00:10:42.949566681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-252mw,Uid:d67e86ee-34fa-43b1-8490-edc5c0ec3df3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a0b161a8d477d8f885dd514ba4bd814d64c3331f43d613d4819120a6b04d549\"" Oct 13 00:10:42.954965 containerd[1473]: time="2025-10-13T00:10:42.954838626Z" level=info msg="CreateContainer within sandbox \"164b491b4ac5730ecd203ede4b463935700874fc8b332571d7cec2695518e15c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:10:42.957973 containerd[1473]: time="2025-10-13T00:10:42.957953383Z" level=info msg="CreateContainer within sandbox \"1a0b161a8d477d8f885dd514ba4bd814d64c3331f43d613d4819120a6b04d549\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:10:42.973715 containerd[1473]: time="2025-10-13T00:10:42.973675790Z" level=info msg="CreateContainer within sandbox \"164b491b4ac5730ecd203ede4b463935700874fc8b332571d7cec2695518e15c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"628c6ea1007976ba21a3d8ed108882658c3f9c08502bc66c645453594bdca596\"" Oct 13 00:10:42.974334 containerd[1473]: time="2025-10-13T00:10:42.974270418Z" level=info msg="StartContainer for \"628c6ea1007976ba21a3d8ed108882658c3f9c08502bc66c645453594bdca596\"" Oct 13 00:10:42.978089 containerd[1473]: time="2025-10-13T00:10:42.978028546Z" level=info msg="CreateContainer within sandbox \"1a0b161a8d477d8f885dd514ba4bd814d64c3331f43d613d4819120a6b04d549\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66052df595d6f4cf0c0174fe269708a0edaba1d3a6d7898f0717684e649db3fb\"" Oct 13 00:10:42.978989 containerd[1473]: time="2025-10-13T00:10:42.978964856Z" level=info msg="StartContainer for \"66052df595d6f4cf0c0174fe269708a0edaba1d3a6d7898f0717684e649db3fb\"" Oct 13 00:10:43.009906 systemd[1]: Started cri-containerd-628c6ea1007976ba21a3d8ed108882658c3f9c08502bc66c645453594bdca596.scope - libcontainer container 628c6ea1007976ba21a3d8ed108882658c3f9c08502bc66c645453594bdca596. Oct 13 00:10:43.012968 systemd[1]: Started cri-containerd-66052df595d6f4cf0c0174fe269708a0edaba1d3a6d7898f0717684e649db3fb.scope - libcontainer container 66052df595d6f4cf0c0174fe269708a0edaba1d3a6d7898f0717684e649db3fb. 
Oct 13 00:10:43.046081 containerd[1473]: time="2025-10-13T00:10:43.045990038Z" level=info msg="StartContainer for \"66052df595d6f4cf0c0174fe269708a0edaba1d3a6d7898f0717684e649db3fb\" returns successfully" Oct 13 00:10:43.053809 containerd[1473]: time="2025-10-13T00:10:43.053748075Z" level=info msg="StartContainer for \"628c6ea1007976ba21a3d8ed108882658c3f9c08502bc66c645453594bdca596\" returns successfully" Oct 13 00:10:43.772081 kubelet[2579]: I1013 00:10:43.772004 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-252mw" podStartSLOduration=24.771986827 podStartE2EDuration="24.771986827s" podCreationTimestamp="2025-10-13 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:10:43.771572117 +0000 UTC m=+30.258877313" watchObservedRunningTime="2025-10-13 00:10:43.771986827 +0000 UTC m=+30.259292013" Oct 13 00:10:43.784833 kubelet[2579]: I1013 00:10:43.784736 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mvsnm" podStartSLOduration=24.784716852 podStartE2EDuration="24.784716852s" podCreationTimestamp="2025-10-13 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:10:43.783921667 +0000 UTC m=+30.271226853" watchObservedRunningTime="2025-10-13 00:10:43.784716852 +0000 UTC m=+30.272022018" Oct 13 00:10:43.872662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285087924.mount: Deactivated successfully. Oct 13 00:10:44.967534 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:58634.service - OpenSSH per-connection server daemon (10.0.0.1:58634). Oct 13 00:10:45.020194 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 58634 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:10:45.022513 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:10:45.027303 systemd-logind[1463]: New session 9 of user core. Oct 13 00:10:45.042034 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 00:10:45.198329 sshd[4014]: Connection closed by 10.0.0.1 port 58634 Oct 13 00:10:45.198821 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Oct 13 00:10:45.203291 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:58634.service: Deactivated successfully. Oct 13 00:10:45.205746 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 00:10:45.206584 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Oct 13 00:10:45.207595 systemd-logind[1463]: Removed session 9. Oct 13 00:10:50.217319 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628). Oct 13 00:10:50.260879 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:10:50.262612 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:10:50.267295 systemd-logind[1463]: New session 10 of user core. Oct 13 00:10:50.288952 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 13 00:10:50.408930 sshd[4033]: Connection closed by 10.0.0.1 port 37628 Oct 13 00:10:50.409399 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Oct 13 00:10:50.413447 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:37628.service: Deactivated successfully. Oct 13 00:10:50.416075 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 00:10:50.416925 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Oct 13 00:10:50.418580 systemd-logind[1463]: Removed session 10. Oct 13 00:10:55.435407 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:37640.service - OpenSSH per-connection server daemon (10.0.0.1:37640). Oct 13 00:10:55.509268 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 37640 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:10:55.510955 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:10:55.516240 systemd-logind[1463]: New session 11 of user core. Oct 13 00:10:55.530027 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 00:10:55.663621 sshd[4049]: Connection closed by 10.0.0.1 port 37640 Oct 13 00:10:55.664033 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Oct 13 00:10:55.668670 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:37640.service: Deactivated successfully. Oct 13 00:10:55.671368 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 00:10:55.672211 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Oct 13 00:10:55.673329 systemd-logind[1463]: Removed session 11. Oct 13 00:11:00.679097 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:36166.service - OpenSSH per-connection server daemon (10.0.0.1:36166). Oct 13 00:11:00.722408 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 36166 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:00.724126 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:00.728623 systemd-logind[1463]: New session 12 of user core. Oct 13 00:11:00.736894 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 00:11:00.855608 sshd[4065]: Connection closed by 10.0.0.1 port 36166 Oct 13 00:11:00.856041 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:00.870751 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:36166.service: Deactivated successfully. Oct 13 00:11:00.873129 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 00:11:00.875020 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Oct 13 00:11:00.884079 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:36172.service - OpenSSH per-connection server daemon (10.0.0.1:36172). Oct 13 00:11:00.885516 systemd-logind[1463]: Removed session 12. Oct 13 00:11:00.923652 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 36172 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:00.925170 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:00.929492 systemd-logind[1463]: New session 13 of user core. Oct 13 00:11:00.938915 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 13 00:11:01.092899 sshd[4081]: Connection closed by 10.0.0.1 port 36172 Oct 13 00:11:01.093373 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:01.103128 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:36172.service: Deactivated successfully. Oct 13 00:11:01.105391 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 00:11:01.106693 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Oct 13 00:11:01.117218 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:36178.service - OpenSSH per-connection server daemon (10.0.0.1:36178). Oct 13 00:11:01.119520 systemd-logind[1463]: Removed session 13. Oct 13 00:11:01.162445 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 36178 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:01.164120 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:01.168600 systemd-logind[1463]: New session 14 of user core. Oct 13 00:11:01.178891 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 00:11:01.307152 sshd[4095]: Connection closed by 10.0.0.1 port 36178 Oct 13 00:11:01.307529 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:01.311877 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:36178.service: Deactivated successfully. Oct 13 00:11:01.313922 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 00:11:01.314572 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Oct 13 00:11:01.315384 systemd-logind[1463]: Removed session 14. Oct 13 00:11:06.319700 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180). Oct 13 00:11:06.362508 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:06.364042 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:06.368421 systemd-logind[1463]: New session 15 of user core. Oct 13 00:11:06.377904 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 00:11:06.492839 sshd[4110]: Connection closed by 10.0.0.1 port 36180 Oct 13 00:11:06.493251 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:06.497581 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:36180.service: Deactivated successfully. Oct 13 00:11:06.499958 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 00:11:06.500654 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Oct 13 00:11:06.501579 systemd-logind[1463]: Removed session 15. Oct 13 00:11:11.506084 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:53098.service - OpenSSH per-connection server daemon (10.0.0.1:53098). Oct 13 00:11:11.550638 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 53098 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:11.553009 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:11.558086 systemd-logind[1463]: New session 16 of user core. Oct 13 00:11:11.568045 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 13 00:11:11.692920 sshd[4126]: Connection closed by 10.0.0.1 port 53098 Oct 13 00:11:11.693399 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:11.705447 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:53098.service: Deactivated successfully. Oct 13 00:11:11.707335 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 00:11:11.708917 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Oct 13 00:11:11.719118 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:53112.service - OpenSSH per-connection server daemon (10.0.0.1:53112). Oct 13 00:11:11.720545 systemd-logind[1463]: Removed session 16. Oct 13 00:11:11.764799 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 53112 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:11.767042 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:11.771691 systemd-logind[1463]: New session 17 of user core. Oct 13 00:11:11.780902 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 00:11:11.998274 sshd[4141]: Connection closed by 10.0.0.1 port 53112 Oct 13 00:11:11.998751 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:12.016926 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:53112.service: Deactivated successfully. Oct 13 00:11:12.019211 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 00:11:12.021004 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Oct 13 00:11:12.029318 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:53116.service - OpenSSH per-connection server daemon (10.0.0.1:53116). Oct 13 00:11:12.030289 systemd-logind[1463]: Removed session 17. Oct 13 00:11:12.073013 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 53116 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:12.074506 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:12.079032 systemd-logind[1463]: New session 18 of user core. Oct 13 00:11:12.085896 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 00:11:12.552569 sshd[4155]: Connection closed by 10.0.0.1 port 53116 Oct 13 00:11:12.553028 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:12.561744 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:53116.service: Deactivated successfully. Oct 13 00:11:12.564725 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 00:11:12.566636 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Oct 13 00:11:12.573293 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:53126.service - OpenSSH per-connection server daemon (10.0.0.1:53126). Oct 13 00:11:12.576046 systemd-logind[1463]: Removed session 18. Oct 13 00:11:12.616283 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 53126 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:12.617941 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:12.622576 systemd-logind[1463]: New session 19 of user core. Oct 13 00:11:12.631098 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 13 00:11:12.893972 sshd[4173]: Connection closed by 10.0.0.1 port 53126 Oct 13 00:11:12.894241 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:12.905165 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:53126.service: Deactivated successfully. Oct 13 00:11:12.907253 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 00:11:12.909204 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Oct 13 00:11:12.918063 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142). Oct 13 00:11:12.919067 systemd-logind[1463]: Removed session 19. Oct 13 00:11:12.958045 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:12.959796 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:12.964471 systemd-logind[1463]: New session 20 of user core. Oct 13 00:11:12.974944 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 00:11:13.089733 sshd[4186]: Connection closed by 10.0.0.1 port 53142 Oct 13 00:11:13.090136 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:13.094721 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:53142.service: Deactivated successfully. Oct 13 00:11:13.097134 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 00:11:13.097944 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Oct 13 00:11:13.099072 systemd-logind[1463]: Removed session 20. Oct 13 00:11:18.103342 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:45634.service - OpenSSH per-connection server daemon (10.0.0.1:45634). Oct 13 00:11:18.149781 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 45634 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:18.151842 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:18.157402 systemd-logind[1463]: New session 21 of user core. Oct 13 00:11:18.166931 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 00:11:18.287628 sshd[4206]: Connection closed by 10.0.0.1 port 45634 Oct 13 00:11:18.289355 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:18.293450 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:45634.service: Deactivated successfully. Oct 13 00:11:18.295914 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 00:11:18.296661 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Oct 13 00:11:18.297660 systemd-logind[1463]: Removed session 21. Oct 13 00:11:23.301486 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:45642.service - OpenSSH per-connection server daemon (10.0.0.1:45642). Oct 13 00:11:23.353451 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 45642 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:23.355248 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:23.359237 systemd-logind[1463]: New session 22 of user core. Oct 13 00:11:23.368886 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 13 00:11:23.482125 sshd[4226]: Connection closed by 10.0.0.1 port 45642 Oct 13 00:11:23.482523 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:23.486682 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:45642.service: Deactivated successfully. Oct 13 00:11:23.488692 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 00:11:23.489355 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Oct 13 00:11:23.490218 systemd-logind[1463]: Removed session 22. Oct 13 00:11:28.499141 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:36358.service - OpenSSH per-connection server daemon (10.0.0.1:36358). Oct 13 00:11:28.542023 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 36358 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:28.543523 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:28.547981 systemd-logind[1463]: New session 23 of user core. Oct 13 00:11:28.559903 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 00:11:28.668695 sshd[4242]: Connection closed by 10.0.0.1 port 36358 Oct 13 00:11:28.669225 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:28.689024 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:36358.service: Deactivated successfully. Oct 13 00:11:28.691320 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 00:11:28.693369 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Oct 13 00:11:28.709252 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:36364.service - OpenSSH per-connection server daemon (10.0.0.1:36364). Oct 13 00:11:28.710676 systemd-logind[1463]: Removed session 23. Oct 13 00:11:28.752316 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 36364 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:28.754179 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:28.759235 systemd-logind[1463]: New session 24 of user core. Oct 13 00:11:28.768904 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 00:11:30.117815 containerd[1473]: time="2025-10-13T00:11:30.115212405Z" level=info msg="StopContainer for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" with timeout 30 (s)" Oct 13 00:11:30.124733 containerd[1473]: time="2025-10-13T00:11:30.124449718Z" level=info msg="Stop container \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" with signal terminated" Oct 13 00:11:30.145618 systemd[1]: cri-containerd-fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8.scope: Deactivated successfully. 
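The sshd entries above record a run of short-lived sessions (16 through 23) being opened and closed against 10.0.0.99:22, with session 24 opening last. As an aside, the following minimal sketch (not part of the captured journal) pairs each "Accepted publickey ... port N" entry with the matching "Connection closed by ... port N" entry to measure how long each session lasted. It assumes the excerpt is available as plain text with one entry per line, in the format shown here, and the helper name session_durations is invented for illustration.

    import re
    from datetime import datetime

    # Timestamp format used in this excerpt, e.g. "Oct 13 00:11:11.764799" (no year logged).
    TS = "%b %d %H:%M:%S.%f"
    accept_re = re.compile(r"(\w{3} \d+ [\d:.]+) sshd\[\d+\]: Accepted publickey for \w+ from \S+ port (\d+)")
    close_re = re.compile(r"(\w{3} \d+ [\d:.]+) sshd\[\d+\]: Connection closed by \S+ port (\d+)")

    def session_durations(journal_text):
        opened = {}  # source port -> time the key was accepted
        for line in journal_text.splitlines():
            if m := accept_re.search(line):
                opened[m.group(2)] = datetime.strptime(m.group(1), TS)
            elif (m := close_re.search(line)) and m.group(2) in opened:
                start = opened.pop(m.group(2))
                yield m.group(2), (datetime.strptime(m.group(1), TS) - start).total_seconds()

    # Example from the excerpt: port 53112 is accepted at 00:11:11.764799 and
    # closed at 00:11:11.998274, so that session lasted roughly 0.23 seconds.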
Oct 13 00:11:30.155474 containerd[1473]: time="2025-10-13T00:11:30.155416857Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:11:30.157661 containerd[1473]: time="2025-10-13T00:11:30.157637363Z" level=info msg="StopContainer for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" with timeout 2 (s)" Oct 13 00:11:30.157947 containerd[1473]: time="2025-10-13T00:11:30.157927027Z" level=info msg="Stop container \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" with signal terminated" Oct 13 00:11:30.173689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8-rootfs.mount: Deactivated successfully. Oct 13 00:11:30.176117 systemd-networkd[1384]: lxc_health: Link DOWN Oct 13 00:11:30.176127 systemd-networkd[1384]: lxc_health: Lost carrier Oct 13 00:11:30.196186 systemd[1]: cri-containerd-7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b.scope: Deactivated successfully. Oct 13 00:11:30.196548 systemd[1]: cri-containerd-7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b.scope: Consumed 7.374s CPU time, 124M memory peak, 200K read from disk, 13.3M written to disk. Oct 13 00:11:30.225751 containerd[1473]: time="2025-10-13T00:11:30.225649583Z" level=info msg="shim disconnected" id=fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8 namespace=k8s.io Oct 13 00:11:30.225751 containerd[1473]: time="2025-10-13T00:11:30.225738744Z" level=warning msg="cleaning up after shim disconnected" id=fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8 namespace=k8s.io Oct 13 00:11:30.225751 containerd[1473]: time="2025-10-13T00:11:30.225750717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:30.228104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b-rootfs.mount: Deactivated successfully. 
Oct 13 00:11:30.234165 containerd[1473]: time="2025-10-13T00:11:30.234048522Z" level=info msg="shim disconnected" id=7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b namespace=k8s.io Oct 13 00:11:30.234165 containerd[1473]: time="2025-10-13T00:11:30.234102977Z" level=warning msg="cleaning up after shim disconnected" id=7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b namespace=k8s.io Oct 13 00:11:30.234165 containerd[1473]: time="2025-10-13T00:11:30.234112955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:30.248009 containerd[1473]: time="2025-10-13T00:11:30.247966243Z" level=warning msg="cleanup warnings time=\"2025-10-13T00:11:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 13 00:11:30.252392 containerd[1473]: time="2025-10-13T00:11:30.252348472Z" level=info msg="StopContainer for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" returns successfully" Oct 13 00:11:30.255186 containerd[1473]: time="2025-10-13T00:11:30.255124370Z" level=info msg="StopContainer for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" returns successfully" Oct 13 00:11:30.256122 containerd[1473]: time="2025-10-13T00:11:30.256096959Z" level=info msg="StopPodSandbox for \"ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6\"" Oct 13 00:11:30.256995 containerd[1473]: time="2025-10-13T00:11:30.256961382Z" level=info msg="StopPodSandbox for \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\"" Oct 13 00:11:30.258346 containerd[1473]: time="2025-10-13T00:11:30.258303258Z" level=info msg="Container to stop \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:11:30.260459 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6-shm.mount: Deactivated successfully. Oct 13 00:11:30.266312 systemd[1]: cri-containerd-ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6.scope: Deactivated successfully. 
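At this point the journal interleaves three views of the same container shutdowns: systemd deactivating the cri-containerd-<id>.scope units (including the accounting entry "Consumed 7.374s CPU time, 124M memory peak, ..."), containerd reporting "shim disconnected" for the same IDs, and the StopContainer calls returning. A small sketch, under the same plain-text, one-entry-per-line assumption and with an invented helper name, that groups these events by container or sandbox ID:

    import re
    from collections import defaultdict

    # systemd view: scope deactivation and optional resource accounting for a container.
    scope_re = re.compile(r"cri-containerd-([0-9a-f]{64})\.scope: (Deactivated successfully|Consumed [^\n]+)")
    # containerd view: the shim for the same ID going away.
    shim_re = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

    def teardown_events(journal_text):
        events = defaultdict(list)  # container or sandbox ID -> events seen for it
        for m in scope_re.finditer(journal_text):
            events[m.group(1)].append(m.group(2).rstrip("."))
        for m in shim_re.finditer(journal_text):
            events[m.group(1)].append("shim disconnected")
        return events

    # For 7da3759b... this collects the scope deactivation, the "Consumed 7.374s CPU
    # time, 124M memory peak, ..." accounting, and the shim disconnect seen above.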
Oct 13 00:11:30.273714 containerd[1473]: time="2025-10-13T00:11:30.258300293Z" level=info msg="Container to stop \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:11:30.273714 containerd[1473]: time="2025-10-13T00:11:30.273688605Z" level=info msg="Container to stop \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:11:30.273714 containerd[1473]: time="2025-10-13T00:11:30.273699336Z" level=info msg="Container to stop \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:11:30.273714 containerd[1473]: time="2025-10-13T00:11:30.273708232Z" level=info msg="Container to stop \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:11:30.273714 containerd[1473]: time="2025-10-13T00:11:30.273717130Z" level=info msg="Container to stop \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:11:30.279917 systemd[1]: cri-containerd-2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54.scope: Deactivated successfully. Oct 13 00:11:30.302730 containerd[1473]: time="2025-10-13T00:11:30.302553124Z" level=info msg="shim disconnected" id=ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6 namespace=k8s.io Oct 13 00:11:30.302730 containerd[1473]: time="2025-10-13T00:11:30.302611516Z" level=warning msg="cleaning up after shim disconnected" id=ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6 namespace=k8s.io Oct 13 00:11:30.302730 containerd[1473]: time="2025-10-13T00:11:30.302626705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:30.312981 containerd[1473]: time="2025-10-13T00:11:30.312935616Z" level=info msg="shim disconnected" id=2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54 namespace=k8s.io Oct 13 00:11:30.312981 containerd[1473]: time="2025-10-13T00:11:30.312977877Z" level=warning msg="cleaning up after shim disconnected" id=2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54 namespace=k8s.io Oct 13 00:11:30.313184 containerd[1473]: time="2025-10-13T00:11:30.312986944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:30.319935 containerd[1473]: time="2025-10-13T00:11:30.319901976Z" level=info msg="TearDown network for sandbox \"ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6\" successfully" Oct 13 00:11:30.319935 containerd[1473]: time="2025-10-13T00:11:30.319927164Z" level=info msg="StopPodSandbox for \"ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6\" returns successfully" Oct 13 00:11:30.329339 containerd[1473]: time="2025-10-13T00:11:30.329293583Z" level=info msg="TearDown network for sandbox \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" successfully" Oct 13 00:11:30.329339 containerd[1473]: time="2025-10-13T00:11:30.329329953Z" level=info msg="StopPodSandbox for \"2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54\" returns successfully" Oct 13 00:11:30.406244 kubelet[2579]: I1013 00:11:30.406081 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/782b5634-85a7-444f-b03d-28a690560c56-cilium-config-path\") pod \"782b5634-85a7-444f-b03d-28a690560c56\" (UID: \"782b5634-85a7-444f-b03d-28a690560c56\") " Oct 13 00:11:30.406244 kubelet[2579]: I1013 00:11:30.406137 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00da2c07-26f7-49a6-afc4-85356af3886a-clustermesh-secrets\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.406244 kubelet[2579]: I1013 00:11:30.406169 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-config-path\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.406244 kubelet[2579]: I1013 00:11:30.406188 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-run\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.406244 kubelet[2579]: I1013 00:11:30.406206 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpc6x\" (UniqueName: \"kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-kube-api-access-mpc6x\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407579 kubelet[2579]: I1013 00:11:30.407401 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.407579 kubelet[2579]: I1013 00:11:30.407489 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-hostproc\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407579 kubelet[2579]: I1013 00:11:30.407517 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-hubble-tls\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407579 kubelet[2579]: I1013 00:11:30.407533 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-cgroup\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407579 kubelet[2579]: I1013 00:11:30.407548 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-lib-modules\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407579 kubelet[2579]: I1013 00:11:30.407560 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-kernel\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407846 kubelet[2579]: I1013 00:11:30.407573 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-xtables-lock\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407846 kubelet[2579]: I1013 00:11:30.407589 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-bpf-maps\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407846 kubelet[2579]: I1013 00:11:30.407607 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmt96\" (UniqueName: \"kubernetes.io/projected/782b5634-85a7-444f-b03d-28a690560c56-kube-api-access-dmt96\") pod \"782b5634-85a7-444f-b03d-28a690560c56\" (UID: \"782b5634-85a7-444f-b03d-28a690560c56\") " Oct 13 00:11:30.407846 kubelet[2579]: I1013 00:11:30.407620 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-etc-cni-netd\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.407846 kubelet[2579]: I1013 00:11:30.407634 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-net\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 
13 00:11:30.407846 kubelet[2579]: I1013 00:11:30.407693 2579 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cni-path\") pod \"00da2c07-26f7-49a6-afc4-85356af3886a\" (UID: \"00da2c07-26f7-49a6-afc4-85356af3886a\") " Oct 13 00:11:30.408088 kubelet[2579]: I1013 00:11:30.407823 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.408088 kubelet[2579]: I1013 00:11:30.407848 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cni-path" (OuterVolumeSpecName: "cni-path") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.408088 kubelet[2579]: I1013 00:11:30.407876 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-hostproc" (OuterVolumeSpecName: "hostproc") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.410360 kubelet[2579]: I1013 00:11:30.409715 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782b5634-85a7-444f-b03d-28a690560c56-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "782b5634-85a7-444f-b03d-28a690560c56" (UID: "782b5634-85a7-444f-b03d-28a690560c56"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:11:30.410845 kubelet[2579]: I1013 00:11:30.410815 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-kube-api-access-mpc6x" (OuterVolumeSpecName: "kube-api-access-mpc6x") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "kube-api-access-mpc6x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:11:30.410902 kubelet[2579]: I1013 00:11:30.410849 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.410902 kubelet[2579]: I1013 00:11:30.410866 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.410902 kubelet[2579]: I1013 00:11:30.410887 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.410902 kubelet[2579]: I1013 00:11:30.410896 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.411012 kubelet[2579]: I1013 00:11:30.410921 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.411012 kubelet[2579]: I1013 00:11:30.410939 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.411012 kubelet[2579]: I1013 00:11:30.410960 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 13 00:11:30.411464 kubelet[2579]: I1013 00:11:30.411438 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da2c07-26f7-49a6-afc4-85356af3886a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:11:30.411464 kubelet[2579]: I1013 00:11:30.411458 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:11:30.413427 kubelet[2579]: I1013 00:11:30.413395 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00da2c07-26f7-49a6-afc4-85356af3886a" (UID: "00da2c07-26f7-49a6-afc4-85356af3886a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:11:30.413902 kubelet[2579]: I1013 00:11:30.413870 2579 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782b5634-85a7-444f-b03d-28a690560c56-kube-api-access-dmt96" (OuterVolumeSpecName: "kube-api-access-dmt96") pod "782b5634-85a7-444f-b03d-28a690560c56" (UID: "782b5634-85a7-444f-b03d-28a690560c56"). InnerVolumeSpecName "kube-api-access-dmt96". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:11:30.508269 kubelet[2579]: I1013 00:11:30.508207 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmt96\" (UniqueName: \"kubernetes.io/projected/782b5634-85a7-444f-b03d-28a690560c56-kube-api-access-dmt96\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508269 kubelet[2579]: I1013 00:11:30.508245 2579 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508269 kubelet[2579]: I1013 00:11:30.508255 2579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508269 kubelet[2579]: I1013 00:11:30.508263 2579 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508304 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/782b5634-85a7-444f-b03d-28a690560c56-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508313 2579 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00da2c07-26f7-49a6-afc4-85356af3886a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508321 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508332 2579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpc6x\" (UniqueName: \"kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-kube-api-access-mpc6x\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508340 2579 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508348 2579 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00da2c07-26f7-49a6-afc4-85356af3886a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: I1013 00:11:30.508356 2579 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508531 kubelet[2579]: 
I1013 00:11:30.508363 2579 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508746 kubelet[2579]: I1013 00:11:30.508371 2579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508746 kubelet[2579]: I1013 00:11:30.508378 2579 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.508746 kubelet[2579]: I1013 00:11:30.508385 2579 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00da2c07-26f7-49a6-afc4-85356af3886a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 13 00:11:30.869697 kubelet[2579]: I1013 00:11:30.869585 2579 scope.go:117] "RemoveContainer" containerID="fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8" Oct 13 00:11:30.875687 containerd[1473]: time="2025-10-13T00:11:30.875651420Z" level=info msg="RemoveContainer for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\"" Oct 13 00:11:30.878434 systemd[1]: Removed slice kubepods-besteffort-pod782b5634_85a7_444f_b03d_28a690560c56.slice - libcontainer container kubepods-besteffort-pod782b5634_85a7_444f_b03d_28a690560c56.slice. Oct 13 00:11:30.883099 containerd[1473]: time="2025-10-13T00:11:30.882938813Z" level=info msg="RemoveContainer for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" returns successfully" Oct 13 00:11:30.883802 kubelet[2579]: I1013 00:11:30.883193 2579 scope.go:117] "RemoveContainer" containerID="fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8" Oct 13 00:11:30.883925 containerd[1473]: time="2025-10-13T00:11:30.883430864Z" level=error msg="ContainerStatus for \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\": not found" Oct 13 00:11:30.885729 systemd[1]: Removed slice kubepods-burstable-pod00da2c07_26f7_49a6_afc4_85356af3886a.slice - libcontainer container kubepods-burstable-pod00da2c07_26f7_49a6_afc4_85356af3886a.slice. Oct 13 00:11:30.885991 systemd[1]: kubepods-burstable-pod00da2c07_26f7_49a6_afc4_85356af3886a.slice: Consumed 7.496s CPU time, 124.3M memory peak, 212K read from disk, 13.3M written to disk. 
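Most of the preceding block is kubelet's volume reconciler unmounting and detaching everything that belonged to the two deleted pods (UIDs 00da2c07-... and 782b5634-...), before the RemoveContainer calls begin. A sketch, with the same plain-text assumption and a made-up helper name, that tallies which volumes were successfully torn down per pod UID:

    import re
    from collections import defaultdict

    # Matches kubelet's "UnmountVolume.TearDown succeeded" entries; the pod UID is the
    # 36-character segment between the volume plugin and the volume name.
    teardown_re = re.compile(
        r'UnmountVolume\.TearDown succeeded for volume "kubernetes\.io/[\w-]+/'
        r'([0-9a-f-]{36})-([\w.-]+)"')

    def volumes_torn_down(journal_text):
        by_pod = defaultdict(set)  # pod UID -> volume names detached
        for uid, volume in teardown_re.findall(journal_text):
            by_pod[uid].add(volume)
        return by_pod

    # In this excerpt: 14 volumes for pod 00da2c07-26f7-49a6-afc4-85356af3886a
    # (cilium-run, cni-path, hostproc, ..., hubble-tls) and 2 for pod
    # 782b5634-85a7-444f-b03d-28a690560c56 (cilium-config-path, kube-api-access-dmt96).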
Oct 13 00:11:30.891230 kubelet[2579]: E1013 00:11:30.891195 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\": not found" containerID="fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8" Oct 13 00:11:30.891329 kubelet[2579]: I1013 00:11:30.891245 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8"} err="failed to get container status \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb89000b60e26bb27ea38ee87be35adcdf6fe58a512f3fc5351e61d24aaa01c8\": not found" Oct 13 00:11:30.891329 kubelet[2579]: I1013 00:11:30.891289 2579 scope.go:117] "RemoveContainer" containerID="7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b" Oct 13 00:11:30.892787 containerd[1473]: time="2025-10-13T00:11:30.892433858Z" level=info msg="RemoveContainer for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\"" Oct 13 00:11:30.896354 containerd[1473]: time="2025-10-13T00:11:30.896327572Z" level=info msg="RemoveContainer for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" returns successfully" Oct 13 00:11:30.896570 kubelet[2579]: I1013 00:11:30.896535 2579 scope.go:117] "RemoveContainer" containerID="0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c" Oct 13 00:11:30.897628 containerd[1473]: time="2025-10-13T00:11:30.897570239Z" level=info msg="RemoveContainer for \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\"" Oct 13 00:11:30.901173 containerd[1473]: time="2025-10-13T00:11:30.901128552Z" level=info msg="RemoveContainer for \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\" returns successfully" Oct 13 00:11:30.901358 kubelet[2579]: I1013 00:11:30.901326 2579 scope.go:117] "RemoveContainer" containerID="b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5" Oct 13 00:11:30.902298 containerd[1473]: time="2025-10-13T00:11:30.902271497Z" level=info msg="RemoveContainer for \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\"" Oct 13 00:11:30.905445 containerd[1473]: time="2025-10-13T00:11:30.905413064Z" level=info msg="RemoveContainer for \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\" returns successfully" Oct 13 00:11:30.905570 kubelet[2579]: I1013 00:11:30.905547 2579 scope.go:117] "RemoveContainer" containerID="fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265" Oct 13 00:11:30.906318 containerd[1473]: time="2025-10-13T00:11:30.906290853Z" level=info msg="RemoveContainer for \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\"" Oct 13 00:11:30.909357 containerd[1473]: time="2025-10-13T00:11:30.909326587Z" level=info msg="RemoveContainer for \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\" returns successfully" Oct 13 00:11:30.909512 kubelet[2579]: I1013 00:11:30.909473 2579 scope.go:117] "RemoveContainer" containerID="61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2" Oct 13 00:11:30.910358 containerd[1473]: time="2025-10-13T00:11:30.910336508Z" level=info msg="RemoveContainer for \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\"" Oct 13 00:11:30.913417 containerd[1473]: 
time="2025-10-13T00:11:30.913387180Z" level=info msg="RemoveContainer for \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\" returns successfully" Oct 13 00:11:30.913572 kubelet[2579]: I1013 00:11:30.913545 2579 scope.go:117] "RemoveContainer" containerID="7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b" Oct 13 00:11:30.913803 containerd[1473]: time="2025-10-13T00:11:30.913740667Z" level=error msg="ContainerStatus for \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\": not found" Oct 13 00:11:30.913919 kubelet[2579]: E1013 00:11:30.913898 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\": not found" containerID="7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b" Oct 13 00:11:30.913951 kubelet[2579]: I1013 00:11:30.913928 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b"} err="failed to get container status \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7da3759b4673f130c4d43d3bb4d54b837fe91328e460160a20d3407844dbbb1b\": not found" Oct 13 00:11:30.913979 kubelet[2579]: I1013 00:11:30.913949 2579 scope.go:117] "RemoveContainer" containerID="0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c" Oct 13 00:11:30.914165 containerd[1473]: time="2025-10-13T00:11:30.914105935Z" level=error msg="ContainerStatus for \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\": not found" Oct 13 00:11:30.914309 kubelet[2579]: E1013 00:11:30.914287 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\": not found" containerID="0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c" Oct 13 00:11:30.914345 kubelet[2579]: I1013 00:11:30.914317 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c"} err="failed to get container status \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fa4a4d72862a950298a4784f4b0b52e524667426efdffe3830d23a5fa59d77c\": not found" Oct 13 00:11:30.914345 kubelet[2579]: I1013 00:11:30.914338 2579 scope.go:117] "RemoveContainer" containerID="b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5" Oct 13 00:11:30.914517 containerd[1473]: time="2025-10-13T00:11:30.914489068Z" level=error msg="ContainerStatus for \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\": not found" Oct 13 00:11:30.914639 kubelet[2579]: E1013 
00:11:30.914606 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\": not found" containerID="b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5" Oct 13 00:11:30.914675 kubelet[2579]: I1013 00:11:30.914636 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5"} err="failed to get container status \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5455db7ac83e621f3ef11150f5ce286de92458033fcc16d513bc629418f18c5\": not found" Oct 13 00:11:30.914675 kubelet[2579]: I1013 00:11:30.914658 2579 scope.go:117] "RemoveContainer" containerID="fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265" Oct 13 00:11:30.914826 containerd[1473]: time="2025-10-13T00:11:30.914800873Z" level=error msg="ContainerStatus for \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\": not found" Oct 13 00:11:30.914942 kubelet[2579]: E1013 00:11:30.914921 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\": not found" containerID="fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265" Oct 13 00:11:30.914989 kubelet[2579]: I1013 00:11:30.914947 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265"} err="failed to get container status \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe4ef8ce8869f7d451e26e09db9b8a188617fa9e2e91a12d4d85b7eec88d5265\": not found" Oct 13 00:11:30.914989 kubelet[2579]: I1013 00:11:30.914963 2579 scope.go:117] "RemoveContainer" containerID="61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2" Oct 13 00:11:30.915180 containerd[1473]: time="2025-10-13T00:11:30.915113983Z" level=error msg="ContainerStatus for \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\": not found" Oct 13 00:11:30.915241 kubelet[2579]: E1013 00:11:30.915220 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\": not found" containerID="61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2" Oct 13 00:11:30.915268 kubelet[2579]: I1013 00:11:30.915244 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2"} err="failed to get container status \"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"61e21f5a8c5fd66abdb1ce63e5a6a4f641566cafd20942752128d7440d7b9ef2\": not found" Oct 13 00:11:31.123538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffd74038f1faea16e3864d8ac1b96b37871b3725754434d0515ae1886b321bc6-rootfs.mount: Deactivated successfully. Oct 13 00:11:31.123664 systemd[1]: var-lib-kubelet-pods-782b5634\x2d85a7\x2d444f\x2db03d\x2d28a690560c56-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmt96.mount: Deactivated successfully. Oct 13 00:11:31.123753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54-rootfs.mount: Deactivated successfully. Oct 13 00:11:31.123877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e31061e7666f1002538c8c45112b9b298e0dd367d6cc17a25505fa095687f54-shm.mount: Deactivated successfully. Oct 13 00:11:31.123980 systemd[1]: var-lib-kubelet-pods-00da2c07\x2d26f7\x2d49a6\x2dafc4\x2d85356af3886a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmpc6x.mount: Deactivated successfully. Oct 13 00:11:31.124099 systemd[1]: var-lib-kubelet-pods-00da2c07\x2d26f7\x2d49a6\x2dafc4\x2d85356af3886a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 13 00:11:31.124210 systemd[1]: var-lib-kubelet-pods-00da2c07\x2d26f7\x2d49a6\x2dafc4\x2d85356af3886a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 13 00:11:31.629254 kubelet[2579]: I1013 00:11:31.629192 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00da2c07-26f7-49a6-afc4-85356af3886a" path="/var/lib/kubelet/pods/00da2c07-26f7-49a6-afc4-85356af3886a/volumes" Oct 13 00:11:31.630130 kubelet[2579]: I1013 00:11:31.630100 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782b5634-85a7-444f-b03d-28a690560c56" path="/var/lib/kubelet/pods/782b5634-85a7-444f-b03d-28a690560c56/volumes" Oct 13 00:11:32.078057 sshd[4257]: Connection closed by 10.0.0.1 port 36364 Oct 13 00:11:32.078804 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:32.093963 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:36364.service: Deactivated successfully. Oct 13 00:11:32.096251 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 00:11:32.096982 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Oct 13 00:11:32.098370 systemd-logind[1463]: Removed session 24. Oct 13 00:11:32.103045 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:36368.service - OpenSSH per-connection server daemon (10.0.0.1:36368). Oct 13 00:11:32.148000 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 36368 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:32.149718 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:32.154710 systemd-logind[1463]: New session 25 of user core. Oct 13 00:11:32.161901 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 00:11:32.666964 sshd[4420]: Connection closed by 10.0.0.1 port 36368 Oct 13 00:11:32.668946 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:32.684287 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:36368.service: Deactivated successfully. Oct 13 00:11:32.688465 systemd[1]: session-25.scope: Deactivated successfully. Oct 13 00:11:32.691177 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. 
Oct 13 00:11:32.703385 systemd[1]: Started sshd@25-10.0.0.99:22-10.0.0.1:36378.service - OpenSSH per-connection server daemon (10.0.0.1:36378). Oct 13 00:11:32.707938 systemd-logind[1463]: Removed session 25. Oct 13 00:11:32.714614 systemd[1]: Created slice kubepods-burstable-podc3fc952d_7767_4b0c_8bb3_c9dd9b688df1.slice - libcontainer container kubepods-burstable-podc3fc952d_7767_4b0c_8bb3_c9dd9b688df1.slice. Oct 13 00:11:32.721864 kubelet[2579]: I1013 00:11:32.721829 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-bpf-maps\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722541 kubelet[2579]: I1013 00:11:32.722251 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-cni-path\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722541 kubelet[2579]: I1013 00:11:32.722286 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-etc-cni-netd\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722541 kubelet[2579]: I1013 00:11:32.722302 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-clustermesh-secrets\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722541 kubelet[2579]: I1013 00:11:32.722315 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-host-proc-sys-net\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722541 kubelet[2579]: I1013 00:11:32.722329 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4jx\" (UniqueName: \"kubernetes.io/projected/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-kube-api-access-kw4jx\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722541 kubelet[2579]: I1013 00:11:32.722345 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-xtables-lock\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722694 kubelet[2579]: I1013 00:11:32.722360 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-cilium-ipsec-secrets\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722694 kubelet[2579]: I1013 00:11:32.722374 2579 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-cilium-run\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722694 kubelet[2579]: I1013 00:11:32.722388 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-hostproc\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722694 kubelet[2579]: I1013 00:11:32.722400 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-cilium-config-path\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722694 kubelet[2579]: I1013 00:11:32.722414 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-hubble-tls\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722694 kubelet[2579]: I1013 00:11:32.722427 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-cilium-cgroup\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722853 kubelet[2579]: I1013 00:11:32.722440 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-lib-modules\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.722853 kubelet[2579]: I1013 00:11:32.722457 2579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c3fc952d-7767-4b0c-8bb3-c9dd9b688df1-host-proc-sys-kernel\") pod \"cilium-l7svb\" (UID: \"c3fc952d-7767-4b0c-8bb3-c9dd9b688df1\") " pod="kube-system/cilium-l7svb" Oct 13 00:11:32.741370 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 36378 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:32.742993 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:32.747342 systemd-logind[1463]: New session 26 of user core. Oct 13 00:11:32.757906 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 13 00:11:32.809314 sshd[4434]: Connection closed by 10.0.0.1 port 36378 Oct 13 00:11:32.809622 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Oct 13 00:11:32.821567 systemd[1]: sshd@25-10.0.0.99:22-10.0.0.1:36378.service: Deactivated successfully. Oct 13 00:11:32.823913 systemd[1]: session-26.scope: Deactivated successfully. Oct 13 00:11:32.827715 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. Oct 13 00:11:32.836146 systemd[1]: Started sshd@26-10.0.0.99:22-10.0.0.1:36388.service - OpenSSH per-connection server daemon (10.0.0.1:36388). 
Oct 13 00:11:32.844704 systemd-logind[1463]: Removed session 26. Oct 13 00:11:32.875259 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 36388 ssh2: RSA SHA256:uA0PI7HM4rJwOplfo+o8LLawN4hvFVGW7IM2aCyGITY Oct 13 00:11:32.876782 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:11:32.881515 systemd-logind[1463]: New session 27 of user core. Oct 13 00:11:32.890897 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 13 00:11:33.022971 containerd[1473]: time="2025-10-13T00:11:33.022909704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l7svb,Uid:c3fc952d-7767-4b0c-8bb3-c9dd9b688df1,Namespace:kube-system,Attempt:0,}" Oct 13 00:11:33.046248 containerd[1473]: time="2025-10-13T00:11:33.046113620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 13 00:11:33.046248 containerd[1473]: time="2025-10-13T00:11:33.046195576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 13 00:11:33.046248 containerd[1473]: time="2025-10-13T00:11:33.046211496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:11:33.046456 containerd[1473]: time="2025-10-13T00:11:33.046336134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 13 00:11:33.078058 systemd[1]: Started cri-containerd-11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5.scope - libcontainer container 11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5. Oct 13 00:11:33.106241 containerd[1473]: time="2025-10-13T00:11:33.106192091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l7svb,Uid:c3fc952d-7767-4b0c-8bb3-c9dd9b688df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\"" Oct 13 00:11:33.113042 containerd[1473]: time="2025-10-13T00:11:33.112994389Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 13 00:11:33.124778 containerd[1473]: time="2025-10-13T00:11:33.124737481Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c\"" Oct 13 00:11:33.126238 containerd[1473]: time="2025-10-13T00:11:33.125308521Z" level=info msg="StartContainer for \"eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c\"" Oct 13 00:11:33.156956 systemd[1]: Started cri-containerd-eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c.scope - libcontainer container eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c. Oct 13 00:11:33.185937 containerd[1473]: time="2025-10-13T00:11:33.185885624Z" level=info msg="StartContainer for \"eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c\" returns successfully" Oct 13 00:11:33.197324 systemd[1]: cri-containerd-eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c.scope: Deactivated successfully. 
Oct 13 00:11:33.294380 containerd[1473]: time="2025-10-13T00:11:33.294214814Z" level=info msg="shim disconnected" id=eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c namespace=k8s.io Oct 13 00:11:33.294380 containerd[1473]: time="2025-10-13T00:11:33.294272464Z" level=warning msg="cleaning up after shim disconnected" id=eb5e9f89b319b81c88b563114aff9f6309c572d8494fc65f76a4bf877e99fc6c namespace=k8s.io Oct 13 00:11:33.294380 containerd[1473]: time="2025-10-13T00:11:33.294281301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:33.684046 kubelet[2579]: E1013 00:11:33.683930 2579 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 13 00:11:33.890027 containerd[1473]: time="2025-10-13T00:11:33.889974694Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 13 00:11:33.904463 containerd[1473]: time="2025-10-13T00:11:33.904412923Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69\"" Oct 13 00:11:33.904936 containerd[1473]: time="2025-10-13T00:11:33.904904361Z" level=info msg="StartContainer for \"5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69\"" Oct 13 00:11:33.941940 systemd[1]: Started cri-containerd-5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69.scope - libcontainer container 5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69. Oct 13 00:11:33.968052 containerd[1473]: time="2025-10-13T00:11:33.968001055Z" level=info msg="StartContainer for \"5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69\" returns successfully" Oct 13 00:11:33.975656 systemd[1]: cri-containerd-5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69.scope: Deactivated successfully. Oct 13 00:11:34.007210 containerd[1473]: time="2025-10-13T00:11:34.007128108Z" level=info msg="shim disconnected" id=5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69 namespace=k8s.io Oct 13 00:11:34.007210 containerd[1473]: time="2025-10-13T00:11:34.007185929Z" level=warning msg="cleaning up after shim disconnected" id=5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69 namespace=k8s.io Oct 13 00:11:34.007210 containerd[1473]: time="2025-10-13T00:11:34.007195286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:34.838935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dde8a3c140c4ae5973d40208077d5b9b594cc5ae3872eaee399589e44445c69-rootfs.mount: Deactivated successfully. 
Oct 13 00:11:34.899895 containerd[1473]: time="2025-10-13T00:11:34.899821379Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 13 00:11:35.090393 containerd[1473]: time="2025-10-13T00:11:35.090229838Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307\"" Oct 13 00:11:35.090885 containerd[1473]: time="2025-10-13T00:11:35.090814453Z" level=info msg="StartContainer for \"c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307\"" Oct 13 00:11:35.119904 systemd[1]: Started cri-containerd-c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307.scope - libcontainer container c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307. Oct 13 00:11:35.152283 containerd[1473]: time="2025-10-13T00:11:35.152238564Z" level=info msg="StartContainer for \"c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307\" returns successfully" Oct 13 00:11:35.154055 systemd[1]: cri-containerd-c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307.scope: Deactivated successfully. Oct 13 00:11:35.177433 containerd[1473]: time="2025-10-13T00:11:35.177372412Z" level=info msg="shim disconnected" id=c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307 namespace=k8s.io Oct 13 00:11:35.177433 containerd[1473]: time="2025-10-13T00:11:35.177426575Z" level=warning msg="cleaning up after shim disconnected" id=c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307 namespace=k8s.io Oct 13 00:11:35.177433 containerd[1473]: time="2025-10-13T00:11:35.177435512Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:11:35.308119 kubelet[2579]: I1013 00:11:35.308033 2579 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T00:11:35Z","lastTransitionTime":"2025-10-13T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 13 00:11:35.839086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c03f25ede90327a7a4bcfe5c6d1d34a32a6af1351fbbe008a85368e5ed89b307-rootfs.mount: Deactivated successfully. 
Oct 13 00:11:35.899643 containerd[1473]: time="2025-10-13T00:11:35.899592639Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 13 00:11:35.935738 containerd[1473]: time="2025-10-13T00:11:35.935666939Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345\""
Oct 13 00:11:35.936309 containerd[1473]: time="2025-10-13T00:11:35.936273325Z" level=info msg="StartContainer for \"55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345\""
Oct 13 00:11:35.967902 systemd[1]: Started cri-containerd-55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345.scope - libcontainer container 55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345.
Oct 13 00:11:35.992081 systemd[1]: cri-containerd-55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345.scope: Deactivated successfully.
Oct 13 00:11:36.018867 containerd[1473]: time="2025-10-13T00:11:36.018831900Z" level=info msg="StartContainer for \"55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345\" returns successfully"
Oct 13 00:11:36.042401 containerd[1473]: time="2025-10-13T00:11:36.042332735Z" level=info msg="shim disconnected" id=55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345 namespace=k8s.io
Oct 13 00:11:36.042401 containerd[1473]: time="2025-10-13T00:11:36.042390676Z" level=warning msg="cleaning up after shim disconnected" id=55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345 namespace=k8s.io
Oct 13 00:11:36.042401 containerd[1473]: time="2025-10-13T00:11:36.042402197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 13 00:11:36.839059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55a5bc94e12cb3987eac6f1b64b2247f09f1c5b6e60906b00b82a7c512dac345-rootfs.mount: Deactivated successfully.
Oct 13 00:11:36.903656 containerd[1473]: time="2025-10-13T00:11:36.903588952Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 13 00:11:36.919985 containerd[1473]: time="2025-10-13T00:11:36.919923451Z" level=info msg="CreateContainer within sandbox \"11a4d2e00d8af2b0d799a2039a3523209895620c4cd8a134e796aa56ffe525b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81c67a537e7d2002fe2f88e6cff5caf23f8a7c615d06706c1e1444869822ff42\""
Oct 13 00:11:36.920472 containerd[1473]: time="2025-10-13T00:11:36.920434926Z" level=info msg="StartContainer for \"81c67a537e7d2002fe2f88e6cff5caf23f8a7c615d06706c1e1444869822ff42\""
Oct 13 00:11:36.953915 systemd[1]: Started cri-containerd-81c67a537e7d2002fe2f88e6cff5caf23f8a7c615d06706c1e1444869822ff42.scope - libcontainer container 81c67a537e7d2002fe2f88e6cff5caf23f8a7c615d06706c1e1444869822ff42.
Oct 13 00:11:36.986810 containerd[1473]: time="2025-10-13T00:11:36.985425690Z" level=info msg="StartContainer for \"81c67a537e7d2002fe2f88e6cff5caf23f8a7c615d06706c1e1444869822ff42\" returns successfully"
Oct 13 00:11:37.402811 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 13 00:11:37.920041 kubelet[2579]: I1013 00:11:37.919979 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l7svb" podStartSLOduration=5.919960371 podStartE2EDuration="5.919960371s" podCreationTimestamp="2025-10-13 00:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:11:37.919801889 +0000 UTC m=+84.407107056" watchObservedRunningTime="2025-10-13 00:11:37.919960371 +0000 UTC m=+84.407265537"
Oct 13 00:11:40.563530 systemd-networkd[1384]: lxc_health: Link UP
Oct 13 00:11:40.564747 systemd-networkd[1384]: lxc_health: Gained carrier
Oct 13 00:11:42.077068 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Oct 13 00:11:45.567745 systemd[1]: run-containerd-runc-k8s.io-81c67a537e7d2002fe2f88e6cff5caf23f8a7c615d06706c1e1444869822ff42-runc.PHa7q4.mount: Deactivated successfully.
Oct 13 00:11:45.617661 sshd[4447]: Connection closed by 10.0.0.1 port 36388
Oct 13 00:11:45.618120 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Oct 13 00:11:45.623046 systemd[1]: sshd@26-10.0.0.99:22-10.0.0.1:36388.service: Deactivated successfully.
Oct 13 00:11:45.625452 systemd[1]: session-27.scope: Deactivated successfully.
Oct 13 00:11:45.626465 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit.
Oct 13 00:11:45.627733 systemd-logind[1463]: Removed session 27.