Sep 12 10:16:56.005884 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025
Sep 12 10:16:56.006643 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:16:56.006658 kernel: BIOS-provided physical RAM map:
Sep 12 10:16:56.006665 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 10:16:56.006675 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 10:16:56.006682 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 10:16:56.006690 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 10:16:56.006697 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 12 10:16:56.006703 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 10:16:56.006710 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 10:16:56.006717 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 10:16:56.006723 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 10:16:56.006734 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 10:16:56.006743 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 10:16:56.006754 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 10:16:56.006761 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 10:16:56.006768 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 10:16:56.006775 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 10:16:56.006785 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 10:16:56.006792 kernel: NX (Execute Disable) protection: active
Sep 12 10:16:56.006799 kernel: APIC: Static calls initialized
Sep 12 10:16:56.006806 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable
Sep 12 10:16:56.006813 kernel: e820: update [mem 0x9a185018-0x9a18ec57] usable ==> usable
Sep 12 10:16:56.006820 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable
Sep 12 10:16:56.006827 kernel: e820: update [mem 0x9a148018-0x9a184e57] usable ==> usable
Sep 12 10:16:56.006834 kernel: extended physical RAM map:
Sep 12 10:16:56.006841 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 10:16:56.006849 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 10:16:56.006856 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 10:16:56.006863 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 10:16:56.006872 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a148017] usable
Sep 12 10:16:56.006879 kernel: reserve setup_data: [mem 0x000000009a148018-0x000000009a184e57] usable
Sep 12 10:16:56.006886 kernel: reserve setup_data: [mem 0x000000009a184e58-0x000000009a185017] usable
Sep 12 10:16:56.006893 kernel: reserve setup_data: [mem 0x000000009a185018-0x000000009a18ec57] usable
Sep 12 10:16:56.006900 kernel: reserve setup_data: [mem 0x000000009a18ec58-0x000000009b8ecfff] usable
Sep 12 10:16:56.006907 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 10:16:56.006914 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 10:16:56.006921 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 10:16:56.006928 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 10:16:56.006936 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 10:16:56.006949 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 10:16:56.006956 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 10:16:56.006963 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 10:16:56.006971 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 10:16:56.006978 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 10:16:56.006988 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 10:16:56.006998 kernel: efi: EFI v2.7 by EDK II
Sep 12 10:16:56.007005 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018
Sep 12 10:16:56.007013 kernel: random: crng init done
Sep 12 10:16:56.007020 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 10:16:56.007027 kernel: secureboot: Secure boot enabled
Sep 12 10:16:56.007035 kernel: SMBIOS 2.8 present.
Sep 12 10:16:56.007042 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 10:16:56.007049 kernel: Hypervisor detected: KVM
Sep 12 10:16:56.007057 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 10:16:56.007064 kernel: kvm-clock: using sched offset of 5296907404 cycles
Sep 12 10:16:56.007072 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 10:16:56.007082 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 10:16:56.007093 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 10:16:56.007108 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 10:16:56.007117 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 12 10:16:56.007124 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 10:16:56.007132 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 10:16:56.007140 kernel: Using GB pages for direct mapping
Sep 12 10:16:56.007147 kernel: ACPI: Early table checksum verification disabled
Sep 12 10:16:56.007155 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 12 10:16:56.007174 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 10:16:56.007183 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:16:56.007194 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:16:56.007201 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 12 10:16:56.007213 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:16:56.007229 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:16:56.007237 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:16:56.007245 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001)
Sep 12 10:16:56.007261 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 10:16:56.007284 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 12 10:16:56.007292 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 12 10:16:56.007300 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 12 10:16:56.007315 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 12 10:16:56.007323 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 12 10:16:56.007337 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 12 10:16:56.007346 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 12 10:16:56.007354 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 12 10:16:56.007362 kernel: No NUMA configuration found
Sep 12 10:16:56.007373 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 12 10:16:56.007380 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff]
Sep 12 10:16:56.007388 kernel: Zone ranges:
Sep 12 10:16:56.007400 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 10:16:56.007414 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 12 10:16:56.007432 kernel: Normal empty
Sep 12 10:16:56.007445 kernel: Movable zone start for each node
Sep 12 10:16:56.007452 kernel: Early memory node ranges
Sep 12 10:16:56.007460 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 12 10:16:56.007471 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 12 10:16:56.007500 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 12 10:16:56.007526 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 12 10:16:56.007543 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 12 10:16:56.007566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 12 10:16:56.007590 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 10:16:56.007610 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 12 10:16:56.007620 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 10:16:56.007629 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 10:16:56.007644 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 10:16:56.007671 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 12 10:16:56.007694 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 10:16:56.007720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 10:16:56.007743 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 10:16:56.007752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 10:16:56.007759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 10:16:56.007768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 10:16:56.007782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 10:16:56.007827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 10:16:56.007868 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 10:16:56.007905 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 10:16:56.007935 kernel: TSC deadline timer available
Sep 12 10:16:56.007957 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 10:16:56.007977 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 10:16:56.007987 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 10:16:56.008011 kernel: kvm-guest: setup PV sched yield
Sep 12 10:16:56.008025 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 10:16:56.008035 kernel: Booting paravirtualized kernel on KVM
Sep 12 10:16:56.008046 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 10:16:56.008056 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 10:16:56.008071 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 10:16:56.008081 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 10:16:56.008091 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 10:16:56.008101 kernel: kvm-guest: PV spinlocks enabled
Sep 12 10:16:56.008116 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 10:16:56.008129 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:16:56.008141 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 10:16:56.008152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 10:16:56.008170 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 10:16:56.008180 kernel: Fallback order for Node 0: 0
Sep 12 10:16:56.008190 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927
Sep 12 10:16:56.008209 kernel: Policy zone: DMA32
Sep 12 10:16:56.008222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 10:16:56.008238 kernel: Memory: 2370352K/2552216K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 181608K reserved, 0K cma-reserved)
Sep 12 10:16:56.008248 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 10:16:56.008272 kernel: ftrace: allocating 37946 entries in 149 pages
Sep 12 10:16:56.008283 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 10:16:56.008321 kernel: Dynamic Preempt: voluntary
Sep 12 10:16:56.008345 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 10:16:56.008375 kernel: rcu: RCU event tracing is enabled.
Sep 12 10:16:56.008406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 10:16:56.008433 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 10:16:56.008470 kernel: Rude variant of Tasks RCU enabled.
Sep 12 10:16:56.008534 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 10:16:56.008558 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 10:16:56.008568 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 10:16:56.008577 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 10:16:56.008587 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 10:16:56.008597 kernel: Console: colour dummy device 80x25
Sep 12 10:16:56.008612 kernel: printk: console [ttyS0] enabled
Sep 12 10:16:56.008622 kernel: ACPI: Core revision 20230628
Sep 12 10:16:56.008633 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 10:16:56.008648 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 10:16:56.008658 kernel: x2apic enabled
Sep 12 10:16:56.008680 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 10:16:56.008714 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 10:16:56.008739 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 10:16:56.008750 kernel: kvm-guest: setup PV IPIs
Sep 12 10:16:56.008760 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 10:16:56.008770 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 10:16:56.008781 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 10:16:56.008796 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 10:16:56.008806 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 10:16:56.008817 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 10:16:56.008827 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 10:16:56.008837 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 10:16:56.008847 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 10:16:56.008857 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 10:16:56.008878 kernel: active return thunk: retbleed_return_thunk
Sep 12 10:16:56.008916 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 10:16:56.008943 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 10:16:56.008974 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 10:16:56.008995 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 10:16:56.009004 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 10:16:56.009017 kernel: active return thunk: srso_return_thunk
Sep 12 10:16:56.009031 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 10:16:56.009042 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 10:16:56.009050 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 10:16:56.009072 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 10:16:56.009080 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 10:16:56.009088 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 10:16:56.009096 kernel: Freeing SMP alternatives memory: 32K
Sep 12 10:16:56.009104 kernel: pid_max: default: 32768 minimum: 301
Sep 12 10:16:56.009112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 10:16:56.009120 kernel: landlock: Up and running.
Sep 12 10:16:56.009139 kernel: SELinux: Initializing.
Sep 12 10:16:56.009159 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 10:16:56.009171 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 10:16:56.009180 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 10:16:56.009188 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 10:16:56.009196 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 10:16:56.009204 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 10:16:56.009212 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 10:16:56.009220 kernel: ... version: 0
Sep 12 10:16:56.009235 kernel: ... bit width: 48
Sep 12 10:16:56.009267 kernel: ... generic registers: 6
Sep 12 10:16:56.009295 kernel: ... value mask: 0000ffffffffffff
Sep 12 10:16:56.009320 kernel: ... max period: 00007fffffffffff
Sep 12 10:16:56.009343 kernel: ... fixed-purpose events: 0
Sep 12 10:16:56.009355 kernel: ... event mask: 000000000000003f
Sep 12 10:16:56.009365 kernel: signal: max sigframe size: 1776
Sep 12 10:16:56.009374 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 10:16:56.009383 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 10:16:56.009394 kernel: smp: Bringing up secondary CPUs ...
Sep 12 10:16:56.009404 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 10:16:56.009421 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 10:16:56.009431 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 10:16:56.009441 kernel: smpboot: Max logical packages: 1
Sep 12 10:16:56.009452 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 10:16:56.009462 kernel: devtmpfs: initialized
Sep 12 10:16:56.009473 kernel: x86/mm: Memory block size: 128MB
Sep 12 10:16:56.009500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 12 10:16:56.009521 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 12 10:16:56.009532 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 10:16:56.009548 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 10:16:56.009560 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 10:16:56.009571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 10:16:56.009582 kernel: audit: initializing netlink subsys (disabled)
Sep 12 10:16:56.009592 kernel: audit: type=2000 audit(1757672215.513:1): state=initialized audit_enabled=0 res=1
Sep 12 10:16:56.009603 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 10:16:56.009617 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 10:16:56.009627 kernel: cpuidle: using governor menu
Sep 12 10:16:56.009635 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 10:16:56.009648 kernel: dca service started, version 1.12.1
Sep 12 10:16:56.009656 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 12 10:16:56.009664 kernel: PCI: Using configuration type 1 for base access
Sep 12 10:16:56.009672 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 10:16:56.009679 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 10:16:56.009687 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 10:16:56.009695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 10:16:56.009703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 10:16:56.009713 kernel: ACPI: Added _OSI(Module Device)
Sep 12 10:16:56.009721 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 10:16:56.009730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 10:16:56.009737 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 10:16:56.009745 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 10:16:56.009753 kernel: ACPI: Interpreter enabled
Sep 12 10:16:56.009761 kernel: ACPI: PM: (supports S0 S5)
Sep 12 10:16:56.009768 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 10:16:56.009776 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 10:16:56.009785 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 10:16:56.009799 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 10:16:56.009810 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 10:16:56.010091 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 10:16:56.010275 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 10:16:56.010464 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 10:16:56.010477 kernel: PCI host bridge to bus 0000:00
Sep 12 10:16:56.010673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 10:16:56.012252 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 10:16:56.012378 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 10:16:56.012563 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 10:16:56.012720 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 10:16:56.014628 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 10:16:56.014770 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 10:16:56.014967 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 10:16:56.015126 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 10:16:56.015265 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 12 10:16:56.015397 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 12 10:16:56.015590 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 10:16:56.015734 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 12 10:16:56.015865 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 10:16:56.016059 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 10:16:56.016200 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 12 10:16:56.016336 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 12 10:16:56.016467 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 10:16:56.016650 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 10:16:56.016785 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 12 10:16:56.016918 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 12 10:16:56.017073 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 10:16:56.017231 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 10:16:56.017365 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 12 10:16:56.017531 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 12 10:16:56.017666 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 10:16:56.017796 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 12 10:16:56.017935 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 10:16:56.018072 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 10:16:56.018265 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 10:16:56.018406 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 12 10:16:56.018574 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 12 10:16:56.018723 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 10:16:56.018854 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 12 10:16:56.018871 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 10:16:56.018879 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 10:16:56.018887 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 10:16:56.018895 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 10:16:56.018903 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 10:16:56.018911 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 10:16:56.018919 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 10:16:56.018931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 10:16:56.018939 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 10:16:56.018950 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 10:16:56.018957 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 10:16:56.018965 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 10:16:56.018973 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 10:16:56.018981 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 10:16:56.018989 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 10:16:56.018997 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 10:16:56.019005 kernel: iommu: Default domain type: Translated
Sep 12 10:16:56.019013 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 10:16:56.019024 kernel: efivars: Registered efivars operations
Sep 12 10:16:56.019031 kernel: PCI: Using ACPI for IRQ routing
Sep 12 10:16:56.019040 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 10:16:56.019047 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 12 10:16:56.019056 kernel: e820: reserve RAM buffer [mem 0x9a148018-0x9bffffff]
Sep 12 10:16:56.019063 kernel: e820: reserve RAM buffer [mem 0x9a185018-0x9bffffff]
Sep 12 10:16:56.019071 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 12 10:16:56.019079 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 12 10:16:56.019210 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 10:16:56.019364 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 10:16:56.019549 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 10:16:56.019562 kernel: vgaarb: loaded
Sep 12 10:16:56.019571 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 10:16:56.019579 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 10:16:56.019587 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 10:16:56.019595 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 10:16:56.019604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 10:16:56.019625 kernel: pnp: PnP ACPI init
Sep 12 10:16:56.019798 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 10:16:56.019811 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 10:16:56.019820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 10:16:56.019828 kernel: NET: Registered PF_INET protocol family
Sep 12 10:16:56.019836 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 10:16:56.019844 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 10:16:56.019853 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 10:16:56.019865 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 10:16:56.019874 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 10:16:56.019882 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 10:16:56.019890 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 10:16:56.019898 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 10:16:56.019906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 10:16:56.019914 kernel: NET: Registered PF_XDP protocol family
Sep 12 10:16:56.020049 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 12 10:16:56.020184 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 12 10:16:56.020313 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 10:16:56.020471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 10:16:56.020687 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 10:16:56.020805 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 10:16:56.020922 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 10:16:56.021040 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 10:16:56.021051 kernel: PCI: CLS 0 bytes, default 64
Sep 12 10:16:56.021059 kernel: Initialise system trusted keyrings
Sep 12 10:16:56.021072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 10:16:56.021080 kernel: Key type asymmetric registered
Sep 12 10:16:56.021088 kernel: Asymmetric key parser 'x509' registered
Sep 12 10:16:56.021096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 10:16:56.021104 kernel: io scheduler mq-deadline registered
Sep 12 10:16:56.021112 kernel: io scheduler kyber registered
Sep 12 10:16:56.021120 kernel: io scheduler bfq registered
Sep 12 10:16:56.021129 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 10:16:56.021155 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 10:16:56.021170 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 10:16:56.021180 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 10:16:56.021190 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 10:16:56.021199 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 10:16:56.021207 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 10:16:56.021215 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 10:16:56.021223 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 10:16:56.021521 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 10:16:56.021540 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 10:16:56.021667 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 10:16:56.021789 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T10:16:55 UTC (1757672215)
Sep 12 10:16:56.021911 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 10:16:56.021921 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 10:16:56.021930 kernel: efifb: probing for efifb
Sep 12 10:16:56.021938 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 10:16:56.021947 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 10:16:56.021955 kernel: efifb: scrolling: redraw
Sep 12 10:16:56.021967 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 10:16:56.021976 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 10:16:56.021984 kernel: fb0: EFI VGA frame buffer device
Sep 12 10:16:56.021992 kernel: pstore: Using crash dump compression: deflate
Sep 12 10:16:56.022001 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 10:16:56.022009 kernel: NET: Registered PF_INET6 protocol family
Sep 12 10:16:56.022017 kernel: Segment Routing with IPv6
Sep 12 10:16:56.022026 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 10:16:56.022034 kernel: NET: Registered PF_PACKET protocol family
Sep 12 10:16:56.022045 kernel: Key type dns_resolver registered
Sep 12 10:16:56.022056 kernel: IPI shorthand broadcast: enabled
Sep 12 10:16:56.022064 kernel: sched_clock: Marking stable (1338003848, 129782138)->(1507642507, -39856521)
Sep 12 10:16:56.022072 kernel: registered taskstats version 1
Sep 12 10:16:56.022081 kernel: Loading compiled-in X.509 certificates
Sep 12 10:16:56.022089 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9'
Sep 12 10:16:56.022100 kernel: Key type .fscrypt registered
Sep 12 10:16:56.022108 kernel: Key type fscrypt-provisioning registered
Sep 12 10:16:56.022116 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 10:16:56.022125 kernel: ima: Allocated hash algorithm: sha1 Sep 12 10:16:56.022133 kernel: ima: No architecture policies found Sep 12 10:16:56.022142 kernel: clk: Disabling unused clocks Sep 12 10:16:56.022150 kernel: Freeing unused kernel image (initmem) memory: 43508K Sep 12 10:16:56.022158 kernel: Write protecting the kernel read-only data: 38912k Sep 12 10:16:56.022172 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 12 10:16:56.022181 kernel: Run /init as init process Sep 12 10:16:56.022191 kernel: with arguments: Sep 12 10:16:56.022199 kernel: /init Sep 12 10:16:56.022207 kernel: with environment: Sep 12 10:16:56.022215 kernel: HOME=/ Sep 12 10:16:56.022223 kernel: TERM=linux Sep 12 10:16:56.022232 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 10:16:56.022246 systemd[1]: Successfully made /usr/ read-only. Sep 12 10:16:56.022261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:16:56.022270 systemd[1]: Detected virtualization kvm. Sep 12 10:16:56.022279 systemd[1]: Detected architecture x86-64. Sep 12 10:16:56.022288 systemd[1]: Running in initrd. Sep 12 10:16:56.022296 systemd[1]: No hostname configured, using default hostname. Sep 12 10:16:56.022305 systemd[1]: Hostname set to . Sep 12 10:16:56.022314 systemd[1]: Initializing machine ID from VM UUID. Sep 12 10:16:56.022325 systemd[1]: Queued start job for default target initrd.target. Sep 12 10:16:56.022334 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:16:56.022343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 10:16:56.022353 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 10:16:56.022362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:16:56.022371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 10:16:56.022381 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 10:16:56.022394 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 10:16:56.022403 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 10:16:56.022412 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:16:56.022421 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:16:56.022430 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:16:56.022438 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:16:56.022447 systemd[1]: Reached target swap.target - Swaps. Sep 12 10:16:56.022456 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:16:56.022465 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:16:56.022477 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:16:56.022538 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 10:16:56.022547 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 10:16:56.022556 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:16:56.022565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:16:56.022574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 10:16:56.022583 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:16:56.022591 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 10:16:56.022607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:16:56.022615 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 10:16:56.022624 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 10:16:56.022635 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:16:56.022644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:16:56.022653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:56.022662 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 10:16:56.022670 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:16:56.022682 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 10:16:56.022722 systemd-journald[192]: Collecting audit messages is disabled. Sep 12 10:16:56.022746 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 10:16:56.022756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:56.022765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:16:56.022774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 10:16:56.022783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:16:56.022792 systemd-journald[192]: Journal started Sep 12 10:16:56.022818 systemd-journald[192]: Runtime Journal (/run/log/journal/b34adebb979b4834a95b30791d4aa5ea) is 6M, max 48M, 42M free. 
Sep 12 10:16:56.008388 systemd-modules-load[194]: Inserted module 'overlay' Sep 12 10:16:56.028323 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:16:56.033621 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:16:56.035039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:16:56.039739 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:16:56.044142 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 10:16:56.047126 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 10:16:56.049190 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 12 10:16:56.051412 kernel: Bridge firewalling registered Sep 12 10:16:56.049974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:16:56.051671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:16:56.059635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:16:56.061656 dracut-cmdline[222]: dracut-dracut-053 Sep 12 10:16:56.062997 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:16:56.069210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:16:56.073058 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 10:16:56.122138 systemd-resolved[252]: Positive Trust Anchors: Sep 12 10:16:56.122185 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:16:56.122227 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:16:56.135643 systemd-resolved[252]: Defaulting to hostname 'linux'. Sep 12 10:16:56.138619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:16:56.138783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:16:56.154521 kernel: SCSI subsystem initialized Sep 12 10:16:56.167520 kernel: Loading iSCSI transport class v2.0-870. Sep 12 10:16:56.182537 kernel: iscsi: registered transport (tcp) Sep 12 10:16:56.211551 kernel: iscsi: registered transport (qla4xxx) Sep 12 10:16:56.211613 kernel: QLogic iSCSI HBA Driver Sep 12 10:16:56.272971 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 10:16:56.284690 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 10:16:56.311541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 10:16:56.311632 kernel: device-mapper: uevent: version 1.0.3 Sep 12 10:16:56.311645 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 10:16:56.356541 kernel: raid6: avx2x4 gen() 30782 MB/s Sep 12 10:16:56.373519 kernel: raid6: avx2x2 gen() 31513 MB/s Sep 12 10:16:56.390538 kernel: raid6: avx2x1 gen() 26047 MB/s Sep 12 10:16:56.390598 kernel: raid6: using algorithm avx2x2 gen() 31513 MB/s Sep 12 10:16:56.408549 kernel: raid6: .... xor() 19984 MB/s, rmw enabled Sep 12 10:16:56.408624 kernel: raid6: using avx2x2 recovery algorithm Sep 12 10:16:56.430528 kernel: xor: automatically using best checksumming function avx Sep 12 10:16:56.584549 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 10:16:56.597989 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:16:56.615656 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:16:56.631176 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 12 10:16:56.637241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:16:56.643806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 10:16:56.656905 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Sep 12 10:16:56.690323 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:16:56.704701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:16:56.789342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:16:56.798647 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 10:16:56.818920 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 10:16:56.823533 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 12 10:16:56.826425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:16:56.829017 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:16:56.833527 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 10:16:56.837034 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 10:16:56.842358 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 10:16:56.842391 kernel: GPT:9289727 != 19775487 Sep 12 10:16:56.842405 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 10:16:56.842418 kernel: GPT:9289727 != 19775487 Sep 12 10:16:56.842432 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 10:16:56.842445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:16:56.837664 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 10:16:56.857660 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 10:16:56.861473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:16:56.869788 kernel: libata version 3.00 loaded. Sep 12 10:16:56.877951 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 10:16:56.877988 kernel: AES CTR mode by8 optimization enabled Sep 12 10:16:56.877999 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 10:16:56.878632 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:16:56.880082 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 10:16:56.878761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:16:56.883442 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 12 10:16:56.888612 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 10:16:56.888822 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 10:16:56.887302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:16:56.890841 kernel: scsi host0: ahci Sep 12 10:16:56.891123 kernel: scsi host1: ahci Sep 12 10:16:56.887807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:56.895419 kernel: scsi host2: ahci Sep 12 10:16:56.895641 kernel: scsi host3: ahci Sep 12 10:16:56.896636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:56.901174 kernel: scsi host4: ahci Sep 12 10:16:56.901424 kernel: scsi host5: ahci Sep 12 10:16:56.903663 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460) Sep 12 10:16:56.907516 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (466) Sep 12 10:16:56.908799 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:56.916197 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 12 10:16:56.916219 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 12 10:16:56.916231 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 12 10:16:56.916253 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 12 10:16:56.916264 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 12 10:16:56.916275 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 12 10:16:56.939136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 10:16:56.942096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 10:16:56.954512 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 10:16:56.970177 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 10:16:56.973409 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 10:16:56.991848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 10:16:57.004627 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 10:16:57.049761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:16:57.049823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:57.053033 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:57.055787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:16:57.058048 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:16:57.071556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:16:57.074709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:16:57.096938 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:16:57.117729 disk-uuid[557]: Primary Header is updated. Sep 12 10:16:57.117729 disk-uuid[557]: Secondary Entries is updated. Sep 12 10:16:57.117729 disk-uuid[557]: Secondary Header is updated. 
Sep 12 10:16:57.121071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:16:57.126540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:16:57.226758 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 10:16:57.226816 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 10:16:57.226828 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 10:16:57.226839 kernel: ata3.00: applying bridge limits Sep 12 10:16:57.226849 kernel: ata3.00: configured for UDMA/100 Sep 12 10:16:57.226860 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 10:16:57.228512 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 10:16:57.228574 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 10:16:57.229989 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 10:16:57.234512 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 10:16:57.283998 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 10:16:57.284238 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 10:16:57.296512 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 10:16:58.127122 disk-uuid[572]: The operation has completed successfully. Sep 12 10:16:58.128959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:16:58.180903 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 10:16:58.181051 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 10:16:58.215859 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 10:16:58.219091 sh[599]: Success Sep 12 10:16:58.234686 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 10:16:58.274808 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 10:16:58.288188 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 10:16:58.291829 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 10:16:58.304842 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb Sep 12 10:16:58.304886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:58.304897 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 10:16:58.306579 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 10:16:58.306608 kernel: BTRFS info (device dm-0): using free space tree Sep 12 10:16:58.313214 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 10:16:58.314121 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 10:16:58.339110 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 10:16:58.342522 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 10:16:58.361247 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:58.361316 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:58.361328 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:16:58.364517 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:16:58.369520 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:58.375773 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 10:16:58.383667 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 10:16:58.556083 ignition[686]: Ignition 2.20.0 Sep 12 10:16:58.556100 ignition[686]: Stage: fetch-offline Sep 12 10:16:58.556167 ignition[686]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:58.556180 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:16:58.556296 ignition[686]: parsed url from cmdline: "" Sep 12 10:16:58.556300 ignition[686]: no config URL provided Sep 12 10:16:58.556307 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:16:58.556316 ignition[686]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:16:58.556343 ignition[686]: op(1): [started] loading QEMU firmware config module Sep 12 10:16:58.556348 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 10:16:58.573115 ignition[686]: op(1): [finished] loading QEMU firmware config module Sep 12 10:16:58.580703 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:16:58.591644 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:16:58.615535 ignition[686]: parsing config with SHA512: a6992261af902f404264e1a5b2df4e6984fce1a2aa3fdb31bd59cc76b545eda3c03e8203bdb0daa18ef4df3837595ecaac0a054466b008ea9560dcfeecb2022e Sep 12 10:16:58.619622 systemd-networkd[784]: lo: Link UP Sep 12 10:16:58.619633 systemd-networkd[784]: lo: Gained carrier Sep 12 10:16:58.621521 systemd-networkd[784]: Enumeration completed Sep 12 10:16:58.622532 ignition[686]: fetch-offline: fetch-offline passed Sep 12 10:16:58.622047 unknown[686]: fetched base config from "system" Sep 12 10:16:58.622618 ignition[686]: Ignition finished successfully Sep 12 10:16:58.622056 unknown[686]: fetched user config from "qemu" Sep 12 10:16:58.622179 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 10:16:58.622184 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:16:58.623104 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:16:58.624938 systemd[1]: Reached target network.target - Network. Sep 12 10:16:58.625839 systemd-networkd[784]: eth0: Link UP Sep 12 10:16:58.625843 systemd-networkd[784]: eth0: Gained carrier Sep 12 10:16:58.625851 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:16:58.626943 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:16:58.629269 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 10:16:58.643710 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 10:16:58.655548 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 10:16:58.687438 ignition[788]: Ignition 2.20.0 Sep 12 10:16:58.687464 ignition[788]: Stage: kargs Sep 12 10:16:58.687671 ignition[788]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:58.687684 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:16:58.691540 ignition[788]: kargs: kargs passed Sep 12 10:16:58.692205 ignition[788]: Ignition finished successfully Sep 12 10:16:58.696710 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 10:16:58.708693 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 10:16:58.720405 ignition[797]: Ignition 2.20.0 Sep 12 10:16:58.720415 ignition[797]: Stage: disks Sep 12 10:16:58.720594 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:58.720605 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:16:58.721430 ignition[797]: disks: disks passed Sep 12 10:16:58.721502 ignition[797]: Ignition finished successfully Sep 12 10:16:58.727082 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 10:16:58.728349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 10:16:58.730105 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 10:16:58.731375 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:16:58.733289 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:16:58.734297 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:16:58.747608 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 10:16:58.784855 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 10:16:58.977724 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 10:16:58.988599 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 10:16:59.076522 kernel: EXT4-fs (vda9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none. Sep 12 10:16:59.077132 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 10:16:59.077885 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 10:16:59.087611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:16:59.088833 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 10:16:59.090588 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 12 10:16:59.090647 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 10:16:59.102044 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (816) Sep 12 10:16:59.102070 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:59.102081 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:59.102106 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:16:59.090675 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:16:59.097778 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 10:16:59.106566 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:16:59.103221 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 10:16:59.108944 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:16:59.147834 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 10:16:59.152749 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Sep 12 10:16:59.157909 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 10:16:59.162375 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 10:16:59.261786 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 10:16:59.275561 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 10:16:59.276460 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 10:16:59.288512 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:59.304401 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 10:16:59.304968 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 10:16:59.351146 ignition[931]: INFO : Ignition 2.20.0 Sep 12 10:16:59.351146 ignition[931]: INFO : Stage: mount Sep 12 10:16:59.353310 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:59.353310 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:16:59.353310 ignition[931]: INFO : mount: mount passed Sep 12 10:16:59.353310 ignition[931]: INFO : Ignition finished successfully Sep 12 10:16:59.356792 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 10:16:59.363615 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 10:16:59.373705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:16:59.386275 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (944) Sep 12 10:16:59.386302 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:16:59.387263 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:16:59.387277 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:16:59.390521 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:16:59.391840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 10:16:59.426214 ignition[961]: INFO : Ignition 2.20.0 Sep 12 10:16:59.426214 ignition[961]: INFO : Stage: files Sep 12 10:16:59.428220 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:16:59.428220 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:16:59.430746 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Sep 12 10:16:59.432250 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 10:16:59.432250 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 10:16:59.436562 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 10:16:59.438047 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 10:16:59.439862 unknown[961]: wrote ssh authorized keys file for user: core Sep 12 10:16:59.441016 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 10:16:59.443079 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 10:16:59.444859 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 10:16:59.535222 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 10:16:59.894320 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 10:16:59.894320 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:16:59.898960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 10:16:59.991806 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 10:17:00.107791 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:17:00.107791 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:17:00.111434 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 10:17:00.360873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 10:17:00.480955 systemd-networkd[784]: eth0: Gained IPv6LL Sep 12 10:17:01.077725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 10:17:01.077725 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 10:17:01.081650 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 10:17:01.129036 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 10:17:01.136002 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 10:17:01.137743 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 10:17:01.137743 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 10:17:01.137743 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 10:17:01.137743 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:17:01.137743 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:17:01.137743 ignition[961]: INFO : files: files passed Sep 12 10:17:01.137743 ignition[961]: INFO : Ignition finished successfully Sep 12 10:17:01.150132 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 10:17:01.159633 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 10:17:01.162130 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 10:17:01.164765 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 10:17:01.164879 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 10:17:01.178630 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 10:17:01.182260 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:17:01.182260 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:17:01.185468 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:17:01.189762 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 10:17:01.190110 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 10:17:01.203665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 10:17:01.229708 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 10:17:01.229833 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 10:17:01.232039 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 10:17:01.234020 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 10:17:01.236030 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 10:17:01.237109 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 10:17:01.259735 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 10:17:01.272845 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 10:17:01.285049 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:17:01.285220 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:17:01.285586 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 12 10:17:01.285882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 10:17:01.286007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 10:17:01.286685 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 10:17:01.286990 systemd[1]: Stopped target basic.target - Basic System. Sep 12 10:17:01.287303 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 10:17:01.287808 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:17:01.288121 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 10:17:01.288451 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 10:17:01.329291 ignition[1017]: INFO : Ignition 2.20.0 Sep 12 10:17:01.329291 ignition[1017]: INFO : Stage: umount Sep 12 10:17:01.329291 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:01.329291 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:01.329291 ignition[1017]: INFO : umount: umount passed Sep 12 10:17:01.329291 ignition[1017]: INFO : Ignition finished successfully Sep 12 10:17:01.288769 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:17:01.289091 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 10:17:01.289411 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 10:17:01.289779 systemd[1]: Stopped target swap.target - Swaps. Sep 12 10:17:01.290017 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 10:17:01.290131 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:17:01.290859 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:17:01.291188 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 12 10:17:01.291490 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 10:17:01.291620 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:17:01.291964 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 10:17:01.292079 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 10:17:01.293076 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 10:17:01.293189 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:17:01.293540 systemd[1]: Stopped target paths.target - Path Units. Sep 12 10:17:01.293813 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 10:17:01.297572 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:17:01.297915 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 10:17:01.298218 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 10:17:01.298589 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 10:17:01.298721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:17:01.299204 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 10:17:01.299316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:17:01.299875 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 10:17:01.300049 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 10:17:01.300518 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 10:17:01.300670 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 10:17:01.302200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 10:17:01.302450 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 12 10:17:01.302629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:17:01.303951 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 10:17:01.304197 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 10:17:01.304347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:17:01.304827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 10:17:01.304975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:17:01.310734 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 10:17:01.310896 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 10:17:01.331913 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 10:17:01.332097 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 10:17:01.334459 systemd[1]: Stopped target network.target - Network. Sep 12 10:17:01.335773 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 10:17:01.335857 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 10:17:01.337628 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 10:17:01.337705 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 10:17:01.339907 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 10:17:01.339981 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 10:17:01.341871 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 10:17:01.341937 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 10:17:01.344175 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 10:17:01.345860 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 10:17:01.349034 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 12 10:17:01.355989 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 10:17:01.357138 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 10:17:01.369842 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 10:17:01.370226 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 10:17:01.370418 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 10:17:01.374122 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 10:17:01.375770 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 10:17:01.375879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:17:01.389641 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 10:17:01.390637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 10:17:01.390735 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:17:01.392809 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:17:01.392875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:17:01.394790 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 10:17:01.394860 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 10:17:01.396819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 10:17:01.396888 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:17:01.399824 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:17:01.403153 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 10:17:01.403250 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 10:17:01.415082 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 10:17:01.415268 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 10:17:01.417735 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 10:17:01.417999 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:17:01.420777 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 10:17:01.420934 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 10:17:01.423091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 10:17:01.423147 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:17:01.425025 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 10:17:01.425101 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:17:01.427712 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 10:17:01.427784 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 10:17:01.429550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:17:01.429620 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:17:01.444879 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 10:17:01.446583 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 10:17:01.446700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:17:01.449741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:17:01.449834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:17:01.454076 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Sep 12 10:17:01.454176 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:17:01.455530 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 10:17:01.455686 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 10:17:01.569516 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 10:17:01.569665 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 10:17:01.571740 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 10:17:01.573475 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 10:17:01.573548 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 10:17:01.587634 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 10:17:01.595602 systemd[1]: Switching root. Sep 12 10:17:01.627684 systemd-journald[192]: Journal stopped Sep 12 10:17:03.266268 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 12 10:17:03.266347 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 10:17:03.266368 kernel: SELinux: policy capability open_perms=1 Sep 12 10:17:03.266380 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 10:17:03.266391 kernel: SELinux: policy capability always_check_network=0 Sep 12 10:17:03.266403 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 10:17:03.266415 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 10:17:03.266427 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 10:17:03.266446 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 10:17:03.266465 kernel: audit: type=1403 audit(1757672222.219:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 10:17:03.266490 systemd[1]: Successfully loaded SELinux policy in 43.783ms. Sep 12 10:17:03.266525 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.930ms. 
Sep 12 10:17:03.266539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:17:03.266552 systemd[1]: Detected virtualization kvm. Sep 12 10:17:03.266569 systemd[1]: Detected architecture x86-64. Sep 12 10:17:03.266582 systemd[1]: Detected first boot. Sep 12 10:17:03.266594 systemd[1]: Initializing machine ID from VM UUID. Sep 12 10:17:03.266614 zram_generator::config[1063]: No configuration found. Sep 12 10:17:03.266628 kernel: Guest personality initialized and is inactive Sep 12 10:17:03.266640 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 10:17:03.266652 kernel: Initialized host personality Sep 12 10:17:03.266664 kernel: NET: Registered PF_VSOCK protocol family Sep 12 10:17:03.266676 systemd[1]: Populated /etc with preset unit settings. Sep 12 10:17:03.266689 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 10:17:03.266702 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 10:17:03.266720 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 10:17:03.266733 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 10:17:03.266746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 10:17:03.266758 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 10:17:03.266772 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 10:17:03.266784 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 10:17:03.266797 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Sep 12 10:17:03.267145 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 10:17:03.267158 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 10:17:03.267176 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 10:17:03.267189 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:17:03.267202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:17:03.267216 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 10:17:03.267228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 10:17:03.267241 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 10:17:03.267254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:17:03.267273 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 10:17:03.267286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:17:03.267299 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 10:17:03.267312 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 10:17:03.267333 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 10:17:03.267346 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 10:17:03.267359 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:17:03.267372 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:17:03.267386 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:17:03.267404 systemd[1]: Reached target swap.target - Swaps. 
Sep 12 10:17:03.267416 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 10:17:03.267429 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 10:17:03.267448 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 10:17:03.267461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:17:03.267474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:17:03.267499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:17:03.267512 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 10:17:03.267525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 10:17:03.267538 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 10:17:03.267556 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 10:17:03.267570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:17:03.267583 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 10:17:03.267596 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 10:17:03.267608 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 10:17:03.267622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 10:17:03.267635 systemd[1]: Reached target machines.target - Containers. Sep 12 10:17:03.267647 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 10:17:03.267665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 12 10:17:03.267678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:17:03.267691 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 10:17:03.267708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:17:03.267721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 10:17:03.267734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:17:03.267753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 10:17:03.267766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:17:03.267784 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 10:17:03.267798 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 10:17:03.267811 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 10:17:03.267823 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 10:17:03.267836 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 10:17:03.267849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:17:03.267862 kernel: fuse: init (API version 7.39) Sep 12 10:17:03.267874 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:17:03.267886 kernel: loop: module loaded Sep 12 10:17:03.267906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:17:03.267919 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 12 10:17:03.267932 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 10:17:03.267945 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 10:17:03.267963 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:17:03.267976 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 10:17:03.267988 systemd[1]: Stopped verity-setup.service. Sep 12 10:17:03.268001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:17:03.268013 kernel: ACPI: bus type drm_connector registered Sep 12 10:17:03.268025 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 10:17:03.268044 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 10:17:03.268056 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 10:17:03.268069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 10:17:03.268087 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 10:17:03.268100 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 10:17:03.268113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 10:17:03.268125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:17:03.268138 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 10:17:03.268170 systemd-journald[1134]: Collecting audit messages is disabled. Sep 12 10:17:03.268201 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 10:17:03.268214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 10:17:03.268227 systemd-journald[1134]: Journal started Sep 12 10:17:03.268249 systemd-journald[1134]: Runtime Journal (/run/log/journal/b34adebb979b4834a95b30791d4aa5ea) is 6M, max 48M, 42M free. Sep 12 10:17:02.913252 systemd[1]: Queued start job for default target multi-user.target. Sep 12 10:17:02.929035 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 10:17:02.929703 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 10:17:03.269341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:17:03.272281 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:17:03.273737 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 10:17:03.274024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 10:17:03.275723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:17:03.275984 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:17:03.277565 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 10:17:03.277796 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 10:17:03.279202 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:17:03.279499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:17:03.281334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:17:03.283047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 10:17:03.284941 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 10:17:03.287056 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 10:17:03.307473 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 12 10:17:03.316574 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 10:17:03.318948 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 10:17:03.320260 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 10:17:03.320301 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:17:03.322635 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 10:17:03.325128 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 10:17:03.327667 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 10:17:03.328859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:17:03.332308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 10:17:03.335835 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 10:17:03.337200 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 10:17:03.339393 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 10:17:03.340571 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 10:17:03.346474 systemd-journald[1134]: Time spent on flushing to /var/log/journal/b34adebb979b4834a95b30791d4aa5ea is 25.920ms for 1031 entries. Sep 12 10:17:03.346474 systemd-journald[1134]: System Journal (/var/log/journal/b34adebb979b4834a95b30791d4aa5ea) is 8M, max 195.6M, 187.6M free. Sep 12 10:17:03.384893 systemd-journald[1134]: Received client request to flush runtime journal. 
Sep 12 10:17:03.385292 kernel: loop0: detected capacity change from 0 to 147912
Sep 12 10:17:03.346514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:17:03.351813 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 10:17:03.357093 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 10:17:03.360293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:17:03.365713 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 10:17:03.367156 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 10:17:03.368928 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 10:17:03.377758 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 10:17:03.384354 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 10:17:03.391813 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 10:17:03.467582 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 10:17:03.474893 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 10:17:03.478988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:17:03.487545 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 10:17:03.490700 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 10:17:03.549065 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 10:17:03.559677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 10:17:03.562505 kernel: loop1: detected capacity change from 0 to 138176
Sep 12 10:17:03.596258 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Sep 12 10:17:03.596279 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Sep 12 10:17:03.604898 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:17:03.618515 kernel: loop2: detected capacity change from 0 to 224512
Sep 12 10:17:03.700507 kernel: loop3: detected capacity change from 0 to 147912
Sep 12 10:17:03.714412 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 10:17:03.716705 kernel: loop4: detected capacity change from 0 to 138176
Sep 12 10:17:03.728510 kernel: loop5: detected capacity change from 0 to 224512
Sep 12 10:17:03.736963 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 10:17:03.737675 (sd-merge)[1207]: Merged extensions into '/usr'.
Sep 12 10:17:03.743194 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 10:17:03.743212 systemd[1]: Reloading...
Sep 12 10:17:03.816512 zram_generator::config[1239]: No configuration found.
Sep 12 10:17:03.889189 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 10:17:03.951345 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:17:04.018225 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 10:17:04.019026 systemd[1]: Reloading finished in 275 ms.
Sep 12 10:17:04.045016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 10:17:04.046714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 10:17:04.067422 systemd[1]: Starting ensure-sysext.service...
Sep 12 10:17:04.070069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 10:17:04.142247 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 10:17:04.142572 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 10:17:04.143662 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 10:17:04.143973 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 12 10:17:04.144077 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 12 10:17:04.149144 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:17:04.149158 systemd-tmpfiles[1274]: Skipping /boot
Sep 12 10:17:04.149715 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Sep 12 10:17:04.149737 systemd[1]: Reloading...
Sep 12 10:17:04.164499 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:17:04.164516 systemd-tmpfiles[1274]: Skipping /boot
Sep 12 10:17:04.209145 zram_generator::config[1304]: No configuration found.
Sep 12 10:17:04.331694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:17:04.397542 systemd[1]: Reloading finished in 247 ms.
Sep 12 10:17:04.409930 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 10:17:04.430199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:17:04.440954 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:17:04.444123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 10:17:04.446856 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 10:17:04.450842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 10:17:04.454132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:17:04.457061 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 10:17:04.460984 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:17:04.461165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:17:04.464673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:17:04.467146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:17:04.470595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:17:04.470793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:17:04.470904 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:17:04.473632 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 10:17:04.474698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:17:04.480117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:17:04.480435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:17:04.482348 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:17:04.482641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:17:04.489114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:17:04.489370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:17:04.496016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:17:04.496387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:17:04.510516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:17:04.513851 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Sep 12 10:17:04.515123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:17:04.522112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:17:04.523303 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:17:04.523421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:17:04.523562 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:17:04.525069 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 10:17:04.527403 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 10:17:04.529403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:17:04.529646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:17:04.529708 augenrules[1377]: No rules
Sep 12 10:17:04.531724 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:17:04.531982 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:17:04.533723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:17:04.533941 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:17:04.535828 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:17:04.536102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:17:04.544972 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:17:04.555693 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 10:17:04.557756 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 10:17:04.573312 systemd[1]: Finished ensure-sysext.service.
Sep 12 10:17:04.582830 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:17:04.591669 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:17:04.594475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:17:04.595803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:17:04.600953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:17:04.606019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:17:04.612677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:17:04.613885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:17:04.613937 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:17:04.617734 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 10:17:04.622662 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 10:17:04.634710 augenrules[1414]: /sbin/augenrules: No change
Sep 12 10:17:04.635710 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 10:17:04.637220 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 10:17:04.637261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:17:04.638443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:17:04.639455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:17:04.643087 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:17:04.643414 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:17:04.645751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:17:04.646596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:17:04.648356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:17:04.648625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:17:04.650453 augenrules[1443]: No rules
Sep 12 10:17:04.652628 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:17:04.652918 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:17:04.662510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1407)
Sep 12 10:17:04.668543 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 10:17:04.673377 systemd-resolved[1345]: Positive Trust Anchors:
Sep 12 10:17:04.673399 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 10:17:04.673432 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 10:17:04.678252 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:17:04.678360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:17:04.679720 systemd-resolved[1345]: Defaulting to hostname 'linux'.
Sep 12 10:17:04.681698 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 10:17:04.700233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:17:04.711920 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 10:17:04.724685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 10:17:04.731396 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 10:17:04.743520 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 10:17:04.746256 systemd-networkd[1425]: lo: Link UP
Sep 12 10:17:04.746270 systemd-networkd[1425]: lo: Gained carrier
Sep 12 10:17:04.748346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 10:17:04.748948 systemd-networkd[1425]: Enumeration completed
Sep 12 10:17:04.749351 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:17:04.749362 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 10:17:04.749908 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 10:17:04.750313 systemd-networkd[1425]: eth0: Link UP
Sep 12 10:17:04.750323 systemd-networkd[1425]: eth0: Gained carrier
Sep 12 10:17:04.750337 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:17:04.752296 systemd[1]: Reached target network.target - Network.
Sep 12 10:17:04.759602 kernel: ACPI: button: Power Button [PWRF]
Sep 12 10:17:04.763729 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 10:17:04.764568 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 10:17:04.766019 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
Sep 12 10:17:05.803147 systemd-resolved[1345]: Clock change detected. Flushing caches.
Sep 12 10:17:05.804815 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 10:17:05.805714 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 10:17:05.805777 systemd-timesyncd[1427]: Initial clock synchronization to Fri 2025-09-12 10:17:05.803098 UTC.
Sep 12 10:17:05.808598 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 10:17:05.814674 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 10:17:05.813507 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 10:17:05.826316 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 10:17:05.826786 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 10:17:05.827046 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 10:17:05.827342 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 10:17:05.836535 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 10:17:05.918662 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 10:17:05.923860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:17:05.932238 kernel: kvm_amd: TSC scaling supported
Sep 12 10:17:05.932289 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 10:17:05.932303 kernel: kvm_amd: Nested Paging enabled
Sep 12 10:17:05.932981 kernel: kvm_amd: LBR virtualization supported
Sep 12 10:17:05.933015 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 10:17:05.933960 kernel: kvm_amd: Virtual GIF supported
Sep 12 10:17:05.935018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:17:05.935551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:17:05.944266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:17:05.961650 kernel: EDAC MC: Ver: 3.0.0
Sep 12 10:17:05.996323 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 10:17:05.998099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:17:06.014803 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 10:17:06.025014 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 10:17:06.060668 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 10:17:06.062234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:17:06.063400 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 10:17:06.064658 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 10:17:06.065940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 10:17:06.067400 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 10:17:06.068628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 10:17:06.070045 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 10:17:06.071331 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 10:17:06.071367 systemd[1]: Reached target paths.target - Path Units.
Sep 12 10:17:06.072308 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 10:17:06.074426 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 10:17:06.077145 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 10:17:06.081010 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 10:17:06.082515 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 10:17:06.083856 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 10:17:06.087836 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 10:17:06.089310 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 10:17:06.091811 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 10:17:06.093588 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 10:17:06.094811 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 10:17:06.095878 systemd[1]: Reached target basic.target - Basic System.
Sep 12 10:17:06.096927 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 10:17:06.096959 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 10:17:06.098042 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 10:17:06.100328 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 10:17:06.102760 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 10:17:06.104135 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 10:17:06.106809 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 10:17:06.107920 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 10:17:06.111377 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 10:17:06.114550 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 10:17:06.116887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 10:17:06.119645 jq[1488]: false
Sep 12 10:17:06.121544 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 10:17:06.127975 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 10:17:06.129943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 10:17:06.130452 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 10:17:06.135460 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 10:17:06.137748 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 10:17:06.140574 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 10:17:06.142526 extend-filesystems[1489]: Found loop3
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found loop4
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found loop5
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found sr0
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda1
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda2
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda3
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found usr
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda4
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda6
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda7
Sep 12 10:17:06.143570 extend-filesystems[1489]: Found vda9
Sep 12 10:17:06.175738 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 10:17:06.168208 dbus-daemon[1487]: [system] SELinux support is enabled
Sep 12 10:17:06.182824 update_engine[1501]: I20250912 10:17:06.166602 1501 main.cc:92] Flatcar Update Engine starting
Sep 12 10:17:06.182824 update_engine[1501]: I20250912 10:17:06.181080 1501 update_check_scheduler.cc:74] Next update check in 11m3s
Sep 12 10:17:06.143603 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 10:17:06.183127 extend-filesystems[1489]: Checking size of /dev/vda9
Sep 12 10:17:06.183127 extend-filesystems[1489]: Resized partition /dev/vda9
Sep 12 10:17:06.144047 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 10:17:06.188237 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024)
Sep 12 10:17:06.146074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 10:17:06.190540 jq[1504]: true
Sep 12 10:17:06.146327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 10:17:06.151496 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 10:17:06.192730 tar[1506]: linux-amd64/LICENSE
Sep 12 10:17:06.192730 tar[1506]: linux-amd64/helm
Sep 12 10:17:06.151863 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 10:17:06.169640 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 10:17:06.179985 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 10:17:06.181800 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 10:17:06.181844 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 10:17:06.186822 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 10:17:06.187768 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 10:17:06.192204 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 10:17:06.199170 jq[1513]: true
Sep 12 10:17:06.199801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1398)
Sep 12 10:17:06.199842 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 10:17:06.207760 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 10:17:06.246029 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 10:17:06.246029 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 10:17:06.246029 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 10:17:06.253844 extend-filesystems[1489]: Resized filesystem in /dev/vda9
Sep 12 10:17:06.248177 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 10:17:06.248559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 10:17:06.258584 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 10:17:06.260502 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 10:17:06.263360 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 10:17:06.268814 systemd-logind[1497]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 10:17:06.268847 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 10:17:06.271766 systemd-logind[1497]: New seat seat0.
Sep 12 10:17:06.273890 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 10:17:06.280563 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 10:17:06.306668 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 10:17:06.335895 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 10:17:06.342849 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 10:17:06.352994 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 10:17:06.353395 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 10:17:06.362335 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 10:17:06.373940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 10:17:06.384145 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 10:17:06.386761 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 10:17:06.388529 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 10:17:06.400420 containerd[1514]: time="2025-09-12T10:17:06.400297062Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 12 10:17:06.423369 containerd[1514]: time="2025-09-12T10:17:06.423302103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425188 containerd[1514]: time="2025-09-12T10:17:06.425148957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425188 containerd[1514]: time="2025-09-12T10:17:06.425179264Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 10:17:06.425278 containerd[1514]: time="2025-09-12T10:17:06.425195064Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 10:17:06.425413 containerd[1514]: time="2025-09-12T10:17:06.425382135Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 10:17:06.425443 containerd[1514]: time="2025-09-12T10:17:06.425411590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425500 containerd[1514]: time="2025-09-12T10:17:06.425481100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425500 containerd[1514]: time="2025-09-12T10:17:06.425497050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425811 containerd[1514]: time="2025-09-12T10:17:06.425782065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425811 containerd[1514]: time="2025-09-12T10:17:06.425801521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425853 containerd[1514]: time="2025-09-12T10:17:06.425814766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425853 containerd[1514]: time="2025-09-12T10:17:06.425825286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.425949 containerd[1514]: time="2025-09-12T10:17:06.425930363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.426224 containerd[1514]: time="2025-09-12T10:17:06.426195490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:17:06.426390 containerd[1514]: time="2025-09-12T10:17:06.426365028Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:17:06.426390 containerd[1514]: time="2025-09-12T10:17:06.426382481Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 10:17:06.426526 containerd[1514]: time="2025-09-12T10:17:06.426506504Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 10:17:06.426599 containerd[1514]: time="2025-09-12T10:17:06.426580473Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 10:17:06.433224 containerd[1514]: time="2025-09-12T10:17:06.433188778Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 10:17:06.433263 containerd[1514]: time="2025-09-12T10:17:06.433237760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 10:17:06.433297 containerd[1514]: time="2025-09-12T10:17:06.433263378Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 10:17:06.433358 containerd[1514]: time="2025-09-12T10:17:06.433337136Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 10:17:06.433388 containerd[1514]: time="2025-09-12T10:17:06.433363586Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 10:17:06.433541 containerd[1514]: time="2025-09-12T10:17:06.433509610Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 10:17:06.433829 containerd[1514]: time="2025-09-12T10:17:06.433797339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 10:17:06.433961 containerd[1514]: time="2025-09-12T10:17:06.433940007Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 10:17:06.433983 containerd[1514]: time="2025-09-12T10:17:06.433960095Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 10:17:06.433983 containerd[1514]: time="2025-09-12T10:17:06.433976736Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 10:17:06.434021 containerd[1514]: time="2025-09-12T10:17:06.433993117Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434021 containerd[1514]: time="2025-09-12T10:17:06.434009016Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434069 containerd[1514]: time="2025-09-12T10:17:06.434021069Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434069 containerd[1514]: time="2025-09-12T10:17:06.434036027Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434069 containerd[1514]: time="2025-09-12T10:17:06.434050063Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434069 containerd[1514]: time="2025-09-12T10:17:06.434062216Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434139 containerd[1514]: time="2025-09-12T10:17:06.434074098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434139 containerd[1514]: time="2025-09-12T10:17:06.434085800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 10:17:06.434139 containerd[1514]: time="2025-09-12T10:17:06.434104706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434139 containerd[1514]: time="2025-09-12T10:17:06.434119273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434139 containerd[1514]: time="2025-09-12T10:17:06.434131306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434143719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434157114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434169598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434180508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434192030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434204453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434218018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434229159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434235 containerd[1514]: time="2025-09-12T10:17:06.434240260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434252333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434266580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434284824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434296977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434312075Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434372378Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434387637Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 10:17:06.434405 containerd[1514]: time="2025-09-12T10:17:06.434405801Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 10:17:06.434561 containerd[1514]: time="2025-09-12T10:17:06.434417743Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 10:17:06.434561 containerd[1514]: time="2025-09-12T10:17:06.434427181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434561 containerd[1514]: time="2025-09-12T10:17:06.434438262Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 10:17:06.434561 containerd[1514]: time="2025-09-12T10:17:06.434447880Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 10:17:06.434561 containerd[1514]: time="2025-09-12T10:17:06.434457898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 10:17:06.434774 containerd[1514]: time="2025-09-12T10:17:06.434731021Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 10:17:06.434774 containerd[1514]: time="2025-09-12T10:17:06.434774031Z" level=info msg="Connect containerd service"
Sep 12 10:17:06.434944 containerd[1514]: time="2025-09-12T10:17:06.434811371Z" level=info msg="using legacy CRI server"
Sep 12 10:17:06.434944 containerd[1514]: time="2025-09-12T10:17:06.434819156Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 10:17:06.434944 containerd[1514]: time="2025-09-12T10:17:06.434916619Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 10:17:06.436321 containerd[1514]: time="2025-09-12T10:17:06.436235823Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 10:17:06.436488 containerd[1514]: time="2025-09-12T10:17:06.436425429Z" level=info msg="Start subscribing containerd event"
Sep 12 10:17:06.436488 containerd[1514]: time="2025-09-12T10:17:06.436472637Z" level=info msg="Start recovering state"
Sep 12 10:17:06.436586 containerd[1514]: time="2025-09-12T10:17:06.436561013Z" level=info msg="Start event monitor"
Sep 12 10:17:06.436586 containerd[1514]: time="2025-09-12T10:17:06.436584247Z" level=info msg="Start snapshots syncer"
Sep 12 10:17:06.436638 containerd[1514]: time="2025-09-12T10:17:06.436594767Z" level=info msg="Start cni network conf syncer for default"
Sep 12 10:17:06.436638 containerd[1514]: time="2025-09-12T10:17:06.436609164Z" level=info msg="Start streaming server"
Sep 12 10:17:06.436781 containerd[1514]: time="2025-09-12T10:17:06.436743055Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 10:17:06.436842 containerd[1514]: time="2025-09-12T10:17:06.436823686Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 10:17:06.437107 containerd[1514]: time="2025-09-12T10:17:06.436894068Z" level=info msg="containerd successfully booted in 0.037969s"
Sep 12 10:17:06.436989 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 10:17:06.788429 tar[1506]: linux-amd64/README.md
Sep 12 10:17:06.807557 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 10:17:07.468843 systemd-networkd[1425]: eth0: Gained IPv6LL
Sep 12 10:17:07.472333 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 10:17:07.474299 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 10:17:07.492999 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 10:17:07.495892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:17:07.498115 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 10:17:07.516843 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 12 10:17:07.517136 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 12 10:17:07.518677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 10:17:07.524288 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 10:17:08.822231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:17:08.824012 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 10:17:08.825376 systemd[1]: Startup finished in 1.479s (kernel) + 6.415s (initrd) + 5.611s (userspace) = 13.506s.
Sep 12 10:17:08.828456 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:17:09.497830 kubelet[1601]: E0912 10:17:09.497680 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:17:09.502119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:17:09.502355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:17:09.502817 systemd[1]: kubelet.service: Consumed 1.849s CPU time, 265.7M memory peak.
Sep 12 10:17:10.097029 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 10:17:10.112040 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:48544.service - OpenSSH per-connection server daemon (10.0.0.1:48544).
Sep 12 10:17:10.162480 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 48544 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:10.164627 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:10.176180 systemd-logind[1497]: New session 1 of user core.
Sep 12 10:17:10.177560 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 10:17:10.191009 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 10:17:10.206903 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 10:17:10.227965 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 10:17:10.232010 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 10:17:10.234507 systemd-logind[1497]: New session c1 of user core.
Sep 12 10:17:10.503979 systemd[1618]: Queued start job for default target default.target.
Sep 12 10:17:10.515191 systemd[1618]: Created slice app.slice - User Application Slice.
Sep 12 10:17:10.515223 systemd[1618]: Reached target paths.target - Paths.
Sep 12 10:17:10.515271 systemd[1618]: Reached target timers.target - Timers.
Sep 12 10:17:10.517227 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 10:17:10.530154 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 10:17:10.530323 systemd[1618]: Reached target sockets.target - Sockets.
Sep 12 10:17:10.530375 systemd[1618]: Reached target basic.target - Basic System.
Sep 12 10:17:10.530420 systemd[1618]: Reached target default.target - Main User Target.
Sep 12 10:17:10.530456 systemd[1618]: Startup finished in 287ms.
Sep 12 10:17:10.531140 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 10:17:10.533336 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 10:17:10.599525 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:48552.service - OpenSSH per-connection server daemon (10.0.0.1:48552).
Sep 12 10:17:10.658376 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 48552 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:10.660060 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:10.664891 systemd-logind[1497]: New session 2 of user core.
Sep 12 10:17:10.674773 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 10:17:10.730571 sshd[1631]: Connection closed by 10.0.0.1 port 48552
Sep 12 10:17:10.730996 sshd-session[1629]: pam_unix(sshd:session): session closed for user core
Sep 12 10:17:10.744854 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:48552.service: Deactivated successfully.
Sep 12 10:17:10.747774 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 10:17:10.750009 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit.
Sep 12 10:17:10.764245 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:48558.service - OpenSSH per-connection server daemon (10.0.0.1:48558).
Sep 12 10:17:10.765710 systemd-logind[1497]: Removed session 2.
Sep 12 10:17:10.801723 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 48558 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:10.803366 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:10.809005 systemd-logind[1497]: New session 3 of user core.
Sep 12 10:17:10.826948 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 10:17:10.879656 sshd[1639]: Connection closed by 10.0.0.1 port 48558
Sep 12 10:17:10.880218 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
Sep 12 10:17:10.889635 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:48558.service: Deactivated successfully.
Sep 12 10:17:10.891793 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 10:17:10.893593 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit.
Sep 12 10:17:10.903929 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:48574.service - OpenSSH per-connection server daemon (10.0.0.1:48574).
Sep 12 10:17:10.905014 systemd-logind[1497]: Removed session 3.
Sep 12 10:17:10.943286 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 48574 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:10.944850 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:10.949725 systemd-logind[1497]: New session 4 of user core.
Sep 12 10:17:10.958771 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 10:17:11.013476 sshd[1647]: Connection closed by 10.0.0.1 port 48574
Sep 12 10:17:11.013912 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Sep 12 10:17:11.023572 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:48574.service: Deactivated successfully.
Sep 12 10:17:11.025664 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 10:17:11.027556 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit.
Sep 12 10:17:11.037869 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:48576.service - OpenSSH per-connection server daemon (10.0.0.1:48576).
Sep 12 10:17:11.038974 systemd-logind[1497]: Removed session 4.
Sep 12 10:17:11.075811 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 48576 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:11.077338 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:11.082394 systemd-logind[1497]: New session 5 of user core.
Sep 12 10:17:11.091757 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 10:17:11.152025 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 10:17:11.152401 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:17:11.168972 sudo[1656]: pam_unix(sudo:session): session closed for user root
Sep 12 10:17:11.170909 sshd[1655]: Connection closed by 10.0.0.1 port 48576
Sep 12 10:17:11.171345 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Sep 12 10:17:11.180726 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:48576.service: Deactivated successfully.
Sep 12 10:17:11.182754 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 10:17:11.184471 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit.
Sep 12 10:17:11.199908 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:48584.service - OpenSSH per-connection server daemon (10.0.0.1:48584).
Sep 12 10:17:11.200967 systemd-logind[1497]: Removed session 5.
Sep 12 10:17:11.236313 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 48584 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:11.237900 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:11.242716 systemd-logind[1497]: New session 6 of user core.
Sep 12 10:17:11.256794 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 10:17:11.312149 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 10:17:11.312504 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:17:11.317131 sudo[1666]: pam_unix(sudo:session): session closed for user root
Sep 12 10:17:11.324028 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 10:17:11.324384 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:17:11.343902 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:17:11.379241 augenrules[1688]: No rules
Sep 12 10:17:11.381577 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:17:11.382003 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:17:11.383298 sudo[1665]: pam_unix(sudo:session): session closed for user root
Sep 12 10:17:11.384974 sshd[1664]: Connection closed by 10.0.0.1 port 48584
Sep 12 10:17:11.385345 sshd-session[1661]: pam_unix(sshd:session): session closed for user core
Sep 12 10:17:11.398500 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:48584.service: Deactivated successfully.
Sep 12 10:17:11.401147 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 10:17:11.403315 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit.
Sep 12 10:17:11.413943 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:48592.service - OpenSSH per-connection server daemon (10.0.0.1:48592).
Sep 12 10:17:11.414995 systemd-logind[1497]: Removed session 6.
Sep 12 10:17:11.450521 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 48592 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:17:11.452160 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:17:11.457286 systemd-logind[1497]: New session 7 of user core.
Sep 12 10:17:11.466766 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 10:17:11.521910 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 10:17:11.522277 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:17:12.347980 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 10:17:12.348079 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 10:17:12.844812 dockerd[1720]: time="2025-09-12T10:17:12.844574956Z" level=info msg="Starting up"
Sep 12 10:17:13.577859 dockerd[1720]: time="2025-09-12T10:17:13.577800177Z" level=info msg="Loading containers: start."
Sep 12 10:17:13.799644 kernel: Initializing XFRM netlink socket
Sep 12 10:17:13.901850 systemd-networkd[1425]: docker0: Link UP
Sep 12 10:17:13.946451 dockerd[1720]: time="2025-09-12T10:17:13.946387621Z" level=info msg="Loading containers: done."
Sep 12 10:17:13.961880 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck215524099-merged.mount: Deactivated successfully.
Sep 12 10:17:13.963629 dockerd[1720]: time="2025-09-12T10:17:13.963561705Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 10:17:13.963731 dockerd[1720]: time="2025-09-12T10:17:13.963704773Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 12 10:17:13.963936 dockerd[1720]: time="2025-09-12T10:17:13.963902995Z" level=info msg="Daemon has completed initialization"
Sep 12 10:17:14.005766 dockerd[1720]: time="2025-09-12T10:17:14.005680758Z" level=info msg="API listen on /run/docker.sock"
Sep 12 10:17:14.005879 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 10:17:14.999032 containerd[1514]: time="2025-09-12T10:17:14.998955528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 10:17:15.677322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779047033.mount: Deactivated successfully.
Sep 12 10:17:16.883486 containerd[1514]: time="2025-09-12T10:17:16.883419474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:17:16.884012 containerd[1514]: time="2025-09-12T10:17:16.883946462Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 10:17:16.885101 containerd[1514]: time="2025-09-12T10:17:16.885072114Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:17:16.888251 containerd[1514]: time="2025-09-12T10:17:16.888207074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:17:16.889236 containerd[1514]: time="2025-09-12T10:17:16.889205066Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.890188764s"
Sep 12 10:17:16.889270 containerd[1514]: time="2025-09-12T10:17:16.889243959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 10:17:16.890252 containerd[1514]: time="2025-09-12T10:17:16.890226562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 10:17:18.767862 containerd[1514]: time="2025-09-12T10:17:18.767761861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:17:18.768972 containerd[1514]: time="2025-09-12T10:17:18.768924512Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 10:17:18.770676 containerd[1514]: time="2025-09-12T10:17:18.770609072Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:17:18.774732 containerd[1514]: time="2025-09-12T10:17:18.774668667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:17:18.776116 containerd[1514]: time="2025-09-12T10:17:18.776041472Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.88578276s"
Sep 12 10:17:18.776116 containerd[1514]: time="2025-09-12T10:17:18.776103518Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 10:17:18.776752 containerd[1514]: time="2025-09-12T10:17:18.776713472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 10:17:19.546402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:17:19.555802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:17:19.794508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:17:19.799368 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:17:20.289026 kubelet[1987]: E0912 10:17:20.288864 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:17:20.295739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:17:20.295970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:17:20.296383 systemd[1]: kubelet.service: Consumed 335ms CPU time, 111M memory peak. Sep 12 10:17:21.600317 containerd[1514]: time="2025-09-12T10:17:21.600237486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:21.601117 containerd[1514]: time="2025-09-12T10:17:21.601071831Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 10:17:21.602336 containerd[1514]: time="2025-09-12T10:17:21.602283874Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:21.605072 containerd[1514]: time="2025-09-12T10:17:21.605042078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:21.606411 containerd[1514]: time="2025-09-12T10:17:21.606373616Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id 
\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.829476259s" Sep 12 10:17:21.606411 containerd[1514]: time="2025-09-12T10:17:21.606411376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 10:17:21.607088 containerd[1514]: time="2025-09-12T10:17:21.607052128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 10:17:22.840337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864964794.mount: Deactivated successfully. Sep 12 10:17:24.067648 containerd[1514]: time="2025-09-12T10:17:24.067567582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:24.068720 containerd[1514]: time="2025-09-12T10:17:24.068675129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 10:17:24.069982 containerd[1514]: time="2025-09-12T10:17:24.069944991Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:24.072459 containerd[1514]: time="2025-09-12T10:17:24.072423420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:24.073112 containerd[1514]: time="2025-09-12T10:17:24.073050937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.465798302s" Sep 12 10:17:24.073112 containerd[1514]: time="2025-09-12T10:17:24.073099068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 10:17:24.073637 containerd[1514]: time="2025-09-12T10:17:24.073587293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 10:17:24.627046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964156812.mount: Deactivated successfully. Sep 12 10:17:26.038912 containerd[1514]: time="2025-09-12T10:17:26.038773401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:26.040031 containerd[1514]: time="2025-09-12T10:17:26.039982008Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 10:17:26.041752 containerd[1514]: time="2025-09-12T10:17:26.041699720Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:26.049669 containerd[1514]: time="2025-09-12T10:17:26.049591683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:26.050767 containerd[1514]: time="2025-09-12T10:17:26.050721081Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.977101668s" Sep 12 10:17:26.050767 containerd[1514]: time="2025-09-12T10:17:26.050760104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 10:17:26.051308 containerd[1514]: time="2025-09-12T10:17:26.051221590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 10:17:26.605183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288541613.mount: Deactivated successfully. Sep 12 10:17:26.612122 containerd[1514]: time="2025-09-12T10:17:26.612056436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:26.613044 containerd[1514]: time="2025-09-12T10:17:26.612984256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 10:17:26.614474 containerd[1514]: time="2025-09-12T10:17:26.614426812Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:26.617269 containerd[1514]: time="2025-09-12T10:17:26.617208450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:26.618337 containerd[1514]: time="2025-09-12T10:17:26.618276262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 566.97345ms" Sep 12 
10:17:26.618337 containerd[1514]: time="2025-09-12T10:17:26.618331045Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 10:17:26.619073 containerd[1514]: time="2025-09-12T10:17:26.619015599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 10:17:27.411728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481393767.mount: Deactivated successfully. Sep 12 10:17:29.833732 containerd[1514]: time="2025-09-12T10:17:29.833646257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:29.834856 containerd[1514]: time="2025-09-12T10:17:29.834745910Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 10:17:29.836105 containerd[1514]: time="2025-09-12T10:17:29.836062820Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:29.839253 containerd[1514]: time="2025-09-12T10:17:29.839198221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:17:29.840585 containerd[1514]: time="2025-09-12T10:17:29.840534688Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.221478433s" Sep 12 10:17:29.840585 containerd[1514]: time="2025-09-12T10:17:29.840572008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 10:17:30.296379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 10:17:30.305894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:30.486224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:30.492533 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:17:30.549890 kubelet[2148]: E0912 10:17:30.549601 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:17:30.555266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:17:30.555564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:17:30.556100 systemd[1]: kubelet.service: Consumed 249ms CPU time, 112.1M memory peak. Sep 12 10:17:31.933739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:31.933914 systemd[1]: kubelet.service: Consumed 249ms CPU time, 112.1M memory peak. Sep 12 10:17:31.955834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:31.980144 systemd[1]: Reload requested from client PID 2164 ('systemctl') (unit session-7.scope)... Sep 12 10:17:31.980161 systemd[1]: Reloading... Sep 12 10:17:32.082659 zram_generator::config[2212]: No configuration found. Sep 12 10:17:33.781561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 12 10:17:33.890964 systemd[1]: Reloading finished in 1910 ms. Sep 12 10:17:33.950904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:33.955162 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:17:33.959677 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:33.961587 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:17:33.962048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:33.962117 systemd[1]: kubelet.service: Consumed 161ms CPU time, 100.2M memory peak. Sep 12 10:17:33.975979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:17:34.157929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:17:34.162367 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:17:34.203565 kubelet[2264]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:17:34.203565 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:17:34.203565 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 10:17:34.204016 kubelet[2264]: I0912 10:17:34.203680 2264 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:17:34.523684 kubelet[2264]: I0912 10:17:34.523520 2264 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 10:17:34.523684 kubelet[2264]: I0912 10:17:34.523560 2264 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:17:34.523897 kubelet[2264]: I0912 10:17:34.523881 2264 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 10:17:34.551849 kubelet[2264]: E0912 10:17:34.551788 2264 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:34.552093 kubelet[2264]: I0912 10:17:34.552060 2264 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:17:34.560230 kubelet[2264]: E0912 10:17:34.560187 2264 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:17:34.560230 kubelet[2264]: I0912 10:17:34.560221 2264 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:17:34.565951 kubelet[2264]: I0912 10:17:34.565914 2264 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:17:34.566220 kubelet[2264]: I0912 10:17:34.566176 2264 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:17:34.566414 kubelet[2264]: I0912 10:17:34.566207 2264 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:17:34.566570 kubelet[2264]: I0912 10:17:34.566423 2264 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 12 10:17:34.566570 kubelet[2264]: I0912 10:17:34.566432 2264 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 10:17:34.566632 kubelet[2264]: I0912 10:17:34.566582 2264 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:34.569628 kubelet[2264]: I0912 10:17:34.569594 2264 kubelet.go:446] "Attempting to sync node with API server" Sep 12 10:17:34.569691 kubelet[2264]: I0912 10:17:34.569637 2264 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:17:34.569691 kubelet[2264]: I0912 10:17:34.569677 2264 kubelet.go:352] "Adding apiserver pod source" Sep 12 10:17:34.569691 kubelet[2264]: I0912 10:17:34.569691 2264 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:17:34.575647 kubelet[2264]: W0912 10:17:34.575236 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 12 10:17:34.575647 kubelet[2264]: E0912 10:17:34.575296 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:34.576177 kubelet[2264]: I0912 10:17:34.576134 2264 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:17:34.576766 kubelet[2264]: I0912 10:17:34.576743 2264 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:17:34.577386 kubelet[2264]: W0912 10:17:34.577336 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 12 10:17:34.577448 kubelet[2264]: E0912 10:17:34.577383 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:34.577477 kubelet[2264]: W0912 10:17:34.577444 2264 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 10:17:34.579840 kubelet[2264]: I0912 10:17:34.579812 2264 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:17:34.579891 kubelet[2264]: I0912 10:17:34.579861 2264 server.go:1287] "Started kubelet" Sep 12 10:17:34.582056 kubelet[2264]: I0912 10:17:34.582021 2264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:17:34.583507 kubelet[2264]: I0912 10:17:34.583143 2264 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:17:34.584588 kubelet[2264]: I0912 10:17:34.583585 2264 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:17:34.584588 kubelet[2264]: I0912 10:17:34.583923 2264 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:17:34.584588 kubelet[2264]: I0912 10:17:34.584118 2264 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:17:34.584588 kubelet[2264]: I0912 10:17:34.584257 2264 server.go:479] "Adding debug handlers to kubelet server" Sep 12 10:17:34.585287 kubelet[2264]: I0912 10:17:34.585268 2264 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:17:34.585482 kubelet[2264]: E0912 10:17:34.585465 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:34.585896 kubelet[2264]: I0912 10:17:34.585880 2264 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:17:34.586069 kubelet[2264]: I0912 10:17:34.586056 2264 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:17:34.587599 kubelet[2264]: W0912 10:17:34.587540 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 12 10:17:34.587676 kubelet[2264]: E0912 10:17:34.587635 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:34.590408 kubelet[2264]: I0912 10:17:34.589517 2264 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:17:34.590523 kubelet[2264]: E0912 10:17:34.587400 2264 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864819a5dca7f74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 10:17:34.57983474 +0000 UTC 
m=+0.411770455,LastTimestamp:2025-09-12 10:17:34.57983474 +0000 UTC m=+0.411770455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 10:17:34.590766 kubelet[2264]: E0912 10:17:34.590742 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Sep 12 10:17:34.591779 kubelet[2264]: E0912 10:17:34.591750 2264 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:17:34.591969 kubelet[2264]: I0912 10:17:34.591947 2264 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:17:34.593535 kubelet[2264]: I0912 10:17:34.593509 2264 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:17:34.607665 kubelet[2264]: I0912 10:17:34.607597 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:17:34.608919 kubelet[2264]: I0912 10:17:34.608900 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 10:17:34.608962 kubelet[2264]: I0912 10:17:34.608937 2264 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 10:17:34.608994 kubelet[2264]: I0912 10:17:34.608970 2264 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 10:17:34.608994 kubelet[2264]: I0912 10:17:34.608979 2264 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 10:17:34.609061 kubelet[2264]: E0912 10:17:34.609040 2264 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:17:34.609752 kubelet[2264]: W0912 10:17:34.609723 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 12 10:17:34.609818 kubelet[2264]: E0912 10:17:34.609767 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:17:34.613377 kubelet[2264]: I0912 10:17:34.613348 2264 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:17:34.613377 kubelet[2264]: I0912 10:17:34.613367 2264 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:17:34.613461 kubelet[2264]: I0912 10:17:34.613386 2264 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:34.686789 kubelet[2264]: E0912 10:17:34.686704 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:34.709999 kubelet[2264]: E0912 10:17:34.709910 2264 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 10:17:34.787408 kubelet[2264]: E0912 10:17:34.787312 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:34.791969 kubelet[2264]: E0912 10:17:34.791930 2264 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Sep 12 10:17:34.888166 kubelet[2264]: E0912 10:17:34.888123 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:34.910402 kubelet[2264]: E0912 10:17:34.910339 2264 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 10:17:34.988856 kubelet[2264]: E0912 10:17:34.988792 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:35.089766 kubelet[2264]: E0912 10:17:35.089723 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:35.190279 kubelet[2264]: E0912 10:17:35.190233 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:35.192814 kubelet[2264]: E0912 10:17:35.192770 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Sep 12 10:17:35.291321 kubelet[2264]: E0912 10:17:35.291242 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:35.311533 kubelet[2264]: E0912 10:17:35.311465 2264 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 10:17:35.392196 kubelet[2264]: E0912 10:17:35.392025 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:35.413722 
kubelet[2264]: W0912 10:17:35.413661 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:35.413776 kubelet[2264]: E0912 10:17:35.413724 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:35.492429 kubelet[2264]: E0912 10:17:35.492355 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:35.593170 kubelet[2264]: E0912 10:17:35.593079 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:35.693994 kubelet[2264]: E0912 10:17:35.693836 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:35.794403 kubelet[2264]: E0912 10:17:35.794334 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:35.894928 kubelet[2264]: E0912 10:17:35.894870 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:35.962669 kubelet[2264]: W0912 10:17:35.962499 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:35.962669 kubelet[2264]: E0912 10:17:35.962560 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:35.984105 kubelet[2264]: W0912 10:17:35.984073 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:35.984166 kubelet[2264]: E0912 10:17:35.984100 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:35.993778 kubelet[2264]: E0912 10:17:35.993734 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s"
Sep 12 10:17:35.995993 kubelet[2264]: E0912 10:17:35.995953 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:36.075950 kubelet[2264]: W0912 10:17:36.075875 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:36.075950 kubelet[2264]: E0912 10:17:36.075935 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:36.096721 kubelet[2264]: E0912 10:17:36.096603 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:36.111855 kubelet[2264]: E0912 10:17:36.111792 2264 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 10:17:36.197403 kubelet[2264]: E0912 10:17:36.197333 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:36.298102 kubelet[2264]: E0912 10:17:36.298045 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:36.337853 kubelet[2264]: I0912 10:17:36.337794 2264 policy_none.go:49] "None policy: Start"
Sep 12 10:17:36.338073 kubelet[2264]: I0912 10:17:36.337903 2264 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 10:17:36.338073 kubelet[2264]: I0912 10:17:36.337947 2264 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 10:17:36.398728 kubelet[2264]: E0912 10:17:36.398661 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:36.499255 kubelet[2264]: E0912 10:17:36.499201 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:36.527505 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 10:17:36.541283 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 10:17:36.545070 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 10:17:36.559789 kubelet[2264]: I0912 10:17:36.559586 2264 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 10:17:36.559901 kubelet[2264]: I0912 10:17:36.559882 2264 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 10:17:36.559967 kubelet[2264]: I0912 10:17:36.559900 2264 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 10:17:36.560296 kubelet[2264]: I0912 10:17:36.560278 2264 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 10:17:36.560965 kubelet[2264]: E0912 10:17:36.560915 2264 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 10:17:36.560965 kubelet[2264]: E0912 10:17:36.560961 2264 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 10:17:36.600170 kubelet[2264]: E0912 10:17:36.600111 2264 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:36.662257 kubelet[2264]: I0912 10:17:36.662218 2264 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:17:36.662708 kubelet[2264]: E0912 10:17:36.662669 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Sep 12 10:17:36.859605 kubelet[2264]: E0912 10:17:36.859314 2264 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864819a5dca7f74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 10:17:34.57983474 +0000 UTC m=+0.411770455,LastTimestamp:2025-09-12 10:17:34.57983474 +0000 UTC m=+0.411770455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 10:17:36.864393 kubelet[2264]: I0912 10:17:36.864360 2264 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:17:36.864737 kubelet[2264]: E0912 10:17:36.864688 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Sep 12 10:17:37.260381 kubelet[2264]: W0912 10:17:37.260195 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:37.260381 kubelet[2264]: E0912 10:17:37.260245 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:37.266882 kubelet[2264]: I0912 10:17:37.266824 2264 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:17:37.267311 kubelet[2264]: E0912 10:17:37.267276 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Sep 12 10:17:37.595144 kubelet[2264]: E0912 10:17:37.595070 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="3.2s"
Sep 12 10:17:37.721349 systemd[1]: Created slice kubepods-burstable-pod461d9aa9f51e8c68778be6eb225947db.slice - libcontainer container kubepods-burstable-pod461d9aa9f51e8c68778be6eb225947db.slice.
Sep 12 10:17:37.731672 kubelet[2264]: E0912 10:17:37.731629 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:37.734693 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 12 10:17:37.741827 kubelet[2264]: E0912 10:17:37.741794 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:37.744521 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 12 10:17:37.746380 kubelet[2264]: E0912 10:17:37.746343 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:37.806838 kubelet[2264]: I0912 10:17:37.806769 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 10:17:37.806838 kubelet[2264]: I0912 10:17:37.806825 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/461d9aa9f51e8c68778be6eb225947db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"461d9aa9f51e8c68778be6eb225947db\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:17:37.806958 kubelet[2264]: I0912 10:17:37.806850 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:17:37.806958 kubelet[2264]: I0912 10:17:37.806871 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:17:37.806958 kubelet[2264]: I0912 10:17:37.806892 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:17:37.806958 kubelet[2264]: I0912 10:17:37.806908 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:17:37.806958 kubelet[2264]: I0912 10:17:37.806948 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/461d9aa9f51e8c68778be6eb225947db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"461d9aa9f51e8c68778be6eb225947db\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:17:37.807112 kubelet[2264]: I0912 10:17:37.806971 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/461d9aa9f51e8c68778be6eb225947db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"461d9aa9f51e8c68778be6eb225947db\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:17:37.807112 kubelet[2264]: I0912 10:17:37.807010 2264 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:17:37.809240 kubelet[2264]: W0912 10:17:37.809214 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:37.809300 kubelet[2264]: E0912 10:17:37.809251 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:38.033718 containerd[1514]: time="2025-09-12T10:17:38.033588155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:461d9aa9f51e8c68778be6eb225947db,Namespace:kube-system,Attempt:0,}"
Sep 12 10:17:38.043167 containerd[1514]: time="2025-09-12T10:17:38.043121567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 12 10:17:38.047779 containerd[1514]: time="2025-09-12T10:17:38.047747233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 12 10:17:38.069259 kubelet[2264]: I0912 10:17:38.069206 2264 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:17:38.069722 kubelet[2264]: E0912 10:17:38.069680 2264 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
Sep 12 10:17:38.119159 kubelet[2264]: W0912 10:17:38.119028 2264 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
Sep 12 10:17:38.119159 kubelet[2264]: E0912 10:17:38.119075 2264 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Sep 12 10:17:38.552959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219337742.mount: Deactivated successfully.
Sep 12 10:17:38.560706 containerd[1514]: time="2025-09-12T10:17:38.560654110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 10:17:38.563964 containerd[1514]: time="2025-09-12T10:17:38.563910127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 12 10:17:38.564985 containerd[1514]: time="2025-09-12T10:17:38.564936472Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 10:17:38.566783 containerd[1514]: time="2025-09-12T10:17:38.566755975Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 10:17:38.569634 containerd[1514]: time="2025-09-12T10:17:38.567759497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 10:17:38.569634 containerd[1514]: time="2025-09-12T10:17:38.569501375Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 10:17:38.569781 containerd[1514]: time="2025-09-12T10:17:38.569751604Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 10:17:38.573445 containerd[1514]: time="2025-09-12T10:17:38.573414645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 10:17:38.574631 containerd[1514]: time="2025-09-12T10:17:38.574585772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.862824ms"
Sep 12 10:17:38.576174 containerd[1514]: time="2025-09-12T10:17:38.576138163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 532.932048ms"
Sep 12 10:17:38.579013 containerd[1514]: time="2025-09-12T10:17:38.578974363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.15783ms"
Sep 12 10:17:38.717597 containerd[1514]: time="2025-09-12T10:17:38.717409819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:17:38.717597 containerd[1514]: time="2025-09-12T10:17:38.717503444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:17:38.717597 containerd[1514]: time="2025-09-12T10:17:38.717528040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:17:38.717846 containerd[1514]: time="2025-09-12T10:17:38.717661621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:17:38.717846 containerd[1514]: time="2025-09-12T10:17:38.716150006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.718050520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.718064066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.718185403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.717882435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.717928802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.717943109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:17:38.718276 containerd[1514]: time="2025-09-12T10:17:38.718006618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:17:38.747854 systemd[1]: Started cri-containerd-f7fba4ba37bc01a24320955a18f6866031d133bc0987080417d5a2d949748463.scope - libcontainer container f7fba4ba37bc01a24320955a18f6866031d133bc0987080417d5a2d949748463.
Sep 12 10:17:38.753469 systemd[1]: Started cri-containerd-6d407d863079c1f479a1dda5068cf7d8aadd0235b31c4d288a055523acbd62c7.scope - libcontainer container 6d407d863079c1f479a1dda5068cf7d8aadd0235b31c4d288a055523acbd62c7.
Sep 12 10:17:38.756275 systemd[1]: Started cri-containerd-f490cc770d6b2dfefd982551b783ca413fab4ad6abaefd1f8709a8bac04eb79e.scope - libcontainer container f490cc770d6b2dfefd982551b783ca413fab4ad6abaefd1f8709a8bac04eb79e.
Sep 12 10:17:38.805780 containerd[1514]: time="2025-09-12T10:17:38.805201190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:461d9aa9f51e8c68778be6eb225947db,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7fba4ba37bc01a24320955a18f6866031d133bc0987080417d5a2d949748463\""
Sep 12 10:17:38.807152 containerd[1514]: time="2025-09-12T10:17:38.807126481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d407d863079c1f479a1dda5068cf7d8aadd0235b31c4d288a055523acbd62c7\""
Sep 12 10:17:38.811864 containerd[1514]: time="2025-09-12T10:17:38.811824454Z" level=info msg="CreateContainer within sandbox \"f7fba4ba37bc01a24320955a18f6866031d133bc0987080417d5a2d949748463\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 10:17:38.811919 containerd[1514]: time="2025-09-12T10:17:38.811873976Z" level=info msg="CreateContainer within sandbox \"6d407d863079c1f479a1dda5068cf7d8aadd0235b31c4d288a055523acbd62c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 10:17:38.812136 containerd[1514]: time="2025-09-12T10:17:38.812103317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f490cc770d6b2dfefd982551b783ca413fab4ad6abaefd1f8709a8bac04eb79e\""
Sep 12 10:17:38.815469 containerd[1514]: time="2025-09-12T10:17:38.815432090Z" level=info msg="CreateContainer within sandbox \"f490cc770d6b2dfefd982551b783ca413fab4ad6abaefd1f8709a8bac04eb79e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 10:17:38.832494 containerd[1514]: time="2025-09-12T10:17:38.832388887Z" level=info msg="CreateContainer within sandbox \"f7fba4ba37bc01a24320955a18f6866031d133bc0987080417d5a2d949748463\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8fbdfdfd7a442491c18115ca4375f0a92e7f6f029194efb58bb95fb217849ec\""
Sep 12 10:17:38.833234 containerd[1514]: time="2025-09-12T10:17:38.833207021Z" level=info msg="StartContainer for \"c8fbdfdfd7a442491c18115ca4375f0a92e7f6f029194efb58bb95fb217849ec\""
Sep 12 10:17:38.838997 containerd[1514]: time="2025-09-12T10:17:38.838890943Z" level=info msg="CreateContainer within sandbox \"f490cc770d6b2dfefd982551b783ca413fab4ad6abaefd1f8709a8bac04eb79e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21e90ac06e426eb294cdb81f927cd3a083f66e18bd3938ab749946a94ad034db\""
Sep 12 10:17:38.839369 containerd[1514]: time="2025-09-12T10:17:38.839336659Z" level=info msg="StartContainer for \"21e90ac06e426eb294cdb81f927cd3a083f66e18bd3938ab749946a94ad034db\""
Sep 12 10:17:38.845647 containerd[1514]: time="2025-09-12T10:17:38.845526519Z" level=info msg="CreateContainer within sandbox \"6d407d863079c1f479a1dda5068cf7d8aadd0235b31c4d288a055523acbd62c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"608a8f3878e21ef73cd36a2e7ece1adb0141cf4cf3dc367e488f5e86d88ecea4\""
Sep 12 10:17:38.846196 containerd[1514]: time="2025-09-12T10:17:38.846163664Z" level=info msg="StartContainer for \"608a8f3878e21ef73cd36a2e7ece1adb0141cf4cf3dc367e488f5e86d88ecea4\""
Sep 12 10:17:38.864233 systemd[1]: Started cri-containerd-c8fbdfdfd7a442491c18115ca4375f0a92e7f6f029194efb58bb95fb217849ec.scope - libcontainer container c8fbdfdfd7a442491c18115ca4375f0a92e7f6f029194efb58bb95fb217849ec.
Sep 12 10:17:38.873767 systemd[1]: Started cri-containerd-21e90ac06e426eb294cdb81f927cd3a083f66e18bd3938ab749946a94ad034db.scope - libcontainer container 21e90ac06e426eb294cdb81f927cd3a083f66e18bd3938ab749946a94ad034db.
Sep 12 10:17:38.877239 systemd[1]: Started cri-containerd-608a8f3878e21ef73cd36a2e7ece1adb0141cf4cf3dc367e488f5e86d88ecea4.scope - libcontainer container 608a8f3878e21ef73cd36a2e7ece1adb0141cf4cf3dc367e488f5e86d88ecea4.
Sep 12 10:17:38.912183 containerd[1514]: time="2025-09-12T10:17:38.912038436Z" level=info msg="StartContainer for \"c8fbdfdfd7a442491c18115ca4375f0a92e7f6f029194efb58bb95fb217849ec\" returns successfully"
Sep 12 10:17:38.933183 containerd[1514]: time="2025-09-12T10:17:38.933113928Z" level=info msg="StartContainer for \"608a8f3878e21ef73cd36a2e7ece1adb0141cf4cf3dc367e488f5e86d88ecea4\" returns successfully"
Sep 12 10:17:38.933364 containerd[1514]: time="2025-09-12T10:17:38.933227771Z" level=info msg="StartContainer for \"21e90ac06e426eb294cdb81f927cd3a083f66e18bd3938ab749946a94ad034db\" returns successfully"
Sep 12 10:17:39.620092 kubelet[2264]: E0912 10:17:39.620040 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:39.621957 kubelet[2264]: E0912 10:17:39.621933 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:39.623029 kubelet[2264]: E0912 10:17:39.622999 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:39.671506 kubelet[2264]: I0912 10:17:39.671454 2264 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:17:39.922064 kubelet[2264]: I0912 10:17:39.921611 2264 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 10:17:39.922064 kubelet[2264]: E0912 10:17:39.921681 2264 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 10:17:39.935934 kubelet[2264]: E0912 10:17:39.935885 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.036807 kubelet[2264]: E0912 10:17:40.036745 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.137764 kubelet[2264]: E0912 10:17:40.137685 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.238612 kubelet[2264]: E0912 10:17:40.238460 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.339032 kubelet[2264]: E0912 10:17:40.338999 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.439588 kubelet[2264]: E0912 10:17:40.439529 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.540551 kubelet[2264]: E0912 10:17:40.540317 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.625391 kubelet[2264]: E0912 10:17:40.625333 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:40.626065 kubelet[2264]: E0912 10:17:40.625432 2264 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:17:40.640540 kubelet[2264]: E0912 10:17:40.640458 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.741588 kubelet[2264]: E0912 10:17:40.741499 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.842453 kubelet[2264]: E0912 10:17:40.842395 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:40.942607 kubelet[2264]: E0912 10:17:40.942545 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:41.043585 kubelet[2264]: E0912 10:17:41.043516 2264 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:17:41.186474 kubelet[2264]: I0912 10:17:41.186297 2264 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 10:17:41.228704 kubelet[2264]: I0912 10:17:41.228651 2264 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:17:41.234178 kubelet[2264]: I0912 10:17:41.234102 2264 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 10:17:41.575979 kubelet[2264]: I0912 10:17:41.575928 2264 apiserver.go:52] "Watching apiserver"
Sep 12 10:17:41.587008 kubelet[2264]: I0912 10:17:41.586967 2264 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 10:17:41.625383 kubelet[2264]: I0912 10:17:41.625347 2264 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 10:17:41.859962 kubelet[2264]: E0912 10:17:41.859807 2264 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 10:17:43.676395 systemd[1]: Reload requested from client PID 2545 ('systemctl') (unit session-7.scope)...
Sep 12 10:17:43.676411 systemd[1]: Reloading...
Sep 12 10:17:43.784676 zram_generator::config[2596]: No configuration found.
Sep 12 10:17:43.895330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:17:44.013858 systemd[1]: Reloading finished in 337 ms.
Sep 12 10:17:44.037272 kubelet[2264]: I0912 10:17:44.037187 2264 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 10:17:44.037432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:17:44.061686 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 10:17:44.062084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:17:44.062163 systemd[1]: kubelet.service: Consumed 1.021s CPU time, 133.5M memory peak.
Sep 12 10:17:44.075131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:17:44.288639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:17:44.293594 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 10:17:44.332328 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 10:17:44.332328 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 10:17:44.332328 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 10:17:44.332800 kubelet[2634]: I0912 10:17:44.332367 2634 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 10:17:44.340874 kubelet[2634]: I0912 10:17:44.340824 2634 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 10:17:44.340874 kubelet[2634]: I0912 10:17:44.340853 2634 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 10:17:44.341105 kubelet[2634]: I0912 10:17:44.341086 2634 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 10:17:44.342385 kubelet[2634]: I0912 10:17:44.342365 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 10:17:44.344459 kubelet[2634]: I0912 10:17:44.344422 2634 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 10:17:44.347612 kubelet[2634]: E0912 10:17:44.347576 2634 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 10:17:44.347612 kubelet[2634]: I0912 10:17:44.347608 2634 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 10:17:44.354447 kubelet[2634]: I0912 10:17:44.354416 2634 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 10:17:44.354718 kubelet[2634]: I0912 10:17:44.354676 2634 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 10:17:44.354893 kubelet[2634]: I0912 10:17:44.354711 2634 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 10:17:44.354893 kubelet[2634]: I0912 10:17:44.354892 2634 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 10:17:44.355008 kubelet[2634]: I0912 10:17:44.354901 2634 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 10:17:44.355008 kubelet[2634]: I0912 10:17:44.354951 2634 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:44.355138 kubelet[2634]: I0912 10:17:44.355119 2634 kubelet.go:446] "Attempting to sync node with API server" Sep 12 10:17:44.355167 kubelet[2634]: I0912 10:17:44.355150 2634 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:17:44.355197 kubelet[2634]: I0912 10:17:44.355174 2634 kubelet.go:352] "Adding apiserver pod source" Sep 12 10:17:44.355197 kubelet[2634]: I0912 10:17:44.355185 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:17:44.356235 kubelet[2634]: I0912 10:17:44.356208 2634 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:17:44.356568 kubelet[2634]: I0912 10:17:44.356543 2634 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:17:44.357013 kubelet[2634]: I0912 10:17:44.356992 2634 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:17:44.357055 kubelet[2634]: I0912 10:17:44.357022 2634 server.go:1287] "Started kubelet" Sep 12 10:17:44.360046 kubelet[2634]: I0912 10:17:44.359983 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:17:44.360319 kubelet[2634]: I0912 10:17:44.360296 2634 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:17:44.360371 kubelet[2634]: I0912 10:17:44.360345 2634 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:17:44.361303 kubelet[2634]: I0912 10:17:44.361276 2634 server.go:479] "Adding debug handlers to kubelet server" Sep 12 10:17:44.363354 kubelet[2634]: E0912 10:17:44.363335 2634 
kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:17:44.366750 kubelet[2634]: I0912 10:17:44.363866 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:17:44.366750 kubelet[2634]: I0912 10:17:44.363993 2634 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 10:17:44.366750 kubelet[2634]: I0912 10:17:44.364105 2634 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 10:17:44.366750 kubelet[2634]: I0912 10:17:44.364231 2634 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:17:44.366750 kubelet[2634]: E0912 10:17:44.364436 2634 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:17:44.366750 kubelet[2634]: I0912 10:17:44.365662 2634 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:17:44.367654 kubelet[2634]: I0912 10:17:44.367596 2634 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:17:44.367883 kubelet[2634]: I0912 10:17:44.367698 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:17:44.368876 kubelet[2634]: I0912 10:17:44.368853 2634 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:17:44.382851 kubelet[2634]: I0912 10:17:44.382726 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:17:44.384550 kubelet[2634]: I0912 10:17:44.384530 2634 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:17:44.385025 kubelet[2634]: I0912 10:17:44.384752 2634 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 10:17:44.385025 kubelet[2634]: I0912 10:17:44.384776 2634 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 10:17:44.385025 kubelet[2634]: I0912 10:17:44.384784 2634 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 10:17:44.385025 kubelet[2634]: E0912 10:17:44.384836 2634 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:17:44.406939 kubelet[2634]: I0912 10:17:44.406902 2634 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 10:17:44.406939 kubelet[2634]: I0912 10:17:44.406920 2634 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 10:17:44.406939 kubelet[2634]: I0912 10:17:44.406939 2634 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:17:44.407157 kubelet[2634]: I0912 10:17:44.407077 2634 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 10:17:44.407157 kubelet[2634]: I0912 10:17:44.407088 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 10:17:44.407157 kubelet[2634]: I0912 10:17:44.407107 2634 policy_none.go:49] "None policy: Start" Sep 12 10:17:44.407157 kubelet[2634]: I0912 10:17:44.407116 2634 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 10:17:44.407157 kubelet[2634]: I0912 10:17:44.407126 2634 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:17:44.407359 kubelet[2634]: I0912 10:17:44.407231 2634 state_mem.go:75] "Updated machine memory state" Sep 12 10:17:44.411536 kubelet[2634]: I0912 10:17:44.411511 2634 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:17:44.411822 kubelet[2634]: I0912 
10:17:44.411798 2634 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:17:44.411911 kubelet[2634]: I0912 10:17:44.411816 2634 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:17:44.412139 kubelet[2634]: I0912 10:17:44.411993 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:17:44.412822 kubelet[2634]: E0912 10:17:44.412800 2634 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 10:17:44.485793 kubelet[2634]: I0912 10:17:44.485753 2634 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 10:17:44.485793 kubelet[2634]: I0912 10:17:44.485799 2634 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:44.486034 kubelet[2634]: I0912 10:17:44.485947 2634 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 10:17:44.513645 kubelet[2634]: I0912 10:17:44.513559 2634 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 10:17:44.565831 kubelet[2634]: I0912 10:17:44.565791 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/461d9aa9f51e8c68778be6eb225947db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"461d9aa9f51e8c68778be6eb225947db\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:17:44.565909 kubelet[2634]: I0912 10:17:44.565834 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:44.565909 kubelet[2634]: I0912 10:17:44.565859 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:44.565909 kubelet[2634]: I0912 10:17:44.565880 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 10:17:44.565909 kubelet[2634]: I0912 10:17:44.565897 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:44.566005 kubelet[2634]: I0912 10:17:44.565913 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/461d9aa9f51e8c68778be6eb225947db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"461d9aa9f51e8c68778be6eb225947db\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:17:44.566005 kubelet[2634]: I0912 10:17:44.565931 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/461d9aa9f51e8c68778be6eb225947db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"461d9aa9f51e8c68778be6eb225947db\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:17:44.566005 kubelet[2634]: I0912 10:17:44.565948 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:44.566005 kubelet[2634]: I0912 10:17:44.565963 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:45.090109 kubelet[2634]: E0912 10:17:45.090042 2634 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 10:17:45.091388 kubelet[2634]: E0912 10:17:45.090387 2634 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 10:17:45.091388 kubelet[2634]: E0912 10:17:45.090530 2634 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 10:17:45.091967 kubelet[2634]: I0912 10:17:45.091918 2634 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 10:17:45.092020 kubelet[2634]: I0912 10:17:45.092008 2634 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 10:17:45.356352 kubelet[2634]: I0912 10:17:45.356212 2634 apiserver.go:52] "Watching apiserver" Sep 12 10:17:45.364954 kubelet[2634]: I0912 10:17:45.364920 2634 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:17:45.819892 kubelet[2634]: I0912 10:17:45.819798 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.819772061 podStartE2EDuration="4.819772061s" podCreationTimestamp="2025-09-12 10:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:45.804837329 +0000 UTC m=+1.507100064" watchObservedRunningTime="2025-09-12 10:17:45.819772061 +0000 UTC m=+1.522034796" Sep 12 10:17:45.831085 kubelet[2634]: I0912 10:17:45.831009 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.830985408 podStartE2EDuration="4.830985408s" podCreationTimestamp="2025-09-12 10:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:45.820347489 +0000 UTC m=+1.522610224" watchObservedRunningTime="2025-09-12 10:17:45.830985408 +0000 UTC m=+1.533248143" Sep 12 10:17:45.840654 kubelet[2634]: I0912 10:17:45.840528 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.840503008 podStartE2EDuration="4.840503008s" podCreationTimestamp="2025-09-12 10:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:45.831074318 +0000 UTC m=+1.533337053" watchObservedRunningTime="2025-09-12 10:17:45.840503008 +0000 UTC m=+1.542765743" Sep 12 10:17:45.883979 sudo[2668]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 10:17:45.884375 sudo[2668]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) Sep 12 10:17:46.374191 sudo[2668]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:47.911416 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 12 10:17:47.913115 sshd[1700]: Connection closed by 10.0.0.1 port 48592 Sep 12 10:17:47.913920 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Sep 12 10:17:47.918924 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:48592.service: Deactivated successfully. Sep 12 10:17:47.921589 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 10:17:47.921870 systemd[1]: session-7.scope: Consumed 4.604s CPU time, 249.2M memory peak. Sep 12 10:17:47.923483 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Sep 12 10:17:47.924719 systemd-logind[1497]: Removed session 7. Sep 12 10:17:48.296437 kubelet[2634]: I0912 10:17:48.296413 2634 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 10:17:48.296863 containerd[1514]: time="2025-09-12T10:17:48.296769771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 10:17:48.297126 kubelet[2634]: I0912 10:17:48.296915 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 10:17:50.371835 systemd[1]: Created slice kubepods-besteffort-pod5c2f53b1_5ad8_49fe_b202_31ccf88c3df8.slice - libcontainer container kubepods-besteffort-pod5c2f53b1_5ad8_49fe_b202_31ccf88c3df8.slice. Sep 12 10:17:50.382851 systemd[1]: Created slice kubepods-besteffort-podf188a585_f61d_44ef_9956_b6e57086bd2b.slice - libcontainer container kubepods-besteffort-podf188a585_f61d_44ef_9956_b6e57086bd2b.slice. Sep 12 10:17:50.396477 systemd[1]: Created slice kubepods-burstable-pod5c897f9b_0a9d_46a5_8ff1_8c6b2d638cca.slice - libcontainer container kubepods-burstable-pod5c897f9b_0a9d_46a5_8ff1_8c6b2d638cca.slice. 
Sep 12 10:17:50.399011 kubelet[2634]: I0912 10:17:50.398980 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cni-path\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399011 kubelet[2634]: I0912 10:17:50.399012 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-net\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399428 kubelet[2634]: I0912 10:17:50.399030 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-kernel\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399428 kubelet[2634]: I0912 10:17:50.399045 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-run\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399428 kubelet[2634]: I0912 10:17:50.399060 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-xtables-lock\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399428 kubelet[2634]: I0912 10:17:50.399074 2634 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-config-path\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399428 kubelet[2634]: I0912 10:17:50.399089 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g454s\" (UniqueName: \"kubernetes.io/projected/5c2f53b1-5ad8-49fe-b202-31ccf88c3df8-kube-api-access-g454s\") pod \"kube-proxy-6vdmk\" (UID: \"5c2f53b1-5ad8-49fe-b202-31ccf88c3df8\") " pod="kube-system/kube-proxy-6vdmk" Sep 12 10:17:50.399692 kubelet[2634]: I0912 10:17:50.399108 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c2f53b1-5ad8-49fe-b202-31ccf88c3df8-xtables-lock\") pod \"kube-proxy-6vdmk\" (UID: \"5c2f53b1-5ad8-49fe-b202-31ccf88c3df8\") " pod="kube-system/kube-proxy-6vdmk" Sep 12 10:17:50.399692 kubelet[2634]: I0912 10:17:50.399124 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-clustermesh-secrets\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399692 kubelet[2634]: I0912 10:17:50.399139 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc5qk\" (UniqueName: \"kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-kube-api-access-sc5qk\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399692 kubelet[2634]: I0912 10:17:50.399159 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fwxnj\" (UniqueName: \"kubernetes.io/projected/f188a585-f61d-44ef-9956-b6e57086bd2b-kube-api-access-fwxnj\") pod \"cilium-operator-6c4d7847fc-j2f6h\" (UID: \"f188a585-f61d-44ef-9956-b6e57086bd2b\") " pod="kube-system/cilium-operator-6c4d7847fc-j2f6h" Sep 12 10:17:50.399692 kubelet[2634]: I0912 10:17:50.399180 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-cgroup\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399815 kubelet[2634]: I0912 10:17:50.399202 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c2f53b1-5ad8-49fe-b202-31ccf88c3df8-kube-proxy\") pod \"kube-proxy-6vdmk\" (UID: \"5c2f53b1-5ad8-49fe-b202-31ccf88c3df8\") " pod="kube-system/kube-proxy-6vdmk" Sep 12 10:17:50.399815 kubelet[2634]: I0912 10:17:50.399226 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-bpf-maps\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399815 kubelet[2634]: I0912 10:17:50.399296 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-etc-cni-netd\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399815 kubelet[2634]: I0912 10:17:50.399341 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-lib-modules\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399815 kubelet[2634]: I0912 10:17:50.399366 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hubble-tls\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.399925 kubelet[2634]: I0912 10:17:50.399391 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f188a585-f61d-44ef-9956-b6e57086bd2b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-j2f6h\" (UID: \"f188a585-f61d-44ef-9956-b6e57086bd2b\") " pod="kube-system/cilium-operator-6c4d7847fc-j2f6h" Sep 12 10:17:50.399925 kubelet[2634]: I0912 10:17:50.399415 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c2f53b1-5ad8-49fe-b202-31ccf88c3df8-lib-modules\") pod \"kube-proxy-6vdmk\" (UID: \"5c2f53b1-5ad8-49fe-b202-31ccf88c3df8\") " pod="kube-system/kube-proxy-6vdmk" Sep 12 10:17:50.399925 kubelet[2634]: I0912 10:17:50.399440 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hostproc\") pod \"cilium-s552v\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " pod="kube-system/cilium-s552v" Sep 12 10:17:50.680559 containerd[1514]: time="2025-09-12T10:17:50.680397131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vdmk,Uid:5c2f53b1-5ad8-49fe-b202-31ccf88c3df8,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:50.688663 containerd[1514]: 
time="2025-09-12T10:17:50.688578224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j2f6h,Uid:f188a585-f61d-44ef-9956-b6e57086bd2b,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:50.702002 containerd[1514]: time="2025-09-12T10:17:50.701939218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s552v,Uid:5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca,Namespace:kube-system,Attempt:0,}" Sep 12 10:17:51.171506 containerd[1514]: time="2025-09-12T10:17:51.171192028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:51.171506 containerd[1514]: time="2025-09-12T10:17:51.171256991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:51.171506 containerd[1514]: time="2025-09-12T10:17:51.171277089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:51.171506 containerd[1514]: time="2025-09-12T10:17:51.171409911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:51.185842 containerd[1514]: time="2025-09-12T10:17:51.185612804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:51.185842 containerd[1514]: time="2025-09-12T10:17:51.185700801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:51.185842 containerd[1514]: time="2025-09-12T10:17:51.185714607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:51.185842 containerd[1514]: time="2025-09-12T10:17:51.185795280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:51.193338 containerd[1514]: time="2025-09-12T10:17:51.190797325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:17:51.193338 containerd[1514]: time="2025-09-12T10:17:51.190853722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:17:51.193338 containerd[1514]: time="2025-09-12T10:17:51.190868140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:51.193338 containerd[1514]: time="2025-09-12T10:17:51.190954564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:17:51.199839 systemd[1]: Started cri-containerd-dedb6cfe1f040298929612cfa7cdb363a6a2e70379d99385fb352ccbd6083097.scope - libcontainer container dedb6cfe1f040298929612cfa7cdb363a6a2e70379d99385fb352ccbd6083097. Sep 12 10:17:51.205282 systemd[1]: Started cri-containerd-3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9.scope - libcontainer container 3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9. Sep 12 10:17:51.223808 systemd[1]: Started cri-containerd-187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2.scope - libcontainer container 187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2. 
Sep 12 10:17:51.255658 containerd[1514]: time="2025-09-12T10:17:51.255263374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s552v,Uid:5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca,Namespace:kube-system,Attempt:0,} returns sandbox id \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\"" Sep 12 10:17:51.257136 containerd[1514]: time="2025-09-12T10:17:51.257106143Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 10:17:51.257437 containerd[1514]: time="2025-09-12T10:17:51.257201574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vdmk,Uid:5c2f53b1-5ad8-49fe-b202-31ccf88c3df8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dedb6cfe1f040298929612cfa7cdb363a6a2e70379d99385fb352ccbd6083097\"" Sep 12 10:17:51.261210 containerd[1514]: time="2025-09-12T10:17:51.261160049Z" level=info msg="CreateContainer within sandbox \"dedb6cfe1f040298929612cfa7cdb363a6a2e70379d99385fb352ccbd6083097\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 10:17:51.271245 containerd[1514]: time="2025-09-12T10:17:51.271209987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j2f6h,Uid:f188a585-f61d-44ef-9956-b6e57086bd2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9\"" Sep 12 10:17:51.284984 containerd[1514]: time="2025-09-12T10:17:51.284927898Z" level=info msg="CreateContainer within sandbox \"dedb6cfe1f040298929612cfa7cdb363a6a2e70379d99385fb352ccbd6083097\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1ea0d9d3c66b8a140a925918f35e926d49a746fe4393fe2e9fe0955080deb2ae\"" Sep 12 10:17:51.285434 containerd[1514]: time="2025-09-12T10:17:51.285407389Z" level=info msg="StartContainer for \"1ea0d9d3c66b8a140a925918f35e926d49a746fe4393fe2e9fe0955080deb2ae\"" Sep 12 10:17:51.316794 systemd[1]: 
Started cri-containerd-1ea0d9d3c66b8a140a925918f35e926d49a746fe4393fe2e9fe0955080deb2ae.scope - libcontainer container 1ea0d9d3c66b8a140a925918f35e926d49a746fe4393fe2e9fe0955080deb2ae. Sep 12 10:17:51.319817 update_engine[1501]: I20250912 10:17:51.319733 1501 update_attempter.cc:509] Updating boot flags... Sep 12 10:17:51.453692 containerd[1514]: time="2025-09-12T10:17:51.453398355Z" level=info msg="StartContainer for \"1ea0d9d3c66b8a140a925918f35e926d49a746fe4393fe2e9fe0955080deb2ae\" returns successfully" Sep 12 10:17:51.490825 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2890) Sep 12 10:17:51.567713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2900) Sep 12 10:17:51.629223 kubelet[2634]: I0912 10:17:51.629126 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6vdmk" podStartSLOduration=1.629107743 podStartE2EDuration="1.629107743s" podCreationTimestamp="2025-09-12 10:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:17:51.471528021 +0000 UTC m=+7.173790766" watchObservedRunningTime="2025-09-12 10:17:51.629107743 +0000 UTC m=+7.331370478" Sep 12 10:18:03.654635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405324784.mount: Deactivated successfully. 
Sep 12 10:18:07.125589 containerd[1514]: time="2025-09-12T10:18:07.125488268Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:18:07.126885 containerd[1514]: time="2025-09-12T10:18:07.126813383Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 12 10:18:07.129901 containerd[1514]: time="2025-09-12T10:18:07.129823374Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:18:07.131805 containerd[1514]: time="2025-09-12T10:18:07.131762045Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.87451778s"
Sep 12 10:18:07.131860 containerd[1514]: time="2025-09-12T10:18:07.131804676Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 12 10:18:07.137687 containerd[1514]: time="2025-09-12T10:18:07.137644597Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 12 10:18:07.235835 containerd[1514]: time="2025-09-12T10:18:07.235780042Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 10:18:07.597189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120688773.mount: Deactivated successfully.
Sep 12 10:18:07.903548 containerd[1514]: time="2025-09-12T10:18:07.903389739Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\""
Sep 12 10:18:07.905461 containerd[1514]: time="2025-09-12T10:18:07.905414472Z" level=info msg="StartContainer for \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\""
Sep 12 10:18:07.950804 systemd[1]: Started cri-containerd-22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9.scope - libcontainer container 22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9.
Sep 12 10:18:08.025901 systemd[1]: cri-containerd-22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9.scope: Deactivated successfully.
Sep 12 10:18:08.104517 containerd[1514]: time="2025-09-12T10:18:08.104466380Z" level=info msg="StartContainer for \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\" returns successfully"
Sep 12 10:18:08.593493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9-rootfs.mount: Deactivated successfully.
Sep 12 10:18:08.847987 containerd[1514]: time="2025-09-12T10:18:08.847813805Z" level=info msg="shim disconnected" id=22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9 namespace=k8s.io
Sep 12 10:18:08.847987 containerd[1514]: time="2025-09-12T10:18:08.847876312Z" level=warning msg="cleaning up after shim disconnected" id=22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9 namespace=k8s.io
Sep 12 10:18:08.847987 containerd[1514]: time="2025-09-12T10:18:08.847889146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:09.497605 containerd[1514]: time="2025-09-12T10:18:09.497557855Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 10:18:09.595429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944471440.mount: Deactivated successfully.
Sep 12 10:18:09.612041 containerd[1514]: time="2025-09-12T10:18:09.611986070Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\""
Sep 12 10:18:09.612583 containerd[1514]: time="2025-09-12T10:18:09.612546646Z" level=info msg="StartContainer for \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\""
Sep 12 10:18:09.643877 systemd[1]: Started cri-containerd-1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344.scope - libcontainer container 1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344.
Sep 12 10:18:09.678402 containerd[1514]: time="2025-09-12T10:18:09.678081255Z" level=info msg="StartContainer for \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\" returns successfully"
Sep 12 10:18:09.693703 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 10:18:09.693945 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:18:09.694157 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:18:09.706126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:18:09.706487 systemd[1]: cri-containerd-1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344.scope: Deactivated successfully.
Sep 12 10:18:09.721069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:18:09.903195 containerd[1514]: time="2025-09-12T10:18:09.903131529Z" level=info msg="shim disconnected" id=1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344 namespace=k8s.io
Sep 12 10:18:09.903195 containerd[1514]: time="2025-09-12T10:18:09.903183036Z" level=warning msg="cleaning up after shim disconnected" id=1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344 namespace=k8s.io
Sep 12 10:18:09.903195 containerd[1514]: time="2025-09-12T10:18:09.903193525Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:09.947347 containerd[1514]: time="2025-09-12T10:18:09.947280091Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:18:09.948140 containerd[1514]: time="2025-09-12T10:18:09.948079848Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 12 10:18:09.949330 containerd[1514]: time="2025-09-12T10:18:09.949289966Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:18:09.951180 containerd[1514]: time="2025-09-12T10:18:09.951149477Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.813463533s"
Sep 12 10:18:09.951180 containerd[1514]: time="2025-09-12T10:18:09.951180766Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 12 10:18:09.956449 containerd[1514]: time="2025-09-12T10:18:09.956419109Z" level=info msg="CreateContainer within sandbox \"3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 12 10:18:09.972023 containerd[1514]: time="2025-09-12T10:18:09.971948047Z" level=info msg="CreateContainer within sandbox \"3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\""
Sep 12 10:18:09.972708 containerd[1514]: time="2025-09-12T10:18:09.972472905Z" level=info msg="StartContainer for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\""
Sep 12 10:18:10.007803 systemd[1]: Started cri-containerd-4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8.scope - libcontainer container 4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8.
Sep 12 10:18:10.035582 containerd[1514]: time="2025-09-12T10:18:10.035537628Z" level=info msg="StartContainer for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" returns successfully"
Sep 12 10:18:10.506172 containerd[1514]: time="2025-09-12T10:18:10.506001268Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 10:18:10.533859 containerd[1514]: time="2025-09-12T10:18:10.533794565Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\""
Sep 12 10:18:10.534360 containerd[1514]: time="2025-09-12T10:18:10.534331406Z" level=info msg="StartContainer for \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\""
Sep 12 10:18:10.597597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344-rootfs.mount: Deactivated successfully.
Sep 12 10:18:10.643785 systemd[1]: Started cri-containerd-cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829.scope - libcontainer container cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829.
Sep 12 10:18:10.690939 systemd[1]: cri-containerd-cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829.scope: Deactivated successfully.
Sep 12 10:18:10.955975 containerd[1514]: time="2025-09-12T10:18:10.955911850Z" level=info msg="StartContainer for \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\" returns successfully"
Sep 12 10:18:10.978406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829-rootfs.mount: Deactivated successfully.
Sep 12 10:18:11.134521 containerd[1514]: time="2025-09-12T10:18:11.134437485Z" level=info msg="shim disconnected" id=cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829 namespace=k8s.io
Sep 12 10:18:11.134521 containerd[1514]: time="2025-09-12T10:18:11.134502306Z" level=warning msg="cleaning up after shim disconnected" id=cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829 namespace=k8s.io
Sep 12 10:18:11.134521 containerd[1514]: time="2025-09-12T10:18:11.134514270Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:11.511538 containerd[1514]: time="2025-09-12T10:18:11.511477832Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 10:18:11.529195 containerd[1514]: time="2025-09-12T10:18:11.529141525Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\""
Sep 12 10:18:11.529347 kubelet[2634]: I0912 10:18:11.529154 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-j2f6h" podStartSLOduration=3.849026088 podStartE2EDuration="22.529119914s" podCreationTimestamp="2025-09-12 10:17:49 +0000 UTC" firstStartedPulling="2025-09-12 10:17:51.272195678 +0000 UTC m=+6.974458413" lastFinishedPulling="2025-09-12 10:18:09.952289504 +0000 UTC m=+25.654552239" observedRunningTime="2025-09-12 10:18:10.546725655 +0000 UTC m=+26.248988391" watchObservedRunningTime="2025-09-12 10:18:11.529119914 +0000 UTC m=+27.231382650"
Sep 12 10:18:11.531378 containerd[1514]: time="2025-09-12T10:18:11.531306168Z" level=info msg="StartContainer for \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\""
Sep 12 10:18:11.561797 systemd[1]: Started cri-containerd-24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22.scope - libcontainer container 24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22.
Sep 12 10:18:11.591226 systemd[1]: cri-containerd-24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22.scope: Deactivated successfully.
Sep 12 10:18:11.592905 containerd[1514]: time="2025-09-12T10:18:11.592853978Z" level=info msg="StartContainer for \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\" returns successfully"
Sep 12 10:18:11.617063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22-rootfs.mount: Deactivated successfully.
Sep 12 10:18:11.620274 containerd[1514]: time="2025-09-12T10:18:11.620213747Z" level=info msg="shim disconnected" id=24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22 namespace=k8s.io
Sep 12 10:18:11.620274 containerd[1514]: time="2025-09-12T10:18:11.620272858Z" level=warning msg="cleaning up after shim disconnected" id=24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22 namespace=k8s.io
Sep 12 10:18:11.620405 containerd[1514]: time="2025-09-12T10:18:11.620281474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:18:11.633872 containerd[1514]: time="2025-09-12T10:18:11.633823701Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:18:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 10:18:12.516480 containerd[1514]: time="2025-09-12T10:18:12.516433459Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 10:18:12.614652 containerd[1514]: time="2025-09-12T10:18:12.614573815Z" level=info msg="CreateContainer within sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\""
Sep 12 10:18:12.616034 containerd[1514]: time="2025-09-12T10:18:12.616008935Z" level=info msg="StartContainer for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\""
Sep 12 10:18:12.647765 systemd[1]: Started cri-containerd-a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601.scope - libcontainer container a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601.
Sep 12 10:18:12.681908 containerd[1514]: time="2025-09-12T10:18:12.681857162Z" level=info msg="StartContainer for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" returns successfully"
Sep 12 10:18:12.862474 kubelet[2634]: I0912 10:18:12.862427 2634 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 10:18:12.895746 systemd[1]: Created slice kubepods-burstable-podbc12d953_99fb_484f_8494_2546edec72dd.slice - libcontainer container kubepods-burstable-podbc12d953_99fb_484f_8494_2546edec72dd.slice.
Sep 12 10:18:12.909452 systemd[1]: Created slice kubepods-burstable-poda5d073d7_e77c_4509_8c22_38d92c3ef854.slice - libcontainer container kubepods-burstable-poda5d073d7_e77c_4509_8c22_38d92c3ef854.slice.
Sep 12 10:18:12.950037 kubelet[2634]: I0912 10:18:12.949964 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2qb\" (UniqueName: \"kubernetes.io/projected/a5d073d7-e77c-4509-8c22-38d92c3ef854-kube-api-access-6d2qb\") pod \"coredns-668d6bf9bc-ttcw6\" (UID: \"a5d073d7-e77c-4509-8c22-38d92c3ef854\") " pod="kube-system/coredns-668d6bf9bc-ttcw6"
Sep 12 10:18:12.950037 kubelet[2634]: I0912 10:18:12.950028 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc12d953-99fb-484f-8494-2546edec72dd-config-volume\") pod \"coredns-668d6bf9bc-zbrm5\" (UID: \"bc12d953-99fb-484f-8494-2546edec72dd\") " pod="kube-system/coredns-668d6bf9bc-zbrm5"
Sep 12 10:18:12.950197 kubelet[2634]: I0912 10:18:12.950054 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdtf\" (UniqueName: \"kubernetes.io/projected/bc12d953-99fb-484f-8494-2546edec72dd-kube-api-access-jvdtf\") pod \"coredns-668d6bf9bc-zbrm5\" (UID: \"bc12d953-99fb-484f-8494-2546edec72dd\") " pod="kube-system/coredns-668d6bf9bc-zbrm5"
Sep 12 10:18:12.950197 kubelet[2634]: I0912 10:18:12.950069 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5d073d7-e77c-4509-8c22-38d92c3ef854-config-volume\") pod \"coredns-668d6bf9bc-ttcw6\" (UID: \"a5d073d7-e77c-4509-8c22-38d92c3ef854\") " pod="kube-system/coredns-668d6bf9bc-ttcw6"
Sep 12 10:18:13.112916 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:46736.service - OpenSSH per-connection server daemon (10.0.0.1:46736).
Sep 12 10:18:13.161683 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 46736 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:18:13.162415 sshd-session[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:13.168180 systemd-logind[1497]: New session 8 of user core.
Sep 12 10:18:13.176839 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 10:18:13.205187 containerd[1514]: time="2025-09-12T10:18:13.205141414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zbrm5,Uid:bc12d953-99fb-484f-8494-2546edec72dd,Namespace:kube-system,Attempt:0,}"
Sep 12 10:18:13.214847 containerd[1514]: time="2025-09-12T10:18:13.214810313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttcw6,Uid:a5d073d7-e77c-4509-8c22-38d92c3ef854,Namespace:kube-system,Attempt:0,}"
Sep 12 10:18:13.353678 sshd[3436]: Connection closed by 10.0.0.1 port 46736
Sep 12 10:18:13.354756 sshd-session[3425]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:13.361353 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:46736.service: Deactivated successfully.
Sep 12 10:18:13.364445 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 10:18:13.366332 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit.
Sep 12 10:18:13.367800 systemd-logind[1497]: Removed session 8.
Sep 12 10:18:13.541659 kubelet[2634]: I0912 10:18:13.541564 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s552v" podStartSLOduration=7.6608257 podStartE2EDuration="23.541541777s" podCreationTimestamp="2025-09-12 10:17:50 +0000 UTC" firstStartedPulling="2025-09-12 10:17:51.256574283 +0000 UTC m=+6.958837028" lastFinishedPulling="2025-09-12 10:18:07.13729037 +0000 UTC m=+22.839553105" observedRunningTime="2025-09-12 10:18:13.540752531 +0000 UTC m=+29.243015266" watchObservedRunningTime="2025-09-12 10:18:13.541541777 +0000 UTC m=+29.243804512"
Sep 12 10:18:14.974481 systemd-networkd[1425]: cilium_host: Link UP
Sep 12 10:18:14.976090 systemd-networkd[1425]: cilium_net: Link UP
Sep 12 10:18:14.977089 systemd-networkd[1425]: cilium_net: Gained carrier
Sep 12 10:18:14.977508 systemd-networkd[1425]: cilium_host: Gained carrier
Sep 12 10:18:15.053870 systemd-networkd[1425]: cilium_host: Gained IPv6LL
Sep 12 10:18:15.108911 systemd-networkd[1425]: cilium_vxlan: Link UP
Sep 12 10:18:15.108927 systemd-networkd[1425]: cilium_vxlan: Gained carrier
Sep 12 10:18:15.148891 systemd-networkd[1425]: cilium_net: Gained IPv6LL
Sep 12 10:18:15.355651 kernel: NET: Registered PF_ALG protocol family
Sep 12 10:18:16.080678 systemd-networkd[1425]: lxc_health: Link UP
Sep 12 10:18:16.080993 systemd-networkd[1425]: lxc_health: Gained carrier
Sep 12 10:18:16.324588 systemd-networkd[1425]: lxc33d77d4be258: Link UP
Sep 12 10:18:16.333684 kernel: eth0: renamed from tmpc4495
Sep 12 10:18:16.338752 systemd-networkd[1425]: lxc33d77d4be258: Gained carrier
Sep 12 10:18:16.343569 systemd-networkd[1425]: lxc9b56b033961c: Link UP
Sep 12 10:18:16.352674 kernel: eth0: renamed from tmp6b5e8
Sep 12 10:18:16.357979 systemd-networkd[1425]: lxc9b56b033961c: Gained carrier
Sep 12 10:18:16.654657 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL
Sep 12 10:18:17.676836 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Sep 12 10:18:17.844787 kubelet[2634]: I0912 10:18:17.844717 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 10:18:18.188784 systemd-networkd[1425]: lxc9b56b033961c: Gained IPv6LL
Sep 12 10:18:18.255081 systemd-networkd[1425]: lxc33d77d4be258: Gained IPv6LL
Sep 12 10:18:18.373884 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:46750.service - OpenSSH per-connection server daemon (10.0.0.1:46750).
Sep 12 10:18:18.416149 sshd[3876]: Accepted publickey for core from 10.0.0.1 port 46750 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:18:18.417794 sshd-session[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:18.422175 systemd-logind[1497]: New session 9 of user core.
Sep 12 10:18:18.427743 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 10:18:18.696051 sshd[3878]: Connection closed by 10.0.0.1 port 46750
Sep 12 10:18:18.696478 sshd-session[3876]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:18.700707 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:46750.service: Deactivated successfully.
Sep 12 10:18:18.703503 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 10:18:18.704321 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit.
Sep 12 10:18:18.705228 systemd-logind[1497]: Removed session 9.
Sep 12 10:18:19.894325 containerd[1514]: time="2025-09-12T10:18:19.894169357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:18:19.895141 containerd[1514]: time="2025-09-12T10:18:19.894928544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:18:19.895141 containerd[1514]: time="2025-09-12T10:18:19.894957899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:18:19.895141 containerd[1514]: time="2025-09-12T10:18:19.895088203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:18:19.902419 containerd[1514]: time="2025-09-12T10:18:19.902085344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:18:19.902419 containerd[1514]: time="2025-09-12T10:18:19.902191684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:18:19.902419 containerd[1514]: time="2025-09-12T10:18:19.902207394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:18:19.902419 containerd[1514]: time="2025-09-12T10:18:19.902304426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:18:19.930870 systemd[1]: Started cri-containerd-c4495b6134884a6ab9cdb3e8613110d7983cd7b70c3397ac43541972274d305d.scope - libcontainer container c4495b6134884a6ab9cdb3e8613110d7983cd7b70c3397ac43541972274d305d.
Sep 12 10:18:19.936668 systemd[1]: Started cri-containerd-6b5e8dfbeafa00f5445fa0734fabba30167c899f34f351f100906b6f93c05f01.scope - libcontainer container 6b5e8dfbeafa00f5445fa0734fabba30167c899f34f351f100906b6f93c05f01.
Sep 12 10:18:19.947113 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 10:18:19.952358 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 10:18:19.980812 containerd[1514]: time="2025-09-12T10:18:19.980752894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttcw6,Uid:a5d073d7-e77c-4509-8c22-38d92c3ef854,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4495b6134884a6ab9cdb3e8613110d7983cd7b70c3397ac43541972274d305d\""
Sep 12 10:18:19.988270 containerd[1514]: time="2025-09-12T10:18:19.987657802Z" level=info msg="CreateContainer within sandbox \"c4495b6134884a6ab9cdb3e8613110d7983cd7b70c3397ac43541972274d305d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 10:18:19.989503 containerd[1514]: time="2025-09-12T10:18:19.989445721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zbrm5,Uid:bc12d953-99fb-484f-8494-2546edec72dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b5e8dfbeafa00f5445fa0734fabba30167c899f34f351f100906b6f93c05f01\""
Sep 12 10:18:19.992908 containerd[1514]: time="2025-09-12T10:18:19.992851072Z" level=info msg="CreateContainer within sandbox \"6b5e8dfbeafa00f5445fa0734fabba30167c899f34f351f100906b6f93c05f01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 10:18:20.016196 containerd[1514]: time="2025-09-12T10:18:20.016128386Z" level=info msg="CreateContainer within sandbox \"c4495b6134884a6ab9cdb3e8613110d7983cd7b70c3397ac43541972274d305d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52bd295b786793fb7dff2f050222399677ae4619be83a7abfe2c365a9818e397\""
Sep 12 10:18:20.017103 containerd[1514]: time="2025-09-12T10:18:20.017033157Z" level=info msg="StartContainer for \"52bd295b786793fb7dff2f050222399677ae4619be83a7abfe2c365a9818e397\""
Sep 12 10:18:20.029850 containerd[1514]: time="2025-09-12T10:18:20.029791813Z" level=info msg="CreateContainer within sandbox \"6b5e8dfbeafa00f5445fa0734fabba30167c899f34f351f100906b6f93c05f01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d927aad8a07bbc335eecd9f3e4d8ca38037ed5b5b9c102462ed4b336a28db2ae\""
Sep 12 10:18:20.030420 containerd[1514]: time="2025-09-12T10:18:20.030370360Z" level=info msg="StartContainer for \"d927aad8a07bbc335eecd9f3e4d8ca38037ed5b5b9c102462ed4b336a28db2ae\""
Sep 12 10:18:20.055763 systemd[1]: Started cri-containerd-52bd295b786793fb7dff2f050222399677ae4619be83a7abfe2c365a9818e397.scope - libcontainer container 52bd295b786793fb7dff2f050222399677ae4619be83a7abfe2c365a9818e397.
Sep 12 10:18:20.061289 systemd[1]: Started cri-containerd-d927aad8a07bbc335eecd9f3e4d8ca38037ed5b5b9c102462ed4b336a28db2ae.scope - libcontainer container d927aad8a07bbc335eecd9f3e4d8ca38037ed5b5b9c102462ed4b336a28db2ae.
Sep 12 10:18:20.108299 containerd[1514]: time="2025-09-12T10:18:20.108134253Z" level=info msg="StartContainer for \"52bd295b786793fb7dff2f050222399677ae4619be83a7abfe2c365a9818e397\" returns successfully"
Sep 12 10:18:20.108299 containerd[1514]: time="2025-09-12T10:18:20.108197914Z" level=info msg="StartContainer for \"d927aad8a07bbc335eecd9f3e4d8ca38037ed5b5b9c102462ed4b336a28db2ae\" returns successfully"
Sep 12 10:18:20.599065 kubelet[2634]: I0912 10:18:20.598707 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zbrm5" podStartSLOduration=31.598685266 podStartE2EDuration="31.598685266s" podCreationTimestamp="2025-09-12 10:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:20.598555702 +0000 UTC m=+36.300818458" watchObservedRunningTime="2025-09-12 10:18:20.598685266 +0000 UTC m=+36.300948002"
Sep 12 10:18:20.789197 kubelet[2634]: I0912 10:18:20.789115 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ttcw6" podStartSLOduration=30.789095174 podStartE2EDuration="30.789095174s" podCreationTimestamp="2025-09-12 10:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:20.787330939 +0000 UTC m=+36.489593674" watchObservedRunningTime="2025-09-12 10:18:20.789095174 +0000 UTC m=+36.491357909"
Sep 12 10:18:23.710054 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:59770.service - OpenSSH per-connection server daemon (10.0.0.1:59770).
Sep 12 10:18:23.758481 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 59770 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:18:23.760888 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:23.766773 systemd-logind[1497]: New session 10 of user core.
Sep 12 10:18:23.777890 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 10:18:23.948429 sshd[4076]: Connection closed by 10.0.0.1 port 59770
Sep 12 10:18:23.948825 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:23.953294 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:59770.service: Deactivated successfully.
Sep 12 10:18:23.955685 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 10:18:23.956468 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit.
Sep 12 10:18:23.957355 systemd-logind[1497]: Removed session 10.
Sep 12 10:18:28.961539 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:59772.service - OpenSSH per-connection server daemon (10.0.0.1:59772).
Sep 12 10:18:29.001788 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 59772 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:18:29.003346 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:29.007518 systemd-logind[1497]: New session 11 of user core.
Sep 12 10:18:29.019745 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 10:18:29.136502 sshd[4094]: Connection closed by 10.0.0.1 port 59772
Sep 12 10:18:29.136921 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:29.145717 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:59772.service: Deactivated successfully.
Sep 12 10:18:29.147917 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 10:18:29.149479 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit.
Sep 12 10:18:29.155978 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:59788.service - OpenSSH per-connection server daemon (10.0.0.1:59788).
Sep 12 10:18:29.157257 systemd-logind[1497]: Removed session 11.
Sep 12 10:18:29.192018 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 59788 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:18:29.193477 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:18:29.197767 systemd-logind[1497]: New session 12 of user core.
Sep 12 10:18:29.207767 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 10:18:29.369891 sshd[4110]: Connection closed by 10.0.0.1 port 59788
Sep 12 10:18:29.370390 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Sep 12 10:18:29.382980 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:59788.service: Deactivated successfully.
Sep 12 10:18:29.386717 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 10:18:29.388793 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit.
Sep 12 10:18:29.397166 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:59798.service - OpenSSH per-connection server daemon (10.0.0.1:59798). Sep 12 10:18:29.398327 systemd-logind[1497]: Removed session 12. Sep 12 10:18:29.433130 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 59798 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:29.434793 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:29.439576 systemd-logind[1497]: New session 13 of user core. Sep 12 10:18:29.449736 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 10:18:29.564520 sshd[4123]: Connection closed by 10.0.0.1 port 59798 Sep 12 10:18:29.564915 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:29.568906 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:59798.service: Deactivated successfully. Sep 12 10:18:29.571164 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 10:18:29.571961 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Sep 12 10:18:29.572801 systemd-logind[1497]: Removed session 13. Sep 12 10:18:34.581787 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:52418.service - OpenSSH per-connection server daemon (10.0.0.1:52418). Sep 12 10:18:34.622294 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 52418 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:34.623867 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:34.628317 systemd-logind[1497]: New session 14 of user core. Sep 12 10:18:34.633738 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 10:18:34.746695 sshd[4139]: Connection closed by 10.0.0.1 port 52418 Sep 12 10:18:34.747063 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:34.750661 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:52418.service: Deactivated successfully. Sep 12 10:18:34.753021 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 10:18:34.753719 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. Sep 12 10:18:34.754512 systemd-logind[1497]: Removed session 14. Sep 12 10:18:39.760373 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:52428.service - OpenSSH per-connection server daemon (10.0.0.1:52428). Sep 12 10:18:39.801179 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 52428 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:39.802948 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:39.807292 systemd-logind[1497]: New session 15 of user core. Sep 12 10:18:39.814783 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 10:18:39.945021 sshd[4154]: Connection closed by 10.0.0.1 port 52428 Sep 12 10:18:39.945395 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:39.954610 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:52428.service: Deactivated successfully. Sep 12 10:18:39.956942 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 10:18:39.958712 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. Sep 12 10:18:39.965940 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:56890.service - OpenSSH per-connection server daemon (10.0.0.1:56890). Sep 12 10:18:39.966874 systemd-logind[1497]: Removed session 15. 
Sep 12 10:18:40.001829 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 56890 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:40.003381 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:40.008565 systemd-logind[1497]: New session 16 of user core. Sep 12 10:18:40.015813 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 10:18:40.520972 sshd[4169]: Connection closed by 10.0.0.1 port 56890 Sep 12 10:18:40.521499 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:40.538109 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:56890.service: Deactivated successfully. Sep 12 10:18:40.540297 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 10:18:40.542092 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Sep 12 10:18:40.549887 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:56902.service - OpenSSH per-connection server daemon (10.0.0.1:56902). Sep 12 10:18:40.551341 systemd-logind[1497]: Removed session 16. Sep 12 10:18:40.590520 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 56902 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:40.592013 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:40.596589 systemd-logind[1497]: New session 17 of user core. Sep 12 10:18:40.606756 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 10:18:41.117739 sshd[4182]: Connection closed by 10.0.0.1 port 56902 Sep 12 10:18:41.118931 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:41.129052 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:56902.service: Deactivated successfully. Sep 12 10:18:41.132698 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 10:18:41.134855 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. 
Sep 12 10:18:41.145026 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:56914.service - OpenSSH per-connection server daemon (10.0.0.1:56914). Sep 12 10:18:41.146828 systemd-logind[1497]: Removed session 17. Sep 12 10:18:41.182760 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 56914 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:41.184381 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:41.189971 systemd-logind[1497]: New session 18 of user core. Sep 12 10:18:41.203771 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 10:18:41.468797 sshd[4206]: Connection closed by 10.0.0.1 port 56914 Sep 12 10:18:41.469034 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:41.480237 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:56914.service: Deactivated successfully. Sep 12 10:18:41.482769 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 10:18:41.484742 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. Sep 12 10:18:41.502940 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:56926.service - OpenSSH per-connection server daemon (10.0.0.1:56926). Sep 12 10:18:41.503892 systemd-logind[1497]: Removed session 18. Sep 12 10:18:41.540853 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 56926 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:41.542321 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:41.546932 systemd-logind[1497]: New session 19 of user core. Sep 12 10:18:41.556772 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 12 10:18:41.671313 sshd[4220]: Connection closed by 10.0.0.1 port 56926 Sep 12 10:18:41.671774 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:41.676465 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:56926.service: Deactivated successfully. Sep 12 10:18:41.678531 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 10:18:41.679396 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. Sep 12 10:18:41.680298 systemd-logind[1497]: Removed session 19. Sep 12 10:18:46.712923 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:56930.service - OpenSSH per-connection server daemon (10.0.0.1:56930). Sep 12 10:18:46.751321 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 56930 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:46.753979 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:46.759667 systemd-logind[1497]: New session 20 of user core. Sep 12 10:18:46.775822 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 10:18:46.919837 sshd[4238]: Connection closed by 10.0.0.1 port 56930 Sep 12 10:18:46.920553 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:46.925760 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:56930.service: Deactivated successfully. Sep 12 10:18:46.928883 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 10:18:46.929939 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. Sep 12 10:18:46.931223 systemd-logind[1497]: Removed session 20. Sep 12 10:18:51.938837 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736). 
Sep 12 10:18:51.986503 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:51.988286 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:51.992708 systemd-logind[1497]: New session 21 of user core. Sep 12 10:18:52.002860 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 10:18:52.120534 sshd[4259]: Connection closed by 10.0.0.1 port 54736 Sep 12 10:18:52.120984 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:52.125006 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:54736.service: Deactivated successfully. Sep 12 10:18:52.127502 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 10:18:52.128388 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. Sep 12 10:18:52.129331 systemd-logind[1497]: Removed session 21. Sep 12 10:18:57.136023 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738). Sep 12 10:18:57.175929 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:18:57.177483 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:57.181805 systemd-logind[1497]: New session 22 of user core. Sep 12 10:18:57.188775 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 10:18:57.299423 sshd[4274]: Connection closed by 10.0.0.1 port 54738 Sep 12 10:18:57.299847 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:57.304229 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:54738.service: Deactivated successfully. Sep 12 10:18:57.306715 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 10:18:57.307655 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. 
Sep 12 10:18:57.308638 systemd-logind[1497]: Removed session 22. Sep 12 10:19:02.312506 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:43158.service - OpenSSH per-connection server daemon (10.0.0.1:43158). Sep 12 10:19:02.352115 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 43158 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:19:02.353577 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:02.357651 systemd-logind[1497]: New session 23 of user core. Sep 12 10:19:02.364743 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 10:19:02.478162 sshd[4290]: Connection closed by 10.0.0.1 port 43158 Sep 12 10:19:02.478664 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:02.491950 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:43158.service: Deactivated successfully. Sep 12 10:19:02.494157 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 10:19:02.495914 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Sep 12 10:19:02.504885 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:43160.service - OpenSSH per-connection server daemon (10.0.0.1:43160). Sep 12 10:19:02.506004 systemd-logind[1497]: Removed session 23. Sep 12 10:19:02.539571 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 43160 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg Sep 12 10:19:02.540951 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:02.545318 systemd-logind[1497]: New session 24 of user core. Sep 12 10:19:02.555765 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 12 10:19:03.912471 containerd[1514]: time="2025-09-12T10:19:03.912071469Z" level=info msg="StopContainer for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" with timeout 30 (s)" Sep 12 10:19:03.919714 containerd[1514]: time="2025-09-12T10:19:03.919521477Z" level=info msg="Stop container \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" with signal terminated" Sep 12 10:19:03.939938 systemd[1]: run-containerd-runc-k8s.io-a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601-runc.fCOGC4.mount: Deactivated successfully. Sep 12 10:19:03.942297 systemd[1]: cri-containerd-4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8.scope: Deactivated successfully. Sep 12 10:19:03.964423 containerd[1514]: time="2025-09-12T10:19:03.963517441Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:19:03.964705 containerd[1514]: time="2025-09-12T10:19:03.963831121Z" level=info msg="StopContainer for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" with timeout 2 (s)" Sep 12 10:19:03.965370 containerd[1514]: time="2025-09-12T10:19:03.965183716Z" level=info msg="Stop container \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" with signal terminated" Sep 12 10:19:03.966766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8-rootfs.mount: Deactivated successfully. 
Sep 12 10:19:03.972886 systemd-networkd[1425]: lxc_health: Link DOWN Sep 12 10:19:03.972896 systemd-networkd[1425]: lxc_health: Lost carrier Sep 12 10:19:03.975969 containerd[1514]: time="2025-09-12T10:19:03.975919604Z" level=info msg="shim disconnected" id=4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8 namespace=k8s.io Sep 12 10:19:03.975969 containerd[1514]: time="2025-09-12T10:19:03.975964269Z" level=warning msg="cleaning up after shim disconnected" id=4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8 namespace=k8s.io Sep 12 10:19:03.975969 containerd[1514]: time="2025-09-12T10:19:03.975972936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:03.996557 systemd[1]: cri-containerd-a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601.scope: Deactivated successfully. Sep 12 10:19:03.996960 systemd[1]: cri-containerd-a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601.scope: Consumed 7.128s CPU time, 125.1M memory peak, 212K read from disk, 13.3M written to disk. Sep 12 10:19:03.997694 containerd[1514]: time="2025-09-12T10:19:03.997646649Z" level=info msg="StopContainer for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" returns successfully" Sep 12 10:19:04.002522 containerd[1514]: time="2025-09-12T10:19:04.002457943Z" level=info msg="StopPodSandbox for \"3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9\"" Sep 12 10:19:04.002734 containerd[1514]: time="2025-09-12T10:19:04.002533448Z" level=info msg="Container to stop \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:04.005777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9-shm.mount: Deactivated successfully. 
Sep 12 10:19:04.012545 systemd[1]: cri-containerd-3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9.scope: Deactivated successfully. Sep 12 10:19:04.026208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601-rootfs.mount: Deactivated successfully. Sep 12 10:19:04.060829 containerd[1514]: time="2025-09-12T10:19:04.060700853Z" level=info msg="shim disconnected" id=3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9 namespace=k8s.io Sep 12 10:19:04.060829 containerd[1514]: time="2025-09-12T10:19:04.060788180Z" level=warning msg="cleaning up after shim disconnected" id=3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9 namespace=k8s.io Sep 12 10:19:04.060829 containerd[1514]: time="2025-09-12T10:19:04.060815442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:04.061127 containerd[1514]: time="2025-09-12T10:19:04.060835691Z" level=info msg="shim disconnected" id=a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601 namespace=k8s.io Sep 12 10:19:04.061127 containerd[1514]: time="2025-09-12T10:19:04.060895375Z" level=warning msg="cleaning up after shim disconnected" id=a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601 namespace=k8s.io Sep 12 10:19:04.061127 containerd[1514]: time="2025-09-12T10:19:04.060903340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:04.077673 containerd[1514]: time="2025-09-12T10:19:04.077406406Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:19:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 10:19:04.078725 containerd[1514]: time="2025-09-12T10:19:04.078686331Z" level=info msg="StopContainer for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" returns successfully" Sep 12 10:19:04.079218 
containerd[1514]: time="2025-09-12T10:19:04.079193800Z" level=info msg="TearDown network for sandbox \"3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9\" successfully" Sep 12 10:19:04.079218 containerd[1514]: time="2025-09-12T10:19:04.079215632Z" level=info msg="StopPodSandbox for \"3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9\" returns successfully" Sep 12 10:19:04.079318 containerd[1514]: time="2025-09-12T10:19:04.079204390Z" level=info msg="StopPodSandbox for \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\"" Sep 12 10:19:04.079393 containerd[1514]: time="2025-09-12T10:19:04.079326584Z" level=info msg="Container to stop \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:04.079393 containerd[1514]: time="2025-09-12T10:19:04.079376599Z" level=info msg="Container to stop \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:04.079393 containerd[1514]: time="2025-09-12T10:19:04.079386498Z" level=info msg="Container to stop \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:04.079393 containerd[1514]: time="2025-09-12T10:19:04.079395656Z" level=info msg="Container to stop \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:04.079393 containerd[1514]: time="2025-09-12T10:19:04.079404823Z" level=info msg="Container to stop \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:19:04.089130 systemd[1]: cri-containerd-187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2.scope: Deactivated successfully. 
Sep 12 10:19:04.113233 containerd[1514]: time="2025-09-12T10:19:04.113160446Z" level=info msg="shim disconnected" id=187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2 namespace=k8s.io Sep 12 10:19:04.113233 containerd[1514]: time="2025-09-12T10:19:04.113225290Z" level=warning msg="cleaning up after shim disconnected" id=187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2 namespace=k8s.io Sep 12 10:19:04.113233 containerd[1514]: time="2025-09-12T10:19:04.113234187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:04.127718 containerd[1514]: time="2025-09-12T10:19:04.127656848Z" level=info msg="TearDown network for sandbox \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" successfully" Sep 12 10:19:04.127718 containerd[1514]: time="2025-09-12T10:19:04.127700903Z" level=info msg="StopPodSandbox for \"187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2\" returns successfully" Sep 12 10:19:04.166877 kubelet[2634]: I0912 10:19:04.166733 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwxnj\" (UniqueName: \"kubernetes.io/projected/f188a585-f61d-44ef-9956-b6e57086bd2b-kube-api-access-fwxnj\") pod \"f188a585-f61d-44ef-9956-b6e57086bd2b\" (UID: \"f188a585-f61d-44ef-9956-b6e57086bd2b\") " Sep 12 10:19:04.166877 kubelet[2634]: I0912 10:19:04.166785 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f188a585-f61d-44ef-9956-b6e57086bd2b-cilium-config-path\") pod \"f188a585-f61d-44ef-9956-b6e57086bd2b\" (UID: \"f188a585-f61d-44ef-9956-b6e57086bd2b\") " Sep 12 10:19:04.175958 kubelet[2634]: I0912 10:19:04.175391 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f188a585-f61d-44ef-9956-b6e57086bd2b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f188a585-f61d-44ef-9956-b6e57086bd2b" 
(UID: "f188a585-f61d-44ef-9956-b6e57086bd2b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 10:19:04.179877 kubelet[2634]: I0912 10:19:04.179829 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f188a585-f61d-44ef-9956-b6e57086bd2b-kube-api-access-fwxnj" (OuterVolumeSpecName: "kube-api-access-fwxnj") pod "f188a585-f61d-44ef-9956-b6e57086bd2b" (UID: "f188a585-f61d-44ef-9956-b6e57086bd2b"). InnerVolumeSpecName "kube-api-access-fwxnj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:19:04.267086 kubelet[2634]: I0912 10:19:04.267019 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-kernel\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267086 kubelet[2634]: I0912 10:19:04.267076 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-clustermesh-secrets\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267286 kubelet[2634]: I0912 10:19:04.267144 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-cgroup\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267286 kubelet[2634]: I0912 10:19:04.267164 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-net\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: 
\"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267286 kubelet[2634]: I0912 10:19:04.267162 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:04.267286 kubelet[2634]: I0912 10:19:04.267207 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:04.267286 kubelet[2634]: I0912 10:19:04.267186 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-run\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267419 kubelet[2634]: I0912 10:19:04.267226 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:04.267419 kubelet[2634]: I0912 10:19:04.267247 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hubble-tls\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267419 kubelet[2634]: I0912 10:19:04.267250 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:19:04.267419 kubelet[2634]: I0912 10:19:04.267264 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cni-path\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267419 kubelet[2634]: I0912 10:19:04.267278 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-lib-modules\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267419 kubelet[2634]: I0912 10:19:04.267294 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-xtables-lock\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267572 kubelet[2634]: I0912 10:19:04.267310 2634 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-config-path\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267572 kubelet[2634]: I0912 10:19:04.267327 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-etc-cni-netd\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267572 kubelet[2634]: I0912 10:19:04.267342 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc5qk\" (UniqueName: \"kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-kube-api-access-sc5qk\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267572 kubelet[2634]: I0912 10:19:04.267359 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hostproc\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267572 kubelet[2634]: I0912 10:19:04.267372 2634 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-bpf-maps\") pod \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\" (UID: \"5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca\") " Sep 12 10:19:04.267572 kubelet[2634]: I0912 10:19:04.267407 2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fwxnj\" (UniqueName: \"kubernetes.io/projected/f188a585-f61d-44ef-9956-b6e57086bd2b-kube-api-access-fwxnj\") on node \"localhost\" DevicePath \"\"" Sep 12 10:19:04.267740 kubelet[2634]: I0912 
10:19:04.267417 2634 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f188a585-f61d-44ef-9956-b6e57086bd2b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.267740 kubelet[2634]: I0912 10:19:04.267427 2634 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.267740 kubelet[2634]: I0912 10:19:04.267436 2634 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.267740 kubelet[2634]: I0912 10:19:04.267444 2634 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.267740 kubelet[2634]: I0912 10:19:04.267452 2634 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.267740 kubelet[2634]: I0912 10:19:04.267470 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:19:04.267740 kubelet[2634]: I0912 10:19:04.267488 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cni-path" (OuterVolumeSpecName: "cni-path") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:19:04.268041 kubelet[2634]: I0912 10:19:04.267502 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:19:04.268041 kubelet[2634]: I0912 10:19:04.267515 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:19:04.270677 kubelet[2634]: I0912 10:19:04.270645 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 10:19:04.270760 kubelet[2634]: I0912 10:19:04.270688 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hostproc" (OuterVolumeSpecName: "hostproc") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:19:04.270760 kubelet[2634]: I0912 10:19:04.270706 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 10:19:04.271065 kubelet[2634]: I0912 10:19:04.271021 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 10:19:04.271065 kubelet[2634]: I0912 10:19:04.271009 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 10:19:04.271277 kubelet[2634]: I0912 10:19:04.271249 2634 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-kube-api-access-sc5qk" (OuterVolumeSpecName: "kube-api-access-sc5qk") pod "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" (UID: "5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca"). InnerVolumeSpecName "kube-api-access-sc5qk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 10:19:04.368307 kubelet[2634]: I0912 10:19:04.368276 2634 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368307 kubelet[2634]: I0912 10:19:04.368297 2634 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368307 kubelet[2634]: I0912 10:19:04.368306 2634 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368316 2634 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368325 2634 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368334 2634 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368342 2634 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368349 2634 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368357 2634 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sc5qk\" (UniqueName: \"kubernetes.io/projected/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-kube-api-access-sc5qk\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.368426 kubelet[2634]: I0912 10:19:04.368366 2634 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 12 10:19:04.393843 systemd[1]: Removed slice kubepods-besteffort-podf188a585_f61d_44ef_9956_b6e57086bd2b.slice - libcontainer container kubepods-besteffort-podf188a585_f61d_44ef_9956_b6e57086bd2b.slice.
Sep 12 10:19:04.395241 systemd[1]: Removed slice kubepods-burstable-pod5c897f9b_0a9d_46a5_8ff1_8c6b2d638cca.slice - libcontainer container kubepods-burstable-pod5c897f9b_0a9d_46a5_8ff1_8c6b2d638cca.slice.
Sep 12 10:19:04.395337 systemd[1]: kubepods-burstable-pod5c897f9b_0a9d_46a5_8ff1_8c6b2d638cca.slice: Consumed 7.245s CPU time, 125.4M memory peak, 308K read from disk, 13.3M written to disk.
Sep 12 10:19:04.446144 kubelet[2634]: E0912 10:19:04.446011 2634 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 10:19:04.626788 kubelet[2634]: I0912 10:19:04.626742 2634 scope.go:117] "RemoveContainer" containerID="a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601"
Sep 12 10:19:04.633982 containerd[1514]: time="2025-09-12T10:19:04.633940153Z" level=info msg="RemoveContainer for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\""
Sep 12 10:19:04.640603 containerd[1514]: time="2025-09-12T10:19:04.640560209Z" level=info msg="RemoveContainer for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" returns successfully"
Sep 12 10:19:04.640856 kubelet[2634]: I0912 10:19:04.640823 2634 scope.go:117] "RemoveContainer" containerID="24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22"
Sep 12 10:19:04.641906 containerd[1514]: time="2025-09-12T10:19:04.641869140Z" level=info msg="RemoveContainer for \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\""
Sep 12 10:19:04.647068 containerd[1514]: time="2025-09-12T10:19:04.647021201Z" level=info msg="RemoveContainer for \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\" returns successfully"
Sep 12 10:19:04.647277 kubelet[2634]: I0912 10:19:04.647246 2634 scope.go:117] "RemoveContainer" containerID="cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829"
Sep 12 10:19:04.648807 containerd[1514]: time="2025-09-12T10:19:04.648468666Z" level=info msg="RemoveContainer for \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\""
Sep 12 10:19:04.652686 containerd[1514]: time="2025-09-12T10:19:04.652663300Z" level=info msg="RemoveContainer for \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\" returns successfully"
Sep 12 10:19:04.652947 kubelet[2634]: I0912 10:19:04.652886 2634 scope.go:117] "RemoveContainer" containerID="1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344"
Sep 12 10:19:04.653936 containerd[1514]: time="2025-09-12T10:19:04.653902156Z" level=info msg="RemoveContainer for \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\""
Sep 12 10:19:04.657007 containerd[1514]: time="2025-09-12T10:19:04.656975738Z" level=info msg="RemoveContainer for \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\" returns successfully"
Sep 12 10:19:04.657161 kubelet[2634]: I0912 10:19:04.657123 2634 scope.go:117] "RemoveContainer" containerID="22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9"
Sep 12 10:19:04.658120 containerd[1514]: time="2025-09-12T10:19:04.658094255Z" level=info msg="RemoveContainer for \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\""
Sep 12 10:19:04.661200 containerd[1514]: time="2025-09-12T10:19:04.661170320Z" level=info msg="RemoveContainer for \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\" returns successfully"
Sep 12 10:19:04.661397 kubelet[2634]: I0912 10:19:04.661327 2634 scope.go:117] "RemoveContainer" containerID="a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601"
Sep 12 10:19:04.661590 containerd[1514]: time="2025-09-12T10:19:04.661523174Z" level=error msg="ContainerStatus for \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\": not found"
Sep 12 10:19:04.668297 kubelet[2634]: E0912 10:19:04.668263 2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\": not found" containerID="a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601"
Sep 12 10:19:04.668394 kubelet[2634]: I0912 10:19:04.668308 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601"} err="failed to get container status \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\": rpc error: code = NotFound desc = an error occurred when try to find container \"a63332b2aae8db56a9f0e032e4e65fec9ee411e80789074de1ca93e2ac6b4601\": not found"
Sep 12 10:19:04.668428 kubelet[2634]: I0912 10:19:04.668397 2634 scope.go:117] "RemoveContainer" containerID="24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22"
Sep 12 10:19:04.668624 containerd[1514]: time="2025-09-12T10:19:04.668573142Z" level=error msg="ContainerStatus for \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\": not found"
Sep 12 10:19:04.668816 kubelet[2634]: E0912 10:19:04.668789 2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\": not found" containerID="24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22"
Sep 12 10:19:04.668875 kubelet[2634]: I0912 10:19:04.668823 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22"} err="failed to get container status \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\": rpc error: code = NotFound desc = an error occurred when try to find container \"24325fb2df19a16eca3a0611a729d26bace066dc3684e2d04c66bef6d546ce22\": not found"
Sep 12 10:19:04.668875 kubelet[2634]: I0912 10:19:04.668851 2634 scope.go:117] "RemoveContainer" containerID="cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829"
Sep 12 10:19:04.669088 containerd[1514]: time="2025-09-12T10:19:04.669036888Z" level=error msg="ContainerStatus for \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\": not found"
Sep 12 10:19:04.669206 kubelet[2634]: E0912 10:19:04.669184 2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\": not found" containerID="cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829"
Sep 12 10:19:04.669206 kubelet[2634]: I0912 10:19:04.669206 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829"} err="failed to get container status \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\": rpc error: code = NotFound desc = an error occurred when try to find container \"cde622aeb882ad8c088a79f50756f7f87c8056c603dbd6ceadc14fcd6ea35829\": not found"
Sep 12 10:19:04.669288 kubelet[2634]: I0912 10:19:04.669223 2634 scope.go:117] "RemoveContainer" containerID="1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344"
Sep 12 10:19:04.669388 containerd[1514]: time="2025-09-12T10:19:04.669354645Z" level=error msg="ContainerStatus for \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\": not found"
Sep 12 10:19:04.669498 kubelet[2634]: E0912 10:19:04.669466 2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\": not found" containerID="1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344"
Sep 12 10:19:04.669533 kubelet[2634]: I0912 10:19:04.669494 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344"} err="failed to get container status \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f0c1601b9683b2b3835bf3c84b65ab6f3407f592811cd07922b5bc90699a344\": not found"
Sep 12 10:19:04.669533 kubelet[2634]: I0912 10:19:04.669507 2634 scope.go:117] "RemoveContainer" containerID="22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9"
Sep 12 10:19:04.669719 containerd[1514]: time="2025-09-12T10:19:04.669679255Z" level=error msg="ContainerStatus for \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\": not found"
Sep 12 10:19:04.669882 kubelet[2634]: E0912 10:19:04.669856 2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\": not found" containerID="22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9"
Sep 12 10:19:04.669948 kubelet[2634]: I0912 10:19:04.669892 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9"} err="failed to get container status \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"22178007d44fbed7a4b1884f538c89aaca60478c8abfec8c11a7dd6dc7097bd9\": not found"
Sep 12 10:19:04.669948 kubelet[2634]: I0912 10:19:04.669919 2634 scope.go:117] "RemoveContainer" containerID="4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8"
Sep 12 10:19:04.670839 containerd[1514]: time="2025-09-12T10:19:04.670809033Z" level=info msg="RemoveContainer for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\""
Sep 12 10:19:04.674195 containerd[1514]: time="2025-09-12T10:19:04.674158722Z" level=info msg="RemoveContainer for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" returns successfully"
Sep 12 10:19:04.674363 kubelet[2634]: I0912 10:19:04.674336 2634 scope.go:117] "RemoveContainer" containerID="4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8"
Sep 12 10:19:04.674538 containerd[1514]: time="2025-09-12T10:19:04.674503160Z" level=error msg="ContainerStatus for \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\": not found"
Sep 12 10:19:04.674679 kubelet[2634]: E0912 10:19:04.674650 2634 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\": not found" containerID="4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8"
Sep 12 10:19:04.674750 kubelet[2634]: I0912 10:19:04.674676 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8"} err="failed to get container status \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b3f5bb93bc62ff6006e1ef34bb3699d76efff43244d7084866f9133092a36c8\": not found"
Sep 12 10:19:04.935861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2-rootfs.mount: Deactivated successfully.
Sep 12 10:19:04.935991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3556497188b862a4b5de05251ce17cbd6f15f60dbf0ddb6662746699e2c7dfd9-rootfs.mount: Deactivated successfully.
Sep 12 10:19:04.936111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-187e926193588d29dd399e1ebdba701e4439943b2f9c4e5594958c131c7e8de2-shm.mount: Deactivated successfully.
Sep 12 10:19:04.936206 systemd[1]: var-lib-kubelet-pods-f188a585\x2df61d\x2d44ef\x2d9956\x2db6e57086bd2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfwxnj.mount: Deactivated successfully.
Sep 12 10:19:04.936300 systemd[1]: var-lib-kubelet-pods-5c897f9b\x2d0a9d\x2d46a5\x2d8ff1\x2d8c6b2d638cca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsc5qk.mount: Deactivated successfully.
Sep 12 10:19:04.936387 systemd[1]: var-lib-kubelet-pods-5c897f9b\x2d0a9d\x2d46a5\x2d8ff1\x2d8c6b2d638cca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 12 10:19:04.936471 systemd[1]: var-lib-kubelet-pods-5c897f9b\x2d0a9d\x2d46a5\x2d8ff1\x2d8c6b2d638cca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 12 10:19:05.880444 sshd[4305]: Connection closed by 10.0.0.1 port 43160
Sep 12 10:19:05.881114 sshd-session[4302]: pam_unix(sshd:session): session closed for user core
Sep 12 10:19:05.891567 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:43160.service: Deactivated successfully.
Sep 12 10:19:05.893830 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 10:19:05.895318 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit.
Sep 12 10:19:05.900873 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:43168.service - OpenSSH per-connection server daemon (10.0.0.1:43168).
Sep 12 10:19:05.902146 systemd-logind[1497]: Removed session 24.
Sep 12 10:19:05.940945 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 43168 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:19:05.942437 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:19:05.947096 systemd-logind[1497]: New session 25 of user core.
Sep 12 10:19:05.957744 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 10:19:06.388542 kubelet[2634]: I0912 10:19:06.388492 2634 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" path="/var/lib/kubelet/pods/5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca/volumes"
Sep 12 10:19:06.389553 kubelet[2634]: I0912 10:19:06.389519 2634 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f188a585-f61d-44ef-9956-b6e57086bd2b" path="/var/lib/kubelet/pods/f188a585-f61d-44ef-9956-b6e57086bd2b/volumes"
Sep 12 10:19:06.560718 sshd[4467]: Connection closed by 10.0.0.1 port 43168
Sep 12 10:19:06.560466 sshd-session[4464]: pam_unix(sshd:session): session closed for user core
Sep 12 10:19:06.571354 kubelet[2634]: I0912 10:19:06.570564 2634 memory_manager.go:355] "RemoveStaleState removing state" podUID="f188a585-f61d-44ef-9956-b6e57086bd2b" containerName="cilium-operator"
Sep 12 10:19:06.571354 kubelet[2634]: I0912 10:19:06.570627 2634 memory_manager.go:355] "RemoveStaleState removing state" podUID="5c897f9b-0a9d-46a5-8ff1-8c6b2d638cca" containerName="cilium-agent"
Sep 12 10:19:06.572950 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:43168.service: Deactivated successfully.
Sep 12 10:19:06.576364 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 10:19:06.578841 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit.
Sep 12 10:19:06.586690 systemd-logind[1497]: Removed session 25.
Sep 12 10:19:06.595986 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174).
Sep 12 10:19:06.604110 systemd[1]: Created slice kubepods-burstable-pod01dd665b_bdf4_4a93_abf9_83520de689be.slice - libcontainer container kubepods-burstable-pod01dd665b_bdf4_4a93_abf9_83520de689be.slice.
Sep 12 10:19:06.638336 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:19:06.639862 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:19:06.644873 systemd-logind[1497]: New session 26 of user core.
Sep 12 10:19:06.659812 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 10:19:06.681466 kubelet[2634]: I0912 10:19:06.681401 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01dd665b-bdf4-4a93-abf9-83520de689be-clustermesh-secrets\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681466 kubelet[2634]: I0912 10:19:06.681435 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01dd665b-bdf4-4a93-abf9-83520de689be-hubble-tls\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681466 kubelet[2634]: I0912 10:19:06.681457 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-lib-modules\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681466 kubelet[2634]: I0912 10:19:06.681473 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01dd665b-bdf4-4a93-abf9-83520de689be-cilium-config-path\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681466 kubelet[2634]: I0912 10:19:06.681489 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-cni-path\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681821 kubelet[2634]: I0912 10:19:06.681504 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-xtables-lock\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681821 kubelet[2634]: I0912 10:19:06.681564 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-host-proc-sys-net\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681821 kubelet[2634]: I0912 10:19:06.681643 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-cilium-cgroup\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681821 kubelet[2634]: I0912 10:19:06.681671 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-etc-cni-netd\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681821 kubelet[2634]: I0912 10:19:06.681716 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-bpf-maps\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.681821 kubelet[2634]: I0912 10:19:06.681736 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsnln\" (UniqueName: \"kubernetes.io/projected/01dd665b-bdf4-4a93-abf9-83520de689be-kube-api-access-tsnln\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.682017 kubelet[2634]: I0912 10:19:06.681751 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-cilium-run\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.682017 kubelet[2634]: I0912 10:19:06.681770 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01dd665b-bdf4-4a93-abf9-83520de689be-cilium-ipsec-secrets\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.682017 kubelet[2634]: I0912 10:19:06.681793 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-hostproc\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.682017 kubelet[2634]: I0912 10:19:06.681819 2634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01dd665b-bdf4-4a93-abf9-83520de689be-host-proc-sys-kernel\") pod \"cilium-xn2mk\" (UID: \"01dd665b-bdf4-4a93-abf9-83520de689be\") " pod="kube-system/cilium-xn2mk"
Sep 12 10:19:06.710726 sshd[4481]: Connection closed by 10.0.0.1 port 43174
Sep 12 10:19:06.711184 sshd-session[4478]: pam_unix(sshd:session): session closed for user core
Sep 12 10:19:06.719870 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:43174.service: Deactivated successfully.
Sep 12 10:19:06.722103 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 10:19:06.723638 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit.
Sep 12 10:19:06.733880 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:43184.service - OpenSSH per-connection server daemon (10.0.0.1:43184).
Sep 12 10:19:06.734929 systemd-logind[1497]: Removed session 26.
Sep 12 10:19:06.771450 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 43184 ssh2: RSA SHA256:gmzr2s0b/fyVS9LtAAsCbgkEdimDPvMEYQ1RIUPtWIg
Sep 12 10:19:06.773052 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:19:06.777505 systemd-logind[1497]: New session 27 of user core.
Sep 12 10:19:06.787915 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 10:19:06.911095 containerd[1514]: time="2025-09-12T10:19:06.910944074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xn2mk,Uid:01dd665b-bdf4-4a93-abf9-83520de689be,Namespace:kube-system,Attempt:0,}"
Sep 12 10:19:06.932785 containerd[1514]: time="2025-09-12T10:19:06.932021833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:19:06.932785 containerd[1514]: time="2025-09-12T10:19:06.932743229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:19:06.932785 containerd[1514]: time="2025-09-12T10:19:06.932755082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:19:06.933148 containerd[1514]: time="2025-09-12T10:19:06.933076795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:19:06.956864 systemd[1]: Started cri-containerd-60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185.scope - libcontainer container 60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185.
Sep 12 10:19:06.980866 containerd[1514]: time="2025-09-12T10:19:06.980814567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xn2mk,Uid:01dd665b-bdf4-4a93-abf9-83520de689be,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\""
Sep 12 10:19:06.983163 containerd[1514]: time="2025-09-12T10:19:06.983127371Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 10:19:06.996200 containerd[1514]: time="2025-09-12T10:19:06.996142852Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000\""
Sep 12 10:19:06.996549 containerd[1514]: time="2025-09-12T10:19:06.996526123Z" level=info msg="StartContainer for \"3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000\""
Sep 12 10:19:07.024763 systemd[1]: Started cri-containerd-3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000.scope - libcontainer container 3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000.
Sep 12 10:19:07.050884 containerd[1514]: time="2025-09-12T10:19:07.050835146Z" level=info msg="StartContainer for \"3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000\" returns successfully"
Sep 12 10:19:07.061717 systemd[1]: cri-containerd-3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000.scope: Deactivated successfully.
Sep 12 10:19:07.096908 containerd[1514]: time="2025-09-12T10:19:07.096838670Z" level=info msg="shim disconnected" id=3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000 namespace=k8s.io
Sep 12 10:19:07.096908 containerd[1514]: time="2025-09-12T10:19:07.096905567Z" level=warning msg="cleaning up after shim disconnected" id=3732a952c98b5607b8eca90d41203c3791aadd9b3cf8fdc5b1353d9771d48000 namespace=k8s.io
Sep 12 10:19:07.097155 containerd[1514]: time="2025-09-12T10:19:07.096914815Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:19:07.533140 kubelet[2634]: I0912 10:19:07.533074 2634 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:19:07Z","lastTransitionTime":"2025-09-12T10:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 10:19:07.647796 containerd[1514]: time="2025-09-12T10:19:07.647752410Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 10:19:07.660053 containerd[1514]: time="2025-09-12T10:19:07.660012319Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30\""
Sep 12 10:19:07.660876 containerd[1514]: time="2025-09-12T10:19:07.660840840Z" level=info msg="StartContainer for \"f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30\""
Sep 12 10:19:07.686748 systemd[1]: Started cri-containerd-f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30.scope - libcontainer container f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30.
Sep 12 10:19:07.714806 containerd[1514]: time="2025-09-12T10:19:07.714757249Z" level=info msg="StartContainer for \"f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30\" returns successfully"
Sep 12 10:19:07.721905 systemd[1]: cri-containerd-f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30.scope: Deactivated successfully.
Sep 12 10:19:07.746386 containerd[1514]: time="2025-09-12T10:19:07.746313882Z" level=info msg="shim disconnected" id=f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30 namespace=k8s.io
Sep 12 10:19:07.746386 containerd[1514]: time="2025-09-12T10:19:07.746378755Z" level=warning msg="cleaning up after shim disconnected" id=f802aa791eae468ff5d4553c43031b8d796477382f2f550ae56b2b05cd699d30 namespace=k8s.io
Sep 12 10:19:07.746386 containerd[1514]: time="2025-09-12T10:19:07.746388083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:19:08.651109 containerd[1514]: time="2025-09-12T10:19:08.650925879Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 10:19:08.668088 containerd[1514]: time="2025-09-12T10:19:08.668034755Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864\""
Sep 12 10:19:08.668578 containerd[1514]: time="2025-09-12T10:19:08.668543676Z" level=info msg="StartContainer for \"52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864\""
Sep 12 10:19:08.701755 systemd[1]: Started cri-containerd-52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864.scope - libcontainer container 52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864.
Sep 12 10:19:08.735201 containerd[1514]: time="2025-09-12T10:19:08.735055021Z" level=info msg="StartContainer for \"52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864\" returns successfully"
Sep 12 10:19:08.735576 systemd[1]: cri-containerd-52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864.scope: Deactivated successfully.
Sep 12 10:19:08.765475 containerd[1514]: time="2025-09-12T10:19:08.765409025Z" level=info msg="shim disconnected" id=52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864 namespace=k8s.io
Sep 12 10:19:08.765475 containerd[1514]: time="2025-09-12T10:19:08.765463038Z" level=warning msg="cleaning up after shim disconnected" id=52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864 namespace=k8s.io
Sep 12 10:19:08.765475 containerd[1514]: time="2025-09-12T10:19:08.765471514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:19:08.790121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52cee651d6dbababcb3bf14fdf5e478e0594731d633bb8b835580b44d761c864-rootfs.mount: Deactivated successfully.
Sep 12 10:19:09.447764 kubelet[2634]: E0912 10:19:09.447705 2634 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:19:09.654426 containerd[1514]: time="2025-09-12T10:19:09.654363585Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:19:09.668675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512194329.mount: Deactivated successfully. Sep 12 10:19:09.672178 containerd[1514]: time="2025-09-12T10:19:09.672123779Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1\"" Sep 12 10:19:09.672732 containerd[1514]: time="2025-09-12T10:19:09.672702823Z" level=info msg="StartContainer for \"d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1\"" Sep 12 10:19:09.702757 systemd[1]: Started cri-containerd-d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1.scope - libcontainer container d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1. Sep 12 10:19:09.730434 systemd[1]: cri-containerd-d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1.scope: Deactivated successfully. 
Sep 12 10:19:09.732378 containerd[1514]: time="2025-09-12T10:19:09.732344893Z" level=info msg="StartContainer for \"d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1\" returns successfully" Sep 12 10:19:09.769361 containerd[1514]: time="2025-09-12T10:19:09.769289156Z" level=info msg="shim disconnected" id=d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1 namespace=k8s.io Sep 12 10:19:09.769361 containerd[1514]: time="2025-09-12T10:19:09.769349742Z" level=warning msg="cleaning up after shim disconnected" id=d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1 namespace=k8s.io Sep 12 10:19:09.769361 containerd[1514]: time="2025-09-12T10:19:09.769360993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:09.790995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d88c7b2240709624fb77f3dae9edfa3fe9b5c13aa1d4adb739864fb16910edb1-rootfs.mount: Deactivated successfully. Sep 12 10:19:10.664360 containerd[1514]: time="2025-09-12T10:19:10.664318022Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:19:10.680370 containerd[1514]: time="2025-09-12T10:19:10.680327698Z" level=info msg="CreateContainer within sandbox \"60c577d390a54aa25f16c422d085dc0f7803da42cb8cd3a829c3f9b10d5ae185\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a6e6e16a94807f7e918124ca4942c33300a4448fda9c81c26ec64711658c63ac\"" Sep 12 10:19:10.680813 containerd[1514]: time="2025-09-12T10:19:10.680783486Z" level=info msg="StartContainer for \"a6e6e16a94807f7e918124ca4942c33300a4448fda9c81c26ec64711658c63ac\"" Sep 12 10:19:10.708755 systemd[1]: Started cri-containerd-a6e6e16a94807f7e918124ca4942c33300a4448fda9c81c26ec64711658c63ac.scope - libcontainer container a6e6e16a94807f7e918124ca4942c33300a4448fda9c81c26ec64711658c63ac. 
Sep 12 10:19:10.739503 containerd[1514]: time="2025-09-12T10:19:10.739441958Z" level=info msg="StartContainer for \"a6e6e16a94807f7e918124ca4942c33300a4448fda9c81c26ec64711658c63ac\" returns successfully" Sep 12 10:19:11.193657 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 12 10:19:11.681187 kubelet[2634]: I0912 10:19:11.681028 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xn2mk" podStartSLOduration=5.6810029669999995 podStartE2EDuration="5.681002967s" podCreationTimestamp="2025-09-12 10:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:19:11.680987999 +0000 UTC m=+87.383250734" watchObservedRunningTime="2025-09-12 10:19:11.681002967 +0000 UTC m=+87.383265702" Sep 12 10:19:14.298005 systemd-networkd[1425]: lxc_health: Link UP Sep 12 10:19:14.298327 systemd-networkd[1425]: lxc_health: Gained carrier Sep 12 10:19:15.532907 systemd-networkd[1425]: lxc_health: Gained IPv6LL Sep 12 10:19:19.696948 sshd[4494]: Connection closed by 10.0.0.1 port 43184 Sep 12 10:19:19.697369 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:19.701886 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:43184.service: Deactivated successfully. Sep 12 10:19:19.704307 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 10:19:19.705062 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit. Sep 12 10:19:19.706163 systemd-logind[1497]: Removed session 27.