Sep 9 02:20:25.937623 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025 Sep 9 02:20:25.937669 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 02:20:25.937687 kernel: BIOS-provided physical RAM map: Sep 9 02:20:25.937698 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 9 02:20:25.937708 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 9 02:20:25.937718 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 9 02:20:25.937729 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Sep 9 02:20:25.937739 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Sep 9 02:20:25.937750 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 9 02:20:25.937760 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 9 02:20:25.937774 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 02:20:25.937784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 9 02:20:25.937795 kernel: NX (Execute Disable) protection: active Sep 9 02:20:25.937805 kernel: APIC: Static calls initialized Sep 9 02:20:25.937817 kernel: SMBIOS 2.8 present. Sep 9 02:20:25.937845 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Sep 9 02:20:25.937856 kernel: DMI: Memory slots populated: 1/1 Sep 9 02:20:25.937868 kernel: Hypervisor detected: KVM Sep 9 02:20:25.937879 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 02:20:25.937891 kernel: kvm-clock: using sched offset of 5716647664 cycles Sep 9 02:20:25.937903 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 02:20:25.937915 kernel: tsc: Detected 2500.032 MHz processor Sep 9 02:20:25.937927 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 02:20:25.937939 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 02:20:25.937950 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Sep 9 02:20:25.937966 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 9 02:20:25.937978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 02:20:25.937990 kernel: Using GB pages for direct mapping Sep 9 02:20:25.938001 kernel: ACPI: Early table checksum verification disabled Sep 9 02:20:25.938013 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Sep 9 02:20:25.938024 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938036 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938048 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938059 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Sep 9 02:20:25.938075 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938087 kernel: ACPI: SRAT 
0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938099 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938110 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 02:20:25.938122 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Sep 9 02:20:25.938134 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Sep 9 02:20:25.938151 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Sep 9 02:20:25.938167 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Sep 9 02:20:25.938179 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Sep 9 02:20:25.938191 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Sep 9 02:20:25.938204 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Sep 9 02:20:25.938216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 9 02:20:25.938281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 9 02:20:25.938296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Sep 9 02:20:25.938327 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Sep 9 02:20:25.938339 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Sep 9 02:20:25.938352 kernel: Zone ranges: Sep 9 02:20:25.938364 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 02:20:25.938376 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Sep 9 02:20:25.938388 kernel: Normal empty Sep 9 02:20:25.938400 kernel: Device empty Sep 9 02:20:25.938412 kernel: Movable zone start for each node Sep 9 02:20:25.938424 kernel: Early memory node ranges Sep 9 02:20:25.938441 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 9 02:20:25.938453 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Sep 9 02:20:25.938465 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Sep 9 02:20:25.938477 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 02:20:25.938489 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 9 02:20:25.938502 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Sep 9 02:20:25.938514 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 02:20:25.938526 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 02:20:25.938538 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 02:20:25.938554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 02:20:25.938566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 02:20:25.938578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 02:20:25.938590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 02:20:25.938602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 02:20:25.938614 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 02:20:25.938626 kernel: TSC deadline timer available Sep 9 02:20:25.938638 kernel: CPU topo: Max. logical packages: 16 Sep 9 02:20:25.938650 kernel: CPU topo: Max. logical dies: 16 Sep 9 02:20:25.938662 kernel: CPU topo: Max. dies per package: 1 Sep 9 02:20:25.938678 kernel: CPU topo: Max. threads per core: 1 Sep 9 02:20:25.938690 kernel: CPU topo: Num. cores per package: 1 Sep 9 02:20:25.938702 kernel: CPU topo: Num. 
threads per package: 1 Sep 9 02:20:25.938714 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Sep 9 02:20:25.938726 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 02:20:25.938738 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 9 02:20:25.938750 kernel: Booting paravirtualized kernel on KVM Sep 9 02:20:25.938762 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 02:20:25.938774 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 9 02:20:25.938790 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Sep 9 02:20:25.938803 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Sep 9 02:20:25.938815 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 9 02:20:25.938826 kernel: kvm-guest: PV spinlocks enabled Sep 9 02:20:25.938838 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 02:20:25.938852 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 02:20:25.938864 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 02:20:25.938876 kernel: random: crng init done Sep 9 02:20:25.938892 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 02:20:25.938905 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 02:20:25.938917 kernel: Fallback order for Node 0: 0 Sep 9 02:20:25.938929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Sep 9 02:20:25.938941 kernel: Policy zone: DMA32 Sep 9 02:20:25.938953 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 02:20:25.938965 kernel: software IO TLB: area num 16. Sep 9 02:20:25.938977 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 9 02:20:25.938989 kernel: Kernel/User page tables isolation: enabled Sep 9 02:20:25.939005 kernel: ftrace: allocating 40099 entries in 157 pages Sep 9 02:20:25.939017 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 02:20:25.939029 kernel: Dynamic Preempt: voluntary Sep 9 02:20:25.939041 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 02:20:25.939054 kernel: rcu: RCU event tracing is enabled. Sep 9 02:20:25.939067 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 9 02:20:25.939079 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 02:20:25.939091 kernel: Rude variant of Tasks RCU enabled. Sep 9 02:20:25.939103 kernel: Tracing variant of Tasks RCU enabled. Sep 9 02:20:25.939119 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 02:20:25.939131 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 9 02:20:25.939144 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 9 02:20:25.939156 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Sep 9 02:20:25.939168 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 9 02:20:25.939180 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Sep 9 02:20:25.939193 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 02:20:25.940258 kernel: Console: colour VGA+ 80x25 Sep 9 02:20:25.940275 kernel: printk: legacy console [tty0] enabled Sep 9 02:20:25.940288 kernel: printk: legacy console [ttyS0] enabled Sep 9 02:20:25.940312 kernel: ACPI: Core revision 20240827 Sep 9 02:20:25.940325 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 02:20:25.940344 kernel: x2apic enabled Sep 9 02:20:25.940357 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 02:20:25.940370 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Sep 9 02:20:25.940383 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Sep 9 02:20:25.940399 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 02:20:25.940412 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 9 02:20:25.940425 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 9 02:20:25.940437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 02:20:25.940450 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 02:20:25.940462 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 02:20:25.940475 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 9 02:20:25.940487 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 02:20:25.940499 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 02:20:25.940512 kernel: MDS: Mitigation: Clear CPU buffers Sep 9 02:20:25.940524 kernel: MMIO Stale Data: Unknown: No mitigations Sep 9 02:20:25.940540 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 9 02:20:25.940553 kernel: active return thunk: its_return_thunk Sep 9 02:20:25.940565 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 02:20:25.940578 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 02:20:25.940590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 02:20:25.940602 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 02:20:25.940615 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 02:20:25.940627 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 9 02:20:25.940640 kernel: Freeing SMP alternatives memory: 32K Sep 9 02:20:25.940652 kernel: pid_max: default: 32768 minimum: 301 Sep 9 02:20:25.940664 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 02:20:25.940681 kernel: landlock: Up and running. Sep 9 02:20:25.940693 kernel: SELinux: Initializing. Sep 9 02:20:25.940706 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 02:20:25.940718 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 02:20:25.940731 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Sep 9 02:20:25.940750 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Sep 9 02:20:25.940762 kernel: signal: max sigframe size: 1776 Sep 9 02:20:25.940775 kernel: rcu: Hierarchical SRCU implementation. Sep 9 02:20:25.940789 kernel: rcu: Max phase no-delay instances is 400. Sep 9 02:20:25.940802 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Sep 9 02:20:25.940819 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 02:20:25.940831 kernel: smp: Bringing up secondary CPUs ... Sep 9 02:20:25.940844 kernel: smpboot: x86: Booting SMP configuration: Sep 9 02:20:25.940857 kernel: .... node #0, CPUs: #1 Sep 9 02:20:25.940869 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 02:20:25.940882 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Sep 9 02:20:25.940895 kernel: Memory: 1897728K/2096616K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 192880K reserved, 0K cma-reserved) Sep 9 02:20:25.940908 kernel: devtmpfs: initialized Sep 9 02:20:25.940921 kernel: x86/mm: Memory block size: 128MB Sep 9 02:20:25.940938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 02:20:25.940951 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 9 02:20:25.940963 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 02:20:25.940976 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 02:20:25.940989 kernel: audit: initializing netlink subsys (disabled) Sep 9 02:20:25.941001 kernel: audit: type=2000 audit(1757384421.693:1): state=initialized audit_enabled=0 res=1 Sep 9 02:20:25.941014 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 02:20:25.941027 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 02:20:25.941039 kernel: cpuidle: using governor menu Sep 9 02:20:25.941056 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 02:20:25.941069 kernel: dca service started, version 1.12.1 Sep 9 02:20:25.941081 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Sep 9 02:20:25.941094 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 9 02:20:25.941107 kernel: PCI: Using configuration type 1 for base access Sep 9 02:20:25.941119 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 02:20:25.941132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 02:20:25.941145 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 02:20:25.941157 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 02:20:25.941174 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 02:20:25.941187 kernel: ACPI: Added _OSI(Module Device) Sep 9 02:20:25.941199 kernel: ACPI: Added _OSI(Processor Device) Sep 9 02:20:25.941225 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 02:20:25.941240 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 02:20:25.941253 kernel: ACPI: Interpreter enabled Sep 9 02:20:25.941266 kernel: ACPI: PM: (supports S0 S5) Sep 9 02:20:25.941278 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 02:20:25.941291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 02:20:25.941318 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 02:20:25.941331 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 02:20:25.941344 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 02:20:25.941612 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 02:20:25.941780 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 02:20:25.941938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 02:20:25.941958 kernel: PCI host bridge to bus 0000:00 Sep 9 02:20:25.942133 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 02:20:25.942897 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 02:20:25.943053 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 02:20:25.945385 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Sep 9 02:20:25.945542 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 02:20:25.945700 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Sep 9 02:20:25.945868 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 02:20:25.946079 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 9 02:20:25.946318 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Sep 9 02:20:25.946487 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Sep 9 02:20:25.946656 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Sep 9 02:20:25.946831 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Sep 9 02:20:25.947001 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 02:20:25.947185 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.947406 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Sep 9 02:20:25.947567 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 9 02:20:25.947725 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 9 02:20:25.947881 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 9 02:20:25.948059 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.948247 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Sep 9 02:20:25.948432 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 9 02:20:25.948592 kernel: pci 0000:00:02.1: 
bridge window [mem 0xfe800000-0xfe9fffff] Sep 9 02:20:25.948750 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 9 02:20:25.948926 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.949094 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Sep 9 02:20:25.951281 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 9 02:20:25.951478 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 9 02:20:25.951656 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 9 02:20:25.951837 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.952001 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Sep 9 02:20:25.952164 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 9 02:20:25.952362 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 9 02:20:25.952525 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 9 02:20:25.952694 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.952862 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Sep 9 02:20:25.953022 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 9 02:20:25.953180 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 9 02:20:25.953372 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 9 02:20:25.953543 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.953726 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Sep 9 02:20:25.953893 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 9 02:20:25.954057 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 9 02:20:25.954213 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 9 02:20:25.954454 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.954616 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Sep 9 02:20:25.954786 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 9 02:20:25.954967 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 9 02:20:25.955136 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 02:20:25.956314 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 9 02:20:25.956488 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Sep 9 02:20:25.956651 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 9 02:20:25.956817 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 9 02:20:25.956965 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 02:20:25.957131 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 9 02:20:25.957324 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Sep 9 02:20:25.957495 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Sep 9 02:20:25.957654 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Sep 9 02:20:25.957812 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Sep 9 02:20:25.957985 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 9 02:20:25.958147 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Sep 9 02:20:25.959385 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff] Sep 9 02:20:25.959551 kernel: pci 0000:00:04.0: BAR 4 
[mem 0xfd004000-0xfd007fff 64bit pref] Sep 9 02:20:25.959731 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 9 02:20:25.959891 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 02:20:25.960060 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 9 02:20:25.960236 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Sep 9 02:20:25.960411 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Sep 9 02:20:25.960587 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 9 02:20:25.960747 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Sep 9 02:20:25.960930 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Sep 9 02:20:25.961115 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Sep 9 02:20:25.962431 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 9 02:20:25.962609 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 9 02:20:25.962771 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 9 02:20:25.962945 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 02:20:25.963145 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Sep 9 02:20:25.963394 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Sep 9 02:20:25.963566 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 9 02:20:25.963749 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Sep 9 02:20:25.963916 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Sep 9 02:20:25.964087 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 9 02:20:25.965309 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Sep 9 02:20:25.965497 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Sep 9 02:20:25.965660 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 9 02:20:25.965821 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 9 02:20:25.965981 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 9 02:20:25.966155 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 9 02:20:25.970950 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 9 02:20:25.971134 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 9 02:20:25.971163 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 02:20:25.971176 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 02:20:25.971189 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 02:20:25.971201 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 02:20:25.971214 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 02:20:25.971254 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 02:20:25.971268 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 02:20:25.971281 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 02:20:25.971310 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 02:20:25.971324 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 02:20:25.971337 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 02:20:25.971349 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 02:20:25.971362 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 02:20:25.971375 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 02:20:25.971388 kernel: ACPI: PCI: 
Interrupt link GSIG configured for IRQ 22 Sep 9 02:20:25.971401 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 02:20:25.971414 kernel: iommu: Default domain type: Translated Sep 9 02:20:25.971432 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 02:20:25.971445 kernel: PCI: Using ACPI for IRQ routing Sep 9 02:20:25.971458 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 02:20:25.971470 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 9 02:20:25.971483 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Sep 9 02:20:25.971648 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 02:20:25.971811 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 02:20:25.971972 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 02:20:25.971998 kernel: vgaarb: loaded Sep 9 02:20:25.972012 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 02:20:25.972025 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 02:20:25.972038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 02:20:25.972051 kernel: pnp: PnP ACPI init Sep 9 02:20:25.972251 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 9 02:20:25.972272 kernel: pnp: PnP ACPI: found 5 devices Sep 9 02:20:25.972308 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 02:20:25.972322 kernel: NET: Registered PF_INET protocol family Sep 9 02:20:25.972342 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 02:20:25.972355 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 02:20:25.972368 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 02:20:25.972381 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 02:20:25.972394 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 02:20:25.972407 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 9 02:20:25.972419 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 02:20:25.972432 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 02:20:25.972449 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 02:20:25.972462 kernel: NET: Registered PF_XDP protocol family Sep 9 02:20:25.972622 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Sep 9 02:20:25.972783 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 9 02:20:25.972945 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 9 02:20:25.973106 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 9 02:20:25.973413 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 9 02:20:25.973580 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 9 02:20:25.973749 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 9 02:20:25.973910 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 9 02:20:25.974069 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Sep 9 02:20:25.974260 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Sep 9 02:20:25.974437 kernel: pci 0000:00:02.2: bridge 
window [io 0x3000-0x3fff]: assigned Sep 9 02:20:25.974603 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Sep 9 02:20:25.974764 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Sep 9 02:20:25.974924 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Sep 9 02:20:25.975092 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Sep 9 02:20:25.975270 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Sep 9 02:20:25.975453 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 9 02:20:25.975647 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 9 02:20:25.975809 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 9 02:20:25.975969 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 9 02:20:25.976129 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 9 02:20:25.976321 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 9 02:20:25.976484 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 9 02:20:25.976652 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 9 02:20:25.976812 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 9 02:20:25.976971 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 9 02:20:25.977130 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 9 02:20:25.977365 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 9 02:20:25.977528 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 9 02:20:25.977688 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 9 02:20:25.977856 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 9 02:20:25.978019 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 9 02:20:25.978180 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 9 02:20:25.978370 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 9 02:20:25.978540 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 9 02:20:25.978701 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 9 02:20:25.978885 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 9 02:20:25.979046 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 9 02:20:25.979207 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 9 02:20:25.981924 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 9 02:20:25.982096 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 9 02:20:25.982304 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 9 02:20:25.982475 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 9 02:20:25.982648 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 9 02:20:25.982811 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 9 02:20:25.982972 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 02:20:25.983133 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 9 02:20:25.983324 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 9 02:20:25.983493 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 9 02:20:25.983654 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 02:20:25.983807 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 02:20:25.983963 kernel: pci_bus 0000:00: resource 5 [io 
0x0d00-0xffff window] Sep 9 02:20:25.984129 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 02:20:25.984314 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Sep 9 02:20:25.984463 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 9 02:20:25.984619 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Sep 9 02:20:25.984783 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 9 02:20:25.984930 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Sep 9 02:20:25.985092 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 9 02:20:25.986521 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 9 02:20:25.986698 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Sep 9 02:20:25.986855 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 9 02:20:25.987008 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 9 02:20:25.987170 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Sep 9 02:20:25.987358 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 9 02:20:25.987512 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 9 02:20:25.987701 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 9 02:20:25.987844 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 9 02:20:25.987999 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 9 02:20:25.988174 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Sep 9 02:20:25.989427 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 9 02:20:25.989600 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 9 02:20:25.989758 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Sep 9 02:20:25.989923 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 9 02:20:25.990065 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 9 02:20:25.990247 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Sep 9 02:20:25.990444 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Sep 9 02:20:25.990608 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 02:20:25.990767 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 9 02:20:25.990915 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 9 02:20:25.991079 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 02:20:25.991104 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 02:20:25.991118 kernel: PCI: CLS 0 bytes, default 64 Sep 9 02:20:25.991130 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 9 02:20:25.991143 kernel: software IO TLB: mapped [mem 0x0000000074000000-0x0000000078000000] (64MB) Sep 9 02:20:25.991156 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 02:20:25.991181 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Sep 9 02:20:25.991195 kernel: Initialise system trusted keyrings Sep 9 02:20:25.991213 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 02:20:25.991226 kernel: Key type asymmetric registered Sep 9 02:20:25.991239 kernel: Asymmetric key parser 'x509' registered Sep 9 02:20:25.993283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 02:20:25.993312 kernel: io scheduler 
mq-deadline registered Sep 9 02:20:25.993326 kernel: io scheduler kyber registered Sep 9 02:20:25.993340 kernel: io scheduler bfq registered Sep 9 02:20:25.993548 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 9 02:20:25.993719 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 9 02:20:25.993906 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.994086 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 9 02:20:25.994255 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 9 02:20:25.994534 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.994708 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 9 02:20:25.994863 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 9 02:20:25.995045 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.995206 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 9 02:20:25.995413 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 9 02:20:25.995578 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.995749 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 9 02:20:25.995915 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 9 02:20:25.996104 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.996281 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 9 02:20:25.996458 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 9 02:20:25.996621 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.996792 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 9 02:20:25.996948 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 9 02:20:25.997123 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.998390 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 9 02:20:25.998564 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 9 02:20:25.998729 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 02:20:25.998751 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 02:20:25.998766 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 02:20:25.998788 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 02:20:25.998802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 02:20:25.998815 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 02:20:25.998829 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 02:20:25.998854 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 02:20:25.998867 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 02:20:25.999041 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 9 02:20:25.999061 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Sep 9 02:20:25.999206 kernel: rtc_cmos 00:03: registered as rtc0 Sep 9 02:20:26.000393 kernel: rtc_cmos 00:03: setting system clock to 2025-09-09T02:20:25 UTC (1757384425) Sep 9 02:20:26.000555 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 9 02:20:26.000576 kernel: intel_pstate: CPU model not supported Sep 9 02:20:26.000590 kernel: NET: Registered PF_INET6 protocol family Sep 9 02:20:26.000604 kernel: Segment Routing with IPv6 Sep 9 02:20:26.000617 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 02:20:26.000631 kernel: NET: Registered PF_PACKET protocol family Sep 9 02:20:26.000644 kernel: Key type dns_resolver registered Sep 9 02:20:26.000665 kernel: IPI shorthand broadcast: enabled Sep 9 02:20:26.000682 kernel: sched_clock: Marking stable (3544003837, 224827767)->(3900382782, -131551178) Sep 9 02:20:26.000696 kernel: registered taskstats version 1 Sep 9 02:20:26.000710 kernel: Loading compiled-in X.509 certificates Sep 9 02:20:26.000723 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75' Sep 9 02:20:26.000736 kernel: Demotion targets for Node 0: null Sep 9 02:20:26.000749 kernel: Key type .fscrypt registered Sep 9 02:20:26.000763 kernel: Key type fscrypt-provisioning registered Sep 9 02:20:26.000776 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 02:20:26.000794 kernel: ima: Allocated hash algorithm: sha1 Sep 9 02:20:26.000811 kernel: ima: No architecture policies found Sep 9 02:20:26.000825 kernel: clk: Disabling unused clocks Sep 9 02:20:26.000838 kernel: Warning: unable to open an initial console. Sep 9 02:20:26.000852 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 9 02:20:26.000865 kernel: Write protecting the kernel read-only data: 24576k Sep 9 02:20:26.000879 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 9 02:20:26.000892 kernel: Run /init as init process Sep 9 02:20:26.000909 kernel: with arguments: Sep 9 02:20:26.000922 kernel: /init Sep 9 02:20:26.000936 kernel: with environment: Sep 9 02:20:26.000949 kernel: HOME=/ Sep 9 02:20:26.000962 kernel: TERM=linux Sep 9 02:20:26.000975 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 02:20:26.000998 systemd[1]: Successfully made /usr/ read-only. Sep 9 02:20:26.001017 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 02:20:26.001049 systemd[1]: Detected virtualization kvm. Sep 9 02:20:26.001063 systemd[1]: Detected architecture x86-64. Sep 9 02:20:26.001076 systemd[1]: Running in initrd. Sep 9 02:20:26.001090 systemd[1]: No hostname configured, using default hostname. Sep 9 02:20:26.001104 systemd[1]: Hostname set to . Sep 9 02:20:26.001118 systemd[1]: Initializing machine ID from VM UUID. Sep 9 02:20:26.001131 systemd[1]: Queued start job for default target initrd.target. Sep 9 02:20:26.001145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 02:20:26.001163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 9 02:20:26.001178 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 02:20:26.001192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 02:20:26.001206 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 02:20:26.001220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 02:20:26.002341 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 02:20:26.002358 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 02:20:26.002380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 02:20:26.002394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 02:20:26.002409 systemd[1]: Reached target paths.target - Path Units. Sep 9 02:20:26.002423 systemd[1]: Reached target slices.target - Slice Units. Sep 9 02:20:26.002437 systemd[1]: Reached target swap.target - Swaps. Sep 9 02:20:26.002451 systemd[1]: Reached target timers.target - Timer Units. Sep 9 02:20:26.002466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 02:20:26.002480 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 02:20:26.002495 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 02:20:26.002514 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 02:20:26.002529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 02:20:26.002543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 02:20:26.002558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 02:20:26.002572 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 02:20:26.002586 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 02:20:26.002600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 02:20:26.002615 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 02:20:26.002634 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 02:20:26.002648 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 02:20:26.002663 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 02:20:26.002677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 02:20:26.002691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 02:20:26.002757 systemd-journald[231]: Collecting audit messages is disabled. Sep 9 02:20:26.002798 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 02:20:26.002814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 02:20:26.002832 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 02:20:26.002847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Sep 9 02:20:26.002863 systemd-journald[231]: Journal started Sep 9 02:20:26.002894 systemd-journald[231]: Runtime Journal (/run/log/journal/ac2cb9acb60d49b59487df5ee8397f3c) is 4.7M, max 38.2M, 33.4M free. Sep 9 02:20:25.988677 systemd-modules-load[232]: Inserted module 'overlay' Sep 9 02:20:26.053345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 02:20:26.053398 kernel: Bridge firewalling registered Sep 9 02:20:26.053420 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 02:20:26.028851 systemd-modules-load[232]: Inserted module 'br_netfilter' Sep 9 02:20:26.056424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 02:20:26.057688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 02:20:26.064448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 02:20:26.069394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 02:20:26.072199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 02:20:26.077389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 02:20:26.080452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 02:20:26.094083 systemd-tmpfiles[249]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 02:20:26.096777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 02:20:26.105755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 02:20:26.108599 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 02:20:26.112197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 02:20:26.113326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 02:20:26.117379 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 02:20:26.147779 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 02:20:26.170545 systemd-resolved[268]: Positive Trust Anchors: Sep 9 02:20:26.170575 systemd-resolved[268]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 02:20:26.170619 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 02:20:26.175776 systemd-resolved[268]: Defaulting to hostname 'linux'. Sep 9 02:20:26.177621 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 02:20:26.180196 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 02:20:26.267318 kernel: SCSI subsystem initialized Sep 9 02:20:26.279261 kernel: Loading iSCSI transport class v2.0-870. Sep 9 02:20:26.292261 kernel: iscsi: registered transport (tcp) Sep 9 02:20:26.319394 kernel: iscsi: registered transport (qla4xxx) Sep 9 02:20:26.319483 kernel: QLogic iSCSI HBA Driver Sep 9 02:20:26.346156 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 02:20:26.364022 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 02:20:26.366903 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 02:20:26.429014 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 02:20:26.431666 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 02:20:26.494312 kernel: raid6: sse2x4 gen() 13848 MB/s Sep 9 02:20:26.512258 kernel: raid6: sse2x2 gen() 9685 MB/s Sep 9 02:20:26.530879 kernel: raid6: sse2x1 gen() 10144 MB/s Sep 9 02:20:26.530963 kernel: raid6: using algorithm sse2x4 gen() 13848 MB/s Sep 9 02:20:26.549903 kernel: raid6: .... xor() 7772 MB/s, rmw enabled Sep 9 02:20:26.550008 kernel: raid6: using ssse3x2 recovery algorithm Sep 9 02:20:26.575345 kernel: xor: automatically using best checksumming function avx Sep 9 02:20:26.765246 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 02:20:26.775027 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 02:20:26.778874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 02:20:26.810744 systemd-udevd[478]: Using default interface naming scheme 'v255'. Sep 9 02:20:26.820137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 02:20:26.822807 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 02:20:26.855863 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Sep 9 02:20:26.890451 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 02:20:26.892845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 02:20:27.010179 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 02:20:27.014113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 02:20:27.122280 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 9 02:20:27.133460 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 9 02:20:27.145386 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 02:20:27.145411 kernel: GPT:17805311 != 125829119 Sep 9 02:20:27.145430 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 02:20:27.145447 kernel: GPT:17805311 != 125829119 Sep 9 02:20:27.145463 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 02:20:27.145480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 02:20:27.155240 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 02:20:27.189502 kernel: AES CTR mode by8 optimization enabled Sep 9 02:20:27.189564 kernel: ACPI: bus type USB registered Sep 9 02:20:27.189584 kernel: usbcore: registered new interface driver usbfs Sep 9 02:20:27.192146 kernel: usbcore: registered new interface driver hub Sep 9 02:20:27.194238 kernel: usbcore: registered new device driver usb Sep 9 02:20:27.225906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 02:20:27.227472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 02:20:27.233711 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 02:20:27.238069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 02:20:27.251421 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 02:20:27.251454 kernel: libata version 3.00 loaded. Sep 9 02:20:27.249705 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 02:20:27.257859 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 02:20:27.258134 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 02:20:27.263148 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 02:20:27.263408 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 02:20:27.263608 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 02:20:27.295244 kernel: scsi host0: ahci Sep 9 02:20:27.296549 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 02:20:27.392648 kernel: scsi host1: ahci Sep 9 02:20:27.392971 kernel: scsi host2: ahci Sep 9 02:20:27.393174 kernel: scsi host3: ahci Sep 9 02:20:27.393412 kernel: scsi host4: ahci Sep 9 02:20:27.393620 kernel: scsi host5: ahci Sep 9 02:20:27.393810 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Sep 9 02:20:27.393832 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Sep 9 02:20:27.393849 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Sep 9 02:20:27.393874 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Sep 9 02:20:27.393893 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Sep 9 02:20:27.393910 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Sep 9 02:20:27.395237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 02:20:27.427284 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 02:20:27.439823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 9 02:20:27.458425 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 02:20:27.459312 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 02:20:27.462545 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 02:20:27.498329 disk-uuid[631]: Primary Header is updated. Sep 9 02:20:27.498329 disk-uuid[631]: Secondary Entries is updated. Sep 9 02:20:27.498329 disk-uuid[631]: Secondary Header is updated. Sep 9 02:20:27.504267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 02:20:27.513353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 02:20:27.627256 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 02:20:27.627320 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 02:20:27.629335 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 02:20:27.636924 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 02:20:27.636978 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 02:20:27.636998 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 9 02:20:27.663235 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 02:20:27.669252 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 9 02:20:27.675232 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 9 02:20:27.682170 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 02:20:27.682442 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 9 02:20:27.682647 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 9 02:20:27.686555 kernel: hub 1-0:1.0: USB hub found Sep 9 02:20:27.686812 kernel: hub 1-0:1.0: 4 ports detected Sep 9 02:20:27.687764 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 9 02:20:27.692730 kernel: hub 2-0:1.0: USB hub found Sep 9 02:20:27.692971 kernel: hub 2-0:1.0: 4 ports detected Sep 9 02:20:27.710879 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 02:20:27.725656 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 02:20:27.727421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 02:20:27.728204 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 02:20:27.731099 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 02:20:27.758473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 02:20:27.924415 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 9 02:20:28.065300 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 02:20:28.071578 kernel: usbcore: registered new interface driver usbhid Sep 9 02:20:28.071618 kernel: usbhid: USB HID core driver Sep 9 02:20:28.079956 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 9 02:20:28.080007 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 9 02:20:28.513986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 02:20:28.515410 disk-uuid[632]: The operation has completed successfully. Sep 9 02:20:28.571836 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 9 02:20:28.572012 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 02:20:28.616403 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 02:20:28.636919 sh[658]: Success Sep 9 02:20:28.661294 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 02:20:28.661390 kernel: device-mapper: uevent: version 1.0.3 Sep 9 02:20:28.664774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 02:20:28.681007 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Sep 9 02:20:28.746091 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 02:20:28.761597 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 02:20:28.776415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 02:20:28.800269 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (670) Sep 9 02:20:28.803761 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7 Sep 9 02:20:28.803812 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 02:20:28.816443 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 02:20:28.816516 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 02:20:28.819849 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 02:20:28.821852 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 02:20:28.823593 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 02:20:28.824641 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 02:20:28.829235 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 02:20:28.861357 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705) Sep 9 02:20:28.865255 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 02:20:28.865300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 02:20:28.877966 kernel: BTRFS info (device vda6): turning on async discard Sep 9 02:20:28.878024 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 02:20:28.886321 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 02:20:28.886860 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 02:20:28.890426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 02:20:29.004676 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 02:20:29.009403 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 02:20:29.060628 systemd-networkd[840]: lo: Link UP Sep 9 02:20:29.060642 systemd-networkd[840]: lo: Gained carrier Sep 9 02:20:29.065451 systemd-networkd[840]: Enumeration completed Sep 9 02:20:29.066284 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 02:20:29.067972 systemd[1]: Reached target network.target - Network. 
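verity-setup above activates /dev/mapper/usr so that the read-only /usr partition is verified block-by-block against the verity.usrhash= root hash on the kernel command line, using the "sha256-avx" implementation the kernel reports. The sketch below only illustrates the hash-tree idea behind that check; it is a simplification and not the exact dm-verity/veritysetup on-disk format (no salt, simplified tree packing):

```python
# Simplified illustration of a dm-verity-style hash tree: data blocks are
# hashed, hashes are grouped and hashed again, and only the root hash needs
# to be trusted (here it would correspond to verity.usrhash=). Assumed
# layout for illustration only.
import hashlib

BLOCK = 4096  # typical dm-verity data block size


def leaf_hashes(data: bytes) -> list[bytes]:
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]


def root_hash(data: bytes) -> str:
    level = leaf_hashes(data)
    fanout = BLOCK // 32          # how many 32-byte digests fit per hash block
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                 for i in range(0, len(level), fanout)]
    return level[0].hex()


print(root_hash(b"\x00" * (4 * BLOCK)))  # any single-bit change alters this
```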
Sep 9 02:20:29.068839 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 02:20:29.068845 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 02:20:29.070481 systemd-networkd[840]: eth0: Link UP Sep 9 02:20:29.070718 systemd-networkd[840]: eth0: Gained carrier Sep 9 02:20:29.070732 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 02:20:29.091343 systemd-networkd[840]: eth0: DHCPv4 address 10.230.31.10/30, gateway 10.230.31.9 acquired from 10.230.31.9 Sep 9 02:20:29.092721 ignition[760]: Ignition 2.21.0 Sep 9 02:20:29.092741 ignition[760]: Stage: fetch-offline Sep 9 02:20:29.092835 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:29.092855 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:29.093042 ignition[760]: parsed url from cmdline: "" Sep 9 02:20:29.099052 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 02:20:29.093049 ignition[760]: no config URL provided Sep 9 02:20:29.093059 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 02:20:29.093075 ignition[760]: no config at "/usr/lib/ignition/user.ign" Sep 9 02:20:29.102416 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 9 02:20:29.093085 ignition[760]: failed to fetch config: resource requires networking Sep 9 02:20:29.094481 ignition[760]: Ignition finished successfully Sep 9 02:20:29.138422 ignition[849]: Ignition 2.21.0 Sep 9 02:20:29.138447 ignition[849]: Stage: fetch Sep 9 02:20:29.138690 ignition[849]: no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:29.138709 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:29.138830 ignition[849]: parsed url from cmdline: "" Sep 9 02:20:29.138837 ignition[849]: no config URL provided Sep 9 02:20:29.138851 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 02:20:29.138867 ignition[849]: no config at "/usr/lib/ignition/user.ign" Sep 9 02:20:29.139046 ignition[849]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 9 02:20:29.140293 ignition[849]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 9 02:20:29.140356 ignition[849]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Sep 9 02:20:29.214860 ignition[849]: GET result: OK Sep 9 02:20:29.215954 ignition[849]: parsing config with SHA512: 268443dd1e9b2599b84ea32a422449bc33e4be81ab7749f984f38ca0424994b02b58e4452826dda4c8d5f1ad3390b45349217173aedc9a31d614eec817ee41e9 Sep 9 02:20:29.225011 unknown[849]: fetched base config from "system" Sep 9 02:20:29.225031 unknown[849]: fetched base config from "system" Sep 9 02:20:29.225605 ignition[849]: fetch: fetch complete Sep 9 02:20:29.225061 unknown[849]: fetched user config from "openstack" Sep 9 02:20:29.225633 ignition[849]: fetch: fetch passed Sep 9 02:20:29.229622 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 02:20:29.225721 ignition[849]: Ignition finished successfully Sep 9 02:20:29.232748 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
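The fetch stage above finds no config URL on the command line and no config-drive, so it falls back to the OpenStack metadata service once networking is up; the GET and the SHA512 digest it logs correspond roughly to the sketch below. The URL is the one shown in the log; the code is only an illustration, not Ignition's actual implementation:

```python
# Minimal sketch of the userdata fetch visible in the log: pull the OpenStack
# userdata from the link-local metadata service and compute the digest that
# Ignition prints as "parsing config with SHA512: ...".
import hashlib
import urllib.request

USERDATA_URL = "http://169.254.169.254/openstack/latest/user_data"

with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
    user_data = resp.read()

print(f"fetched {len(user_data)} bytes of user_data")
print(hashlib.sha512(user_data).hexdigest())
```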
Sep 9 02:20:29.271153 ignition[855]: Ignition 2.21.0 Sep 9 02:20:29.271177 ignition[855]: Stage: kargs Sep 9 02:20:29.273335 ignition[855]: no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:29.273363 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:29.278071 ignition[855]: kargs: kargs passed Sep 9 02:20:29.278169 ignition[855]: Ignition finished successfully Sep 9 02:20:29.281841 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 02:20:29.284135 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 02:20:29.313890 ignition[861]: Ignition 2.21.0 Sep 9 02:20:29.313920 ignition[861]: Stage: disks Sep 9 02:20:29.314425 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:29.314451 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:29.317682 ignition[861]: disks: disks passed Sep 9 02:20:29.319530 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 02:20:29.317782 ignition[861]: Ignition finished successfully Sep 9 02:20:29.321059 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 02:20:29.322301 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 02:20:29.323744 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 02:20:29.325338 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 02:20:29.326945 systemd[1]: Reached target basic.target - Basic System. Sep 9 02:20:29.330392 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 02:20:29.371318 systemd-fsck[870]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 9 02:20:29.375109 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 02:20:29.377818 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 02:20:29.513242 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 02:20:29.514509 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 02:20:29.515747 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 02:20:29.519100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 02:20:29.521503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 02:20:29.524378 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 02:20:29.529886 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Sep 9 02:20:29.532620 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 02:20:29.532667 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 02:20:29.538414 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 02:20:29.547520 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878) Sep 9 02:20:29.547553 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 02:20:29.547590 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 02:20:29.551257 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 9 02:20:29.557752 kernel: BTRFS info (device vda6): turning on async discard Sep 9 02:20:29.557798 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 02:20:29.561134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 02:20:29.641242 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:29.645459 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 02:20:29.652562 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory Sep 9 02:20:29.662131 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 02:20:29.672811 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 02:20:29.785541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 02:20:29.788915 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 02:20:29.790456 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 02:20:29.806627 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 02:20:29.808992 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 02:20:29.833471 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 02:20:29.842526 ignition[994]: INFO : Ignition 2.21.0 Sep 9 02:20:29.845611 ignition[994]: INFO : Stage: mount Sep 9 02:20:29.845611 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:29.845611 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:29.848057 ignition[994]: INFO : mount: mount passed Sep 9 02:20:29.848057 ignition[994]: INFO : Ignition finished successfully Sep 9 02:20:29.847468 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 02:20:30.514874 systemd-networkd[840]: eth0: Gained IPv6LL Sep 9 02:20:30.673273 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:32.022544 systemd-networkd[840]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87c2:24:19ff:fee6:1f0a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87c2:24:19ff:fee6:1f0a/64 assigned by NDisc. Sep 9 02:20:32.022558 systemd-networkd[840]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 9 02:20:32.684948 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:36.691260 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:36.696992 coreos-metadata[880]: Sep 09 02:20:36.696 WARN failed to locate config-drive, using the metadata service API instead Sep 9 02:20:36.720916 coreos-metadata[880]: Sep 09 02:20:36.720 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 9 02:20:36.734723 coreos-metadata[880]: Sep 09 02:20:36.734 INFO Fetch successful Sep 9 02:20:36.735717 coreos-metadata[880]: Sep 09 02:20:36.735 INFO wrote hostname srv-9tmcm.gb1.brightbox.com to /sysroot/etc/hostname Sep 9 02:20:36.737360 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 9 02:20:36.737535 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Sep 9 02:20:36.742002 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 02:20:36.762706 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
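The repeated "config-2: Can't lookup blockdev" lines followed by "failed to locate config-drive, using the metadata service API instead" show the hostname agent's fallback path: it waits for a config-drive and, when none appears, asks the metadata service for the hostname and writes it to /sysroot/etc/hostname. A rough sketch of that behaviour (device path and URL taken from the log; the control flow is an assumption about what the agent does, not its source):

```python
# Approximation of the config-drive / metadata-service fallback seen above.
import os
import urllib.request

CONFIG_DRIVE = "/dev/disk/by-label/config-2"
HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

if os.path.exists(CONFIG_DRIVE):
    # Real agent would mount the drive and read openstack/latest/ from it.
    print("config-drive present")
else:
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    print(f"hostname from metadata service: {hostname}")
```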
Sep 9 02:20:36.791280 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 9 02:20:36.796882 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 02:20:36.796925 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 02:20:36.802710 kernel: BTRFS info (device vda6): turning on async discard Sep 9 02:20:36.802765 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 02:20:36.806138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 02:20:36.838253 ignition[1030]: INFO : Ignition 2.21.0 Sep 9 02:20:36.838253 ignition[1030]: INFO : Stage: files Sep 9 02:20:36.840036 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:36.840036 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:36.842806 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 9 02:20:36.850163 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 02:20:36.850163 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 02:20:36.853429 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 02:20:36.854508 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 02:20:36.854508 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 02:20:36.854156 unknown[1030]: wrote ssh authorized keys file for user: core Sep 9 02:20:36.857513 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 02:20:36.857513 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 02:20:37.132475 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 02:20:37.619338 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 02:20:37.619338 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 02:20:37.622071 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 02:20:38.058447 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 02:20:38.846253 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 02:20:38.846253 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 02:20:38.846253 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 02:20:38.846253 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 02:20:38.846253 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 02:20:38.846253 ignition[1030]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 02:20:38.846253 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 02:20:38.863988 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 02:20:39.149325 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 02:20:40.766168 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 02:20:40.766168 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 02:20:40.770055 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 02:20:40.776247 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 02:20:40.776247 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 02:20:40.776247 ignition[1030]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 9 02:20:40.776247 ignition[1030]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 02:20:40.776247 ignition[1030]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 02:20:40.776247 ignition[1030]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 02:20:40.786883 ignition[1030]: INFO : files: files passed Sep 9 02:20:40.786883 ignition[1030]: INFO : Ignition finished successfully Sep 9 02:20:40.782075 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 02:20:40.787408 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 02:20:40.791559 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
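For context, the files-stage operations above (SSH keys for core, files fetched from get.helm.sh and extensions.flatcar.org, the prepare-helm.service unit and its preset) are all driven by the Ignition config fetched earlier. The fragment below is a hypothetical illustration of the general shape such a config takes in the v3 spec, rendered from Python for convenience; it is not the actual config this host received:

```python
# Hypothetical Ignition v3-style config fragment, for illustration only.
# Field names follow the public Ignition spec; values (key, unit contents)
# are placeholders, not taken from this host.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}]
    },
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
        }]
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True,
                   "contents": "[Unit]\nDescription=Unpack helm\n"}]
    },
}

print(json.dumps(config, indent=2))
```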
Sep 9 02:20:40.810529 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 02:20:40.810718 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 02:20:40.820255 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 02:20:40.822324 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 02:20:40.822324 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 02:20:40.823946 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 02:20:40.825978 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 02:20:40.828391 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 02:20:40.883731 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 02:20:40.883981 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 02:20:40.885795 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 02:20:40.887080 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 02:20:40.888727 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 02:20:40.890412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 02:20:40.938455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 02:20:40.941938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 02:20:40.968834 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 02:20:40.969908 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 02:20:40.971648 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 02:20:40.973168 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 02:20:40.973468 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 02:20:40.975080 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 02:20:40.976024 systemd[1]: Stopped target basic.target - Basic System. Sep 9 02:20:40.977523 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 02:20:40.979048 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 02:20:40.980534 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 02:20:40.982158 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 02:20:40.983799 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 02:20:40.985511 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 02:20:40.987102 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 02:20:40.988616 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 02:20:40.990043 systemd[1]: Stopped target swap.target - Swaps. Sep 9 02:20:40.991505 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 02:20:40.991835 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 02:20:40.993331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 9 02:20:40.994345 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 02:20:40.995774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 02:20:41.002044 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 02:20:41.003381 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 02:20:41.003698 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 02:20:41.005503 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 02:20:41.005767 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 02:20:41.007469 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 02:20:41.007697 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 02:20:41.011483 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 02:20:41.012172 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 02:20:41.012382 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 02:20:41.017438 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 02:20:41.019332 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 02:20:41.019588 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 02:20:41.021825 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 02:20:41.022060 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 02:20:41.033847 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 02:20:41.036279 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 02:20:41.062277 ignition[1084]: INFO : Ignition 2.21.0 Sep 9 02:20:41.062277 ignition[1084]: INFO : Stage: umount Sep 9 02:20:41.062277 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 02:20:41.062277 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 02:20:41.062277 ignition[1084]: INFO : umount: umount passed Sep 9 02:20:41.062277 ignition[1084]: INFO : Ignition finished successfully Sep 9 02:20:41.060477 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 02:20:41.061451 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 02:20:41.061628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 02:20:41.063842 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 02:20:41.063991 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 02:20:41.065680 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 02:20:41.065830 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 02:20:41.068362 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 02:20:41.068447 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 02:20:41.070750 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 9 02:20:41.070869 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 9 02:20:41.072145 systemd[1]: Stopped target network.target - Network. Sep 9 02:20:41.073495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 02:20:41.073574 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 02:20:41.074948 systemd[1]: Stopped target paths.target - Path Units. 
Sep 9 02:20:41.076350 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 02:20:41.080347 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 02:20:41.082464 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 02:20:41.083755 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 02:20:41.085317 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 02:20:41.085405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 02:20:41.086636 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 02:20:41.086697 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 02:20:41.088031 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 02:20:41.088145 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 02:20:41.088896 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 02:20:41.088968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 02:20:41.089676 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 02:20:41.089751 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 02:20:41.091418 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 02:20:41.093710 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 02:20:41.098374 systemd-networkd[840]: eth0: DHCPv6 lease lost Sep 9 02:20:41.099163 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 02:20:41.099390 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 02:20:41.106327 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 02:20:41.106744 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 02:20:41.106944 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 02:20:41.109577 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 02:20:41.110750 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 02:20:41.112483 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 02:20:41.112563 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 02:20:41.114925 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 02:20:41.116592 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 02:20:41.116665 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 02:20:41.118659 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 02:20:41.118736 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 02:20:41.121587 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 02:20:41.121670 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 02:20:41.123446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 02:20:41.123519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 02:20:41.125427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 02:20:41.128167 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 9 02:20:41.130171 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 02:20:41.142114 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 02:20:41.142509 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 02:20:41.145784 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 02:20:41.145971 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 02:20:41.148227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 02:20:41.148359 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 02:20:41.149900 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 02:20:41.149956 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 02:20:41.151470 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 02:20:41.151545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 02:20:41.153735 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 02:20:41.153820 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 02:20:41.155258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 02:20:41.155339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 02:20:41.158042 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 02:20:41.165894 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 02:20:41.165991 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 02:20:41.168055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 02:20:41.168121 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 02:20:41.169342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 02:20:41.169411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 02:20:41.172822 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 02:20:41.172905 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 02:20:41.172998 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 02:20:41.183362 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 02:20:41.183544 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 02:20:41.185321 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 02:20:41.188440 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 02:20:41.213004 systemd[1]: Switching root. Sep 9 02:20:41.259487 systemd-journald[231]: Journal stopped Sep 9 02:20:43.009921 systemd-journald[231]: Received SIGTERM from PID 1 (systemd). 
Sep 9 02:20:43.010060 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 02:20:43.010104 kernel: SELinux: policy capability open_perms=1 Sep 9 02:20:43.010132 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 02:20:43.010166 kernel: SELinux: policy capability always_check_network=0 Sep 9 02:20:43.010186 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 02:20:43.014099 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 02:20:43.014141 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 02:20:43.014163 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 02:20:43.014188 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 02:20:43.014207 kernel: audit: type=1403 audit(1757384441.711:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 02:20:43.014285 systemd[1]: Successfully loaded SELinux policy in 52.910ms. Sep 9 02:20:43.014344 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.760ms. Sep 9 02:20:43.014388 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 02:20:43.014418 systemd[1]: Detected virtualization kvm. Sep 9 02:20:43.014445 systemd[1]: Detected architecture x86-64. Sep 9 02:20:43.014465 systemd[1]: Detected first boot. Sep 9 02:20:43.014486 systemd[1]: Hostname set to . Sep 9 02:20:43.014513 systemd[1]: Initializing machine ID from VM UUID. Sep 9 02:20:43.014533 zram_generator::config[1128]: No configuration found. Sep 9 02:20:43.014554 kernel: Guest personality initialized and is inactive Sep 9 02:20:43.014596 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 02:20:43.014617 kernel: Initialized host personality Sep 9 02:20:43.014635 kernel: NET: Registered PF_VSOCK protocol family Sep 9 02:20:43.014655 systemd[1]: Populated /etc with preset unit settings. Sep 9 02:20:43.014678 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 02:20:43.014699 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 02:20:43.014719 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 02:20:43.014755 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 02:20:43.014790 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 02:20:43.014825 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 02:20:43.014847 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 02:20:43.014887 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 02:20:43.014920 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 02:20:43.014942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 02:20:43.014973 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 02:20:43.015001 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 02:20:43.015023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 02:20:43.015050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 02:20:43.015072 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 02:20:43.015094 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 02:20:43.015127 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 02:20:43.015150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 02:20:43.015171 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 02:20:43.015192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 02:20:43.015226 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 02:20:43.015252 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 02:20:43.015273 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 02:20:43.015295 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 02:20:43.015316 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 02:20:43.015354 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 02:20:43.015377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 02:20:43.015398 systemd[1]: Reached target slices.target - Slice Units. Sep 9 02:20:43.015419 systemd[1]: Reached target swap.target - Swaps. Sep 9 02:20:43.015447 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 02:20:43.015481 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 02:20:43.015510 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 02:20:43.015531 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 02:20:43.015557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 02:20:43.015579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 02:20:43.015611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 02:20:43.015633 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 02:20:43.015654 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 02:20:43.015674 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 02:20:43.015701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 02:20:43.015722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 02:20:43.015757 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 02:20:43.015787 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 02:20:43.015821 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 02:20:43.015844 systemd[1]: Reached target machines.target - Containers. Sep 9 02:20:43.015865 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 9 02:20:43.015886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 02:20:43.015908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 02:20:43.015928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 02:20:43.015949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 02:20:43.015976 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 02:20:43.015997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 02:20:43.016030 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 02:20:43.016052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 02:20:43.016079 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 02:20:43.016101 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 02:20:43.016122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 02:20:43.016152 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 02:20:43.016173 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 02:20:43.016200 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 02:20:43.018269 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 02:20:43.018303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 02:20:43.018324 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 02:20:43.018352 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 02:20:43.018380 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 02:20:43.018427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 02:20:43.018457 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 02:20:43.018479 systemd[1]: Stopped verity-setup.service. Sep 9 02:20:43.018501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 02:20:43.018535 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 02:20:43.018557 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 02:20:43.018578 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 02:20:43.018604 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 02:20:43.018630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 02:20:43.018651 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 02:20:43.018677 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 02:20:43.018704 kernel: loop: module loaded Sep 9 02:20:43.018725 kernel: fuse: init (API version 7.41) Sep 9 02:20:43.018769 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 02:20:43.018791 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 9 02:20:43.018812 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 02:20:43.018835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 02:20:43.018856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 02:20:43.018877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 02:20:43.018898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 02:20:43.018921 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 02:20:43.018941 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 02:20:43.018976 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 02:20:43.018998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 02:20:43.019020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 02:20:43.019041 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 02:20:43.019061 kernel: ACPI: bus type drm_connector registered Sep 9 02:20:43.019136 systemd-journald[1229]: Collecting audit messages is disabled. Sep 9 02:20:43.019194 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 02:20:43.019244 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 02:20:43.019283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 02:20:43.019307 systemd-journald[1229]: Journal started Sep 9 02:20:43.019346 systemd-journald[1229]: Runtime Journal (/run/log/journal/ac2cb9acb60d49b59487df5ee8397f3c) is 4.7M, max 38.2M, 33.4M free. Sep 9 02:20:42.573616 systemd[1]: Queued start job for default target multi-user.target. Sep 9 02:20:42.599241 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 02:20:42.600157 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 02:20:43.023303 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 02:20:43.045297 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 02:20:43.051329 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 02:20:43.055403 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 02:20:43.057329 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 02:20:43.057372 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 02:20:43.060433 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 02:20:43.067388 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 02:20:43.070763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 02:20:43.073449 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 02:20:43.077869 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 02:20:43.078680 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 02:20:43.080464 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 9 02:20:43.082330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 02:20:43.087643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 02:20:43.091581 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 02:20:43.102612 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 02:20:43.107382 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 02:20:43.108756 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 02:20:43.115579 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 02:20:43.137360 systemd-journald[1229]: Time spent on flushing to /var/log/journal/ac2cb9acb60d49b59487df5ee8397f3c is 134.824ms for 1168 entries. Sep 9 02:20:43.137360 systemd-journald[1229]: System Journal (/var/log/journal/ac2cb9acb60d49b59487df5ee8397f3c) is 8M, max 584.8M, 576.8M free. Sep 9 02:20:43.315041 systemd-journald[1229]: Received client request to flush runtime journal. Sep 9 02:20:43.315114 kernel: loop0: detected capacity change from 0 to 146240 Sep 9 02:20:43.315158 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 02:20:43.315192 kernel: loop1: detected capacity change from 0 to 224512 Sep 9 02:20:43.315248 kernel: loop2: detected capacity change from 0 to 8 Sep 9 02:20:43.141617 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 02:20:43.143360 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 02:20:43.148874 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 02:20:43.248675 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 02:20:43.269193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 02:20:43.320334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 02:20:43.335048 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 02:20:43.340408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 02:20:43.347236 kernel: loop3: detected capacity change from 0 to 113872 Sep 9 02:20:43.394246 kernel: loop4: detected capacity change from 0 to 146240 Sep 9 02:20:43.418717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 02:20:43.445526 kernel: loop5: detected capacity change from 0 to 224512 Sep 9 02:20:43.477246 kernel: loop6: detected capacity change from 0 to 8 Sep 9 02:20:43.485373 kernel: loop7: detected capacity change from 0 to 113872 Sep 9 02:20:43.486091 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Sep 9 02:20:43.486119 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Sep 9 02:20:43.504772 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Sep 9 02:20:43.507745 (sd-merge)[1287]: Merged extensions into '/usr'. Sep 9 02:20:43.522182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 02:20:43.530500 systemd[1]: Reload requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 02:20:43.530545 systemd[1]: Reloading... Sep 9 02:20:43.753239 zram_generator::config[1315]: No configuration found. 
Sep 9 02:20:43.905931 ldconfig[1260]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 02:20:44.002173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 02:20:44.122680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 02:20:44.123663 systemd[1]: Reloading finished in 592 ms. Sep 9 02:20:44.149176 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 02:20:44.154017 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 02:20:44.169436 systemd[1]: Starting ensure-sysext.service... Sep 9 02:20:44.173427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 02:20:44.204665 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 02:20:44.210415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 02:20:44.221201 systemd[1]: Reload requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)... Sep 9 02:20:44.221244 systemd[1]: Reloading... Sep 9 02:20:44.259340 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 02:20:44.259394 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 02:20:44.259860 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 02:20:44.265328 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 02:20:44.266834 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 02:20:44.267270 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Sep 9 02:20:44.267372 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Sep 9 02:20:44.276624 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 02:20:44.276655 systemd-tmpfiles[1372]: Skipping /boot Sep 9 02:20:44.280980 systemd-udevd[1374]: Using default interface naming scheme 'v255'. Sep 9 02:20:44.338356 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 02:20:44.338376 systemd-tmpfiles[1372]: Skipping /boot Sep 9 02:20:44.369531 zram_generator::config[1423]: No configuration found. Sep 9 02:20:44.619480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 02:20:44.756274 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 9 02:20:44.770248 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 02:20:44.783248 kernel: ACPI: button: Power Button [PWRF] Sep 9 02:20:44.816492 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 02:20:44.816581 systemd[1]: Reloading finished in 594 ms. Sep 9 02:20:44.835386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 02:20:44.855651 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 02:20:44.924810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 02:20:44.930888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 02:20:44.933259 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 02:20:44.934295 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 02:20:44.939304 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 02:20:44.938359 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 02:20:44.942767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 02:20:44.945476 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 02:20:44.949612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 02:20:44.957695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 02:20:44.958963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 02:20:44.968640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 02:20:44.969479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 02:20:44.973644 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 02:20:44.980095 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 02:20:44.988650 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 02:20:44.994724 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 02:20:44.996299 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 02:20:45.000465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 02:20:45.000811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 02:20:45.012101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 02:20:45.012486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 02:20:45.029094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 02:20:45.045784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 02:20:45.047024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 02:20:45.047252 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 02:20:45.047475 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 02:20:45.054430 systemd[1]: Finished ensure-sysext.service. 
Sep 9 02:20:45.062545 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 02:20:45.063033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 02:20:45.070646 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 02:20:45.080616 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 02:20:45.082409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 02:20:45.082891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 02:20:45.084325 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 02:20:45.088173 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 02:20:45.092896 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 02:20:45.132047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 02:20:45.137480 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 02:20:45.140026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 02:20:45.151077 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 02:20:45.151435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 02:20:45.169394 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 02:20:45.173737 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 02:20:45.174506 augenrules[1539]: No rules Sep 9 02:20:45.175477 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 02:20:45.177500 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 02:20:45.177840 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 02:20:45.181445 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 02:20:45.207638 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 02:20:45.296611 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 02:20:45.297705 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 02:20:45.546396 systemd-resolved[1511]: Positive Trust Anchors: Sep 9 02:20:45.546929 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 02:20:45.547078 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 02:20:45.559855 systemd-resolved[1511]: Using system hostname 'srv-9tmcm.gb1.brightbox.com'. 
Sep 9 02:20:45.565143 systemd-networkd[1510]: lo: Link UP Sep 9 02:20:45.566252 systemd-networkd[1510]: lo: Gained carrier Sep 9 02:20:45.567800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 02:20:45.570480 systemd-networkd[1510]: Enumeration completed Sep 9 02:20:45.571029 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 02:20:45.571048 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 02:20:45.576579 systemd-networkd[1510]: eth0: Link UP Sep 9 02:20:45.578119 systemd-networkd[1510]: eth0: Gained carrier Sep 9 02:20:45.578152 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 02:20:45.602475 systemd-networkd[1510]: eth0: DHCPv4 address 10.230.31.10/30, gateway 10.230.31.9 acquired from 10.230.31.9 Sep 9 02:20:45.606104 systemd-timesyncd[1524]: Network configuration changed, trying to establish connection. Sep 9 02:20:45.619708 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 02:20:45.620793 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 02:20:45.622106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 02:20:45.623790 systemd[1]: Reached target network.target - Network. Sep 9 02:20:45.624462 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 02:20:45.625281 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 02:20:45.626152 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 02:20:45.627011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 02:20:45.628037 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 02:20:45.628811 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 02:20:45.629600 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 02:20:45.629667 systemd[1]: Reached target paths.target - Path Units. Sep 9 02:20:45.630302 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 02:20:45.631297 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 02:20:45.632140 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 02:20:45.632981 systemd[1]: Reached target timers.target - Timer Units. Sep 9 02:20:45.635672 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 02:20:45.638721 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 02:20:45.643261 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 02:20:45.644257 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 02:20:45.645020 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 02:20:45.656964 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 02:20:45.658326 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Sep 9 02:20:45.661113 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 02:20:45.663369 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 02:20:45.666042 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 02:20:45.668492 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 02:20:45.669211 systemd[1]: Reached target basic.target - Basic System. Sep 9 02:20:45.669985 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 02:20:45.670028 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 02:20:45.672335 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 02:20:45.675381 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 02:20:45.681526 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 02:20:45.685724 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 02:20:45.691430 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 02:20:45.697356 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 02:20:45.699306 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 02:20:45.700250 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:45.701534 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 02:20:45.705609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 02:20:45.714442 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 02:20:45.720350 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 02:20:45.724995 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 02:20:45.731787 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing passwd entry cache Sep 9 02:20:45.732703 oslogin_cache_refresh[1578]: Refreshing passwd entry cache Sep 9 02:20:45.734606 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 02:20:45.737584 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 02:20:45.739926 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 02:20:45.743486 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 02:20:45.756246 jq[1574]: false Sep 9 02:20:45.755430 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 02:20:45.766945 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 02:20:45.768163 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 02:20:45.771483 oslogin_cache_refresh[1578]: Failure getting users, quitting Sep 9 02:20:45.770547 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 9 02:20:45.772479 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting users, quitting Sep 9 02:20:45.772479 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 02:20:45.772479 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Refreshing group entry cache Sep 9 02:20:45.771516 oslogin_cache_refresh[1578]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 02:20:45.771589 oslogin_cache_refresh[1578]: Refreshing group entry cache Sep 9 02:20:45.775350 oslogin_cache_refresh[1578]: Failure getting groups, quitting Sep 9 02:20:45.776392 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Failure getting groups, quitting Sep 9 02:20:45.776392 google_oslogin_nss_cache[1578]: oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 02:20:45.775364 oslogin_cache_refresh[1578]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 02:20:45.778623 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 02:20:45.778963 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 02:20:45.784264 jq[1585]: true Sep 9 02:20:45.803218 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 02:20:45.803611 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 02:20:45.828241 extend-filesystems[1575]: Found /dev/vda6 Sep 9 02:20:45.839389 (ntainerd)[1608]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 02:20:45.843742 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 02:20:45.844121 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 02:20:45.852433 tar[1590]: linux-amd64/LICENSE Sep 9 02:20:45.852433 tar[1590]: linux-amd64/helm Sep 9 02:20:45.875265 extend-filesystems[1575]: Found /dev/vda9 Sep 9 02:20:45.870453 systemd-timesyncd[1524]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Sep 9 02:20:45.870536 systemd-timesyncd[1524]: Initial clock synchronization to Tue 2025-09-09 02:20:45.852377 UTC. Sep 9 02:20:45.882439 extend-filesystems[1575]: Checking size of /dev/vda9 Sep 9 02:20:45.885345 jq[1596]: true Sep 9 02:20:45.889481 update_engine[1584]: I20250909 02:20:45.889361 1584 main.cc:92] Flatcar Update Engine starting Sep 9 02:20:45.907450 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 02:20:45.938195 dbus-daemon[1572]: [system] SELinux support is enabled Sep 9 02:20:45.939698 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 02:20:45.946882 systemd-logind[1583]: Watching system buttons on /dev/input/event3 (Power Button) Sep 9 02:20:45.949208 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 02:20:45.950760 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 9 02:20:45.954854 dbus-daemon[1572]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1510 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 9 02:20:45.950814 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 02:20:45.953172 systemd-logind[1583]: New seat seat0. Sep 9 02:20:45.961014 update_engine[1584]: I20250909 02:20:45.960475 1584 update_check_scheduler.cc:74] Next update check in 6m58s Sep 9 02:20:45.965613 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 02:20:45.967748 extend-filesystems[1575]: Resized partition /dev/vda9 Sep 9 02:20:45.965680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 02:20:45.966963 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 02:20:45.971955 systemd[1]: Started update-engine.service - Update Engine. Sep 9 02:20:45.973107 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 02:20:45.979588 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 9 02:20:45.984602 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 02:20:45.993660 extend-filesystems[1625]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 02:20:46.013004 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 9 02:20:46.158520 bash[1637]: Updated "/home/core/.ssh/authorized_keys" Sep 9 02:20:46.162907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 02:20:46.169908 systemd[1]: Starting sshkeys.service... Sep 9 02:20:46.246684 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 9 02:20:46.253779 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 9 02:20:46.348270 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:46.391497 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 9 02:20:46.407777 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 02:20:46.407777 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 9 02:20:46.407777 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 9 02:20:46.413396 extend-filesystems[1575]: Resized filesystem in /dev/vda9 Sep 9 02:20:46.408533 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 02:20:46.409991 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 02:20:46.470562 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 9 02:20:46.485975 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 9 02:20:46.492441 dbus-daemon[1572]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1626 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 9 02:20:46.499798 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 02:20:46.503030 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 9 02:20:46.541827 containerd[1608]: time="2025-09-09T02:20:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 02:20:46.547117 containerd[1608]: time="2025-09-09T02:20:46.546956009Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 9 02:20:46.561023 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 02:20:46.607985 containerd[1608]: time="2025-09-09T02:20:46.606746285Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.536µs" Sep 9 02:20:46.607985 containerd[1608]: time="2025-09-09T02:20:46.606800386Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 02:20:46.607985 containerd[1608]: time="2025-09-09T02:20:46.606828818Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 02:20:46.607985 containerd[1608]: time="2025-09-09T02:20:46.607132881Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 02:20:46.607985 containerd[1608]: time="2025-09-09T02:20:46.607160032Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 02:20:46.609793 containerd[1608]: time="2025-09-09T02:20:46.607206261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 02:20:46.610345 containerd[1608]: time="2025-09-09T02:20:46.610313045Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 02:20:46.610528 containerd[1608]: time="2025-09-09T02:20:46.610502264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 02:20:46.610985 containerd[1608]: time="2025-09-09T02:20:46.610950501Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 02:20:46.612064 containerd[1608]: time="2025-09-09T02:20:46.612035164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 02:20:46.612236 containerd[1608]: time="2025-09-09T02:20:46.612185365Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 02:20:46.612358 containerd[1608]: time="2025-09-09T02:20:46.612333290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 02:20:46.612920 containerd[1608]: time="2025-09-09T02:20:46.612891423Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 02:20:46.614953 containerd[1608]: time="2025-09-09T02:20:46.613822616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 02:20:46.614953 containerd[1608]: time="2025-09-09T02:20:46.613886568Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no 
such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 02:20:46.614953 containerd[1608]: time="2025-09-09T02:20:46.613908123Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 02:20:46.614953 containerd[1608]: time="2025-09-09T02:20:46.613977526Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 02:20:46.614953 containerd[1608]: time="2025-09-09T02:20:46.614328107Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 02:20:46.614953 containerd[1608]: time="2025-09-09T02:20:46.614418005Z" level=info msg="metadata content store policy set" policy=shared Sep 9 02:20:46.618490 containerd[1608]: time="2025-09-09T02:20:46.618450895Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 02:20:46.618652 containerd[1608]: time="2025-09-09T02:20:46.618625511Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 02:20:46.618789 containerd[1608]: time="2025-09-09T02:20:46.618763893Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 02:20:46.618891 containerd[1608]: time="2025-09-09T02:20:46.618865886Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 02:20:46.619026 containerd[1608]: time="2025-09-09T02:20:46.618999337Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 02:20:46.619140 containerd[1608]: time="2025-09-09T02:20:46.619114665Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 02:20:46.619263 containerd[1608]: time="2025-09-09T02:20:46.619237284Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 02:20:46.619425 containerd[1608]: time="2025-09-09T02:20:46.619343355Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 02:20:46.619529 containerd[1608]: time="2025-09-09T02:20:46.619505657Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 02:20:46.619621 containerd[1608]: time="2025-09-09T02:20:46.619597681Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.619700529Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.619734064Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.619893785Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620024352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620058896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620090059Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620110420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620128242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620145877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 02:20:46.620239 containerd[1608]: time="2025-09-09T02:20:46.620168438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 02:20:46.620632 containerd[1608]: time="2025-09-09T02:20:46.620206933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 02:20:46.620756 containerd[1608]: time="2025-09-09T02:20:46.620729337Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 02:20:46.620975 containerd[1608]: time="2025-09-09T02:20:46.620897158Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 02:20:46.621206 containerd[1608]: time="2025-09-09T02:20:46.621178439Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 02:20:46.621423 containerd[1608]: time="2025-09-09T02:20:46.621398059Z" level=info msg="Start snapshots syncer" Sep 9 02:20:46.621632 containerd[1608]: time="2025-09-09T02:20:46.621605001Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 02:20:46.621632 containerd[1608]: time="2025-09-09T02:20:46.622040454Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 02:20:46.622505 containerd[1608]: time="2025-09-09T02:20:46.622120991Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 02:20:46.622907 containerd[1608]: time="2025-09-09T02:20:46.622878793Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 02:20:46.623187 containerd[1608]: time="2025-09-09T02:20:46.623158992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 02:20:46.623373 containerd[1608]: time="2025-09-09T02:20:46.623308870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623447591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623482584Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623518848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623537372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623554552Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623612467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 02:20:46.623710 containerd[1608]: 
time="2025-09-09T02:20:46.623635623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 02:20:46.623710 containerd[1608]: time="2025-09-09T02:20:46.623653498Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 02:20:46.624200 containerd[1608]: time="2025-09-09T02:20:46.624083491Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 02:20:46.624324 containerd[1608]: time="2025-09-09T02:20:46.624297162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624387403Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624416332Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624431397Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624447317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624472746Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624508954Z" level=info msg="runtime interface created" Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624521450Z" level=info msg="created NRI interface" Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624537211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 02:20:46.624628 containerd[1608]: time="2025-09-09T02:20:46.624569285Z" level=info msg="Connect containerd service" Sep 9 02:20:46.625039 containerd[1608]: time="2025-09-09T02:20:46.624951405Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 02:20:46.627112 containerd[1608]: time="2025-09-09T02:20:46.626751006Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 02:20:46.712332 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 02:20:46.743624 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:46.772534 polkitd[1656]: Started polkitd version 126 Sep 9 02:20:46.798144 polkitd[1656]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 02:20:46.798764 polkitd[1656]: Loading rules from directory /run/polkit-1/rules.d Sep 9 02:20:46.798845 polkitd[1656]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 02:20:46.799202 polkitd[1656]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 9 02:20:46.801309 polkitd[1656]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory 
(g-file-error-quark, 4) Sep 9 02:20:46.801388 polkitd[1656]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 02:20:46.802121 polkitd[1656]: Finished loading, compiling and executing 2 rules Sep 9 02:20:46.802673 systemd[1]: Started polkit.service - Authorization Manager. Sep 9 02:20:46.805757 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 9 02:20:46.810574 polkitd[1656]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 9 02:20:46.822178 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 02:20:46.835052 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 02:20:46.840664 systemd[1]: Started sshd@0-10.230.31.10:22-139.178.68.195:39484.service - OpenSSH per-connection server daemon (139.178.68.195:39484). Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855327218Z" level=info msg="Start subscribing containerd event" Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855463393Z" level=info msg="Start recovering state" Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855759047Z" level=info msg="Start event monitor" Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855811380Z" level=info msg="Start cni network conf syncer for default" Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855840012Z" level=info msg="Start streaming server" Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855884468Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855900434Z" level=info msg="runtime interface starting up..." Sep 9 02:20:46.855956 containerd[1608]: time="2025-09-09T02:20:46.855913790Z" level=info msg="starting plugins..." Sep 9 02:20:46.856465 containerd[1608]: time="2025-09-09T02:20:46.856264905Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 02:20:46.857323 containerd[1608]: time="2025-09-09T02:20:46.856596512Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 02:20:46.857323 containerd[1608]: time="2025-09-09T02:20:46.856714663Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 02:20:46.857614 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 02:20:46.862853 containerd[1608]: time="2025-09-09T02:20:46.862740250Z" level=info msg="containerd successfully booted in 0.322724s" Sep 9 02:20:46.870735 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 02:20:46.871067 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 02:20:46.884443 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 02:20:46.893633 systemd-hostnamed[1626]: Hostname set to (static) Sep 9 02:20:46.925194 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 02:20:46.930498 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 02:20:46.934537 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 02:20:46.936665 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 02:20:47.159139 tar[1590]: linux-amd64/README.md Sep 9 02:20:47.177187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
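For context on the escaped CRI configuration containerd dumped a few entries above (note "SystemdCgroup":true and the runc runtime type io.containerd.runc.v2): those settings are the kind normally supplied via /etc/containerd/config.toml. Purely as a hedged illustration, and not the actual file shipped on this Flatcar image, the familiar version-2 TOML fragment for that runc option (which containerd 2.x migrates at startup, as the "Configuration migrated from version 2" entry above records) would look roughly like:

    version = 2
    # Illustrative sketch only; the real config lives elsewhere on the image.
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        # Matches the "SystemdCgroup":true value in the dumped CRI config:
        # container cgroups are managed through systemd rather than cgroupfs.
        SystemdCgroup = true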
Sep 9 02:20:47.388635 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:47.538498 systemd-networkd[1510]: eth0: Gained IPv6LL Sep 9 02:20:47.543155 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 02:20:47.546546 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 02:20:47.551173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:20:47.554556 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 02:20:47.594057 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 02:20:47.784802 sshd[1689]: Accepted publickey for core from 139.178.68.195 port 39484 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:20:47.788503 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:20:47.803969 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 02:20:47.806505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 02:20:47.830299 systemd-logind[1583]: New session 1 of user core. Sep 9 02:20:47.848894 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 02:20:47.857293 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 02:20:47.877454 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 02:20:47.882256 systemd-logind[1583]: New session c1 of user core. Sep 9 02:20:48.071428 systemd[1717]: Queued start job for default target default.target. Sep 9 02:20:48.078673 systemd[1717]: Created slice app.slice - User Application Slice. Sep 9 02:20:48.078718 systemd[1717]: Reached target paths.target - Paths. Sep 9 02:20:48.078819 systemd[1717]: Reached target timers.target - Timers. Sep 9 02:20:48.080846 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 02:20:48.099107 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 02:20:48.100385 systemd[1717]: Reached target sockets.target - Sockets. Sep 9 02:20:48.100560 systemd[1717]: Reached target basic.target - Basic System. Sep 9 02:20:48.100773 systemd[1717]: Reached target default.target - Main User Target. Sep 9 02:20:48.100978 systemd[1717]: Startup finished in 207ms. Sep 9 02:20:48.101232 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 02:20:48.109531 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 02:20:48.632170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:20:48.647847 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 02:20:48.746614 systemd[1]: Started sshd@1-10.230.31.10:22-139.178.68.195:39500.service - OpenSSH per-connection server daemon (139.178.68.195:39500). Sep 9 02:20:48.788608 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:49.048662 systemd-networkd[1510]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87c2:24:19ff:fee6:1f0a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87c2:24:19ff:fee6:1f0a/64 assigned by NDisc. Sep 9 02:20:49.050639 systemd-networkd[1510]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Sep 9 02:20:49.348701 kubelet[1732]: E0909 02:20:49.348488 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 02:20:49.351923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 02:20:49.352208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 02:20:49.353198 systemd[1]: kubelet.service: Consumed 1.099s CPU time, 262.7M memory peak. Sep 9 02:20:49.406261 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:49.673684 sshd[1734]: Accepted publickey for core from 139.178.68.195 port 39500 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:20:49.675867 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:20:49.684648 systemd-logind[1583]: New session 2 of user core. Sep 9 02:20:49.707454 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 02:20:50.298760 sshd[1745]: Connection closed by 139.178.68.195 port 39500 Sep 9 02:20:50.299604 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 9 02:20:50.304354 systemd[1]: sshd@1-10.230.31.10:22-139.178.68.195:39500.service: Deactivated successfully. Sep 9 02:20:50.306706 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 02:20:50.308299 systemd-logind[1583]: Session 2 logged out. Waiting for processes to exit. Sep 9 02:20:50.310110 systemd-logind[1583]: Removed session 2. Sep 9 02:20:50.458320 systemd[1]: Started sshd@2-10.230.31.10:22-139.178.68.195:44830.service - OpenSSH per-connection server daemon (139.178.68.195:44830). Sep 9 02:20:51.383327 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 44830 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:20:51.385914 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:20:51.395516 systemd-logind[1583]: New session 3 of user core. Sep 9 02:20:51.403576 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 02:20:51.997166 login[1698]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 02:20:52.011580 sshd[1753]: Connection closed by 139.178.68.195 port 44830 Sep 9 02:20:52.011362 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Sep 9 02:20:52.012474 systemd-logind[1583]: New session 4 of user core. Sep 9 02:20:52.016612 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 02:20:52.027563 systemd[1]: sshd@2-10.230.31.10:22-139.178.68.195:44830.service: Deactivated successfully. Sep 9 02:20:52.031806 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 02:20:52.034454 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit. Sep 9 02:20:52.043744 systemd-logind[1583]: Removed session 3. Sep 9 02:20:52.045444 login[1697]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 02:20:52.055505 systemd-logind[1583]: New session 5 of user core. Sep 9 02:20:52.062028 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 02:20:52.812262 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:52.829880 coreos-metadata[1571]: Sep 09 02:20:52.829 WARN failed to locate config-drive, using the metadata service API instead Sep 9 02:20:52.954765 coreos-metadata[1571]: Sep 09 02:20:52.954 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 9 02:20:52.965023 coreos-metadata[1571]: Sep 09 02:20:52.964 INFO Fetch failed with 404: resource not found Sep 9 02:20:52.965023 coreos-metadata[1571]: Sep 09 02:20:52.964 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 9 02:20:52.965534 coreos-metadata[1571]: Sep 09 02:20:52.965 INFO Fetch successful Sep 9 02:20:52.965627 coreos-metadata[1571]: Sep 09 02:20:52.965 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 9 02:20:52.980551 coreos-metadata[1571]: Sep 09 02:20:52.980 INFO Fetch successful Sep 9 02:20:52.980812 coreos-metadata[1571]: Sep 09 02:20:52.980 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 9 02:20:53.006592 coreos-metadata[1571]: Sep 09 02:20:53.006 INFO Fetch successful Sep 9 02:20:53.006592 coreos-metadata[1571]: Sep 09 02:20:53.006 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 9 02:20:53.021178 coreos-metadata[1571]: Sep 09 02:20:53.021 INFO Fetch successful Sep 9 02:20:53.021178 coreos-metadata[1571]: Sep 09 02:20:53.021 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 9 02:20:53.046619 coreos-metadata[1571]: Sep 09 02:20:53.046 INFO Fetch successful Sep 9 02:20:53.093233 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 02:20:53.095541 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 02:20:53.425281 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 9 02:20:53.440792 coreos-metadata[1646]: Sep 09 02:20:53.440 WARN failed to locate config-drive, using the metadata service API instead Sep 9 02:20:53.463841 coreos-metadata[1646]: Sep 09 02:20:53.463 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 9 02:20:53.495041 coreos-metadata[1646]: Sep 09 02:20:53.494 INFO Fetch successful Sep 9 02:20:53.495271 coreos-metadata[1646]: Sep 09 02:20:53.495 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 9 02:20:53.545010 coreos-metadata[1646]: Sep 09 02:20:53.544 INFO Fetch successful Sep 9 02:20:53.547987 unknown[1646]: wrote ssh authorized keys file for user: core Sep 9 02:20:53.577932 update-ssh-keys[1791]: Updated "/home/core/.ssh/authorized_keys" Sep 9 02:20:53.579592 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 9 02:20:53.583697 systemd[1]: Finished sshkeys.service. Sep 9 02:20:53.586265 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 02:20:53.586755 systemd[1]: Startup finished in 3.631s (kernel) + 16.056s (initrd) + 11.927s (userspace) = 31.615s. Sep 9 02:20:59.602896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 02:20:59.605293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:20:59.837624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 02:20:59.852000 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 02:20:59.917671 kubelet[1803]: E0909 02:20:59.917582 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 02:20:59.922950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 02:20:59.923302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 02:20:59.924089 systemd[1]: kubelet.service: Consumed 246ms CPU time, 110.9M memory peak. Sep 9 02:21:02.183633 systemd[1]: Started sshd@3-10.230.31.10:22-139.178.68.195:43476.service - OpenSSH per-connection server daemon (139.178.68.195:43476). Sep 9 02:21:03.171475 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 43476 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:21:03.173702 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:21:03.181578 systemd-logind[1583]: New session 6 of user core. Sep 9 02:21:03.188522 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 02:21:03.833926 sshd[1813]: Connection closed by 139.178.68.195 port 43476 Sep 9 02:21:03.834668 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Sep 9 02:21:03.842375 systemd[1]: sshd@3-10.230.31.10:22-139.178.68.195:43476.service: Deactivated successfully. Sep 9 02:21:03.845115 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 02:21:03.847586 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit. Sep 9 02:21:03.849799 systemd-logind[1583]: Removed session 6. Sep 9 02:21:04.000909 systemd[1]: Started sshd@4-10.230.31.10:22-139.178.68.195:43482.service - OpenSSH per-connection server daemon (139.178.68.195:43482). Sep 9 02:21:04.922894 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 43482 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:21:04.924844 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:21:04.933183 systemd-logind[1583]: New session 7 of user core. Sep 9 02:21:04.940504 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 02:21:05.540328 sshd[1821]: Connection closed by 139.178.68.195 port 43482 Sep 9 02:21:05.541864 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Sep 9 02:21:05.549123 systemd[1]: sshd@4-10.230.31.10:22-139.178.68.195:43482.service: Deactivated successfully. Sep 9 02:21:05.552172 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 02:21:05.554271 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit. Sep 9 02:21:05.556772 systemd-logind[1583]: Removed session 7. Sep 9 02:21:05.697292 systemd[1]: Started sshd@5-10.230.31.10:22-139.178.68.195:43492.service - OpenSSH per-connection server daemon (139.178.68.195:43492). 
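The recurring kubelet failure recorded above is the unit starting before any node configuration exists: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, and systemd keeps restarting the service (restart counter 1, then 2) until that file appears. As an illustrative sketch only, not the configuration later used on this host, a minimal KubeletConfiguration of the kind kubeadm generates looks like:

    # /var/lib/kubelet/config.yaml -- normally generated by kubeadm; shown here only as a sketch
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Pairs with the SystemdCgroup=true runc option in the containerd CRI config above,
    # so kubelet and the runtime agree on the systemd cgroup driver.
    cgroupDriver: systemd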
Sep 9 02:21:06.616563 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 43492 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:21:06.618775 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:21:06.627853 systemd-logind[1583]: New session 8 of user core. Sep 9 02:21:06.634477 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 02:21:07.237255 sshd[1829]: Connection closed by 139.178.68.195 port 43492 Sep 9 02:21:07.237519 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Sep 9 02:21:07.242687 systemd[1]: sshd@5-10.230.31.10:22-139.178.68.195:43492.service: Deactivated successfully. Sep 9 02:21:07.244891 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 02:21:07.246569 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit. Sep 9 02:21:07.248511 systemd-logind[1583]: Removed session 8. Sep 9 02:21:07.399594 systemd[1]: Started sshd@6-10.230.31.10:22-139.178.68.195:43504.service - OpenSSH per-connection server daemon (139.178.68.195:43504). Sep 9 02:21:08.348120 sshd[1835]: Accepted publickey for core from 139.178.68.195 port 43504 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:21:08.350775 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:21:08.364733 systemd-logind[1583]: New session 9 of user core. Sep 9 02:21:08.381537 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 02:21:08.854813 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 02:21:08.855319 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 02:21:08.873654 sudo[1838]: pam_unix(sudo:session): session closed for user root Sep 9 02:21:09.024240 sshd[1837]: Connection closed by 139.178.68.195 port 43504 Sep 9 02:21:09.023628 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Sep 9 02:21:09.029447 systemd[1]: sshd@6-10.230.31.10:22-139.178.68.195:43504.service: Deactivated successfully. Sep 9 02:21:09.031630 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 02:21:09.032912 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit. Sep 9 02:21:09.034733 systemd-logind[1583]: Removed session 9. Sep 9 02:21:09.180316 systemd[1]: Started sshd@7-10.230.31.10:22-139.178.68.195:43516.service - OpenSSH per-connection server daemon (139.178.68.195:43516). Sep 9 02:21:09.937708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 02:21:09.941205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:10.089311 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 43516 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:21:10.091322 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:21:10.102148 systemd-logind[1583]: New session 10 of user core. Sep 9 02:21:10.108660 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 02:21:10.144843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 02:21:10.156712 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 02:21:10.255860 kubelet[1855]: E0909 02:21:10.255636 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 02:21:10.259028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 02:21:10.259422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 02:21:10.260425 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.2M memory peak. Sep 9 02:21:10.567137 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 02:21:10.568150 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 02:21:10.575660 sudo[1863]: pam_unix(sudo:session): session closed for user root Sep 9 02:21:10.583450 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 02:21:10.583876 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 02:21:10.600544 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 02:21:10.658040 augenrules[1885]: No rules Sep 9 02:21:10.658954 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 02:21:10.659365 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 02:21:10.661441 sudo[1862]: pam_unix(sudo:session): session closed for user root Sep 9 02:21:10.805722 sshd[1851]: Connection closed by 139.178.68.195 port 43516 Sep 9 02:21:10.806163 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Sep 9 02:21:10.811641 systemd[1]: sshd@7-10.230.31.10:22-139.178.68.195:43516.service: Deactivated successfully. Sep 9 02:21:10.813934 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 02:21:10.815409 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. Sep 9 02:21:10.817628 systemd-logind[1583]: Removed session 10. Sep 9 02:21:10.962187 systemd[1]: Started sshd@8-10.230.31.10:22-139.178.68.195:56290.service - OpenSSH per-connection server daemon (139.178.68.195:56290). Sep 9 02:21:11.871503 sshd[1894]: Accepted publickey for core from 139.178.68.195 port 56290 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:21:11.874081 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:21:11.880693 systemd-logind[1583]: New session 11 of user core. Sep 9 02:21:11.894501 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 02:21:12.354374 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 02:21:12.354807 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 02:21:12.819509 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 9 02:21:12.843870 (dockerd)[1916]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 02:21:13.186966 dockerd[1916]: time="2025-09-09T02:21:13.185733869Z" level=info msg="Starting up" Sep 9 02:21:13.187938 dockerd[1916]: time="2025-09-09T02:21:13.187891072Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 02:21:13.225232 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1287130957-merged.mount: Deactivated successfully. Sep 9 02:21:13.267100 dockerd[1916]: time="2025-09-09T02:21:13.266801981Z" level=info msg="Loading containers: start." Sep 9 02:21:13.282817 kernel: Initializing XFRM netlink socket Sep 9 02:21:13.611965 systemd-networkd[1510]: docker0: Link UP Sep 9 02:21:13.616029 dockerd[1916]: time="2025-09-09T02:21:13.615956654Z" level=info msg="Loading containers: done." Sep 9 02:21:13.636268 dockerd[1916]: time="2025-09-09T02:21:13.636100298Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 02:21:13.636268 dockerd[1916]: time="2025-09-09T02:21:13.636257059Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 9 02:21:13.636488 dockerd[1916]: time="2025-09-09T02:21:13.636457465Z" level=info msg="Initializing buildkit" Sep 9 02:21:13.664477 dockerd[1916]: time="2025-09-09T02:21:13.664409614Z" level=info msg="Completed buildkit initialization" Sep 9 02:21:13.674010 dockerd[1916]: time="2025-09-09T02:21:13.673927971Z" level=info msg="Daemon has completed initialization" Sep 9 02:21:13.674455 dockerd[1916]: time="2025-09-09T02:21:13.674364918Z" level=info msg="API listen on /run/docker.sock" Sep 9 02:21:13.674841 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 02:21:14.822973 containerd[1608]: time="2025-09-09T02:21:14.822701799Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 02:21:15.807314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827533291.mount: Deactivated successfully. 
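
dockerd reports its API listening on /run/docker.sock once initialization completes. As a small sketch, not part of the original boot flow, the daemon can be queried over that unix socket with a plain HTTP client; the socket path comes from the log, while the /version endpoint and the placeholder host name are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

// Minimal sketch: query the Docker Engine API over the unix socket the log
// says it listens on. Error handling is deliberately kept small.
func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/version") // host part is ignored over a unix socket
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
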
Sep 9 02:21:17.971298 containerd[1608]: time="2025-09-09T02:21:17.971172002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:17.973012 containerd[1608]: time="2025-09-09T02:21:17.972708587Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800695" Sep 9 02:21:17.973779 containerd[1608]: time="2025-09-09T02:21:17.973739515Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:17.977461 containerd[1608]: time="2025-09-09T02:21:17.977420405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:17.978978 containerd[1608]: time="2025-09-09T02:21:17.978937968Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 3.156122814s" Sep 9 02:21:17.979132 containerd[1608]: time="2025-09-09T02:21:17.979101617Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 02:21:17.980364 containerd[1608]: time="2025-09-09T02:21:17.980324350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 02:21:19.068991 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 9 02:21:20.294751 containerd[1608]: time="2025-09-09T02:21:20.294656041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:20.297643 containerd[1608]: time="2025-09-09T02:21:20.297585587Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784136" Sep 9 02:21:20.298923 containerd[1608]: time="2025-09-09T02:21:20.298854668Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:20.302981 containerd[1608]: time="2025-09-09T02:21:20.302176524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:20.303764 containerd[1608]: time="2025-09-09T02:21:20.303719918Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.323352474s" Sep 9 02:21:20.303845 containerd[1608]: time="2025-09-09T02:21:20.303772109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 02:21:20.304955 containerd[1608]: time="2025-09-09T02:21:20.304920293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 02:21:20.473967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 02:21:20.478133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:20.714344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:20.727994 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 02:21:20.791679 kubelet[2191]: E0909 02:21:20.791444 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 02:21:20.795321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 02:21:20.795624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 02:21:20.796515 systemd[1]: kubelet.service: Consumed 249ms CPU time, 107.9M memory peak. 
Sep 9 02:21:22.796241 containerd[1608]: time="2025-09-09T02:21:22.796163713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:22.797500 containerd[1608]: time="2025-09-09T02:21:22.797464836Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175044" Sep 9 02:21:22.799938 containerd[1608]: time="2025-09-09T02:21:22.798268712Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:22.801418 containerd[1608]: time="2025-09-09T02:21:22.801383563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:22.802763 containerd[1608]: time="2025-09-09T02:21:22.802722899Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.497760589s" Sep 9 02:21:22.802861 containerd[1608]: time="2025-09-09T02:21:22.802766073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 02:21:22.803482 containerd[1608]: time="2025-09-09T02:21:22.803449199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 02:21:25.464282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208717372.mount: Deactivated successfully. 
Sep 9 02:21:26.177549 containerd[1608]: time="2025-09-09T02:21:26.177433167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:26.179605 containerd[1608]: time="2025-09-09T02:21:26.179542166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897178" Sep 9 02:21:26.181254 containerd[1608]: time="2025-09-09T02:21:26.180634286Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:26.182641 containerd[1608]: time="2025-09-09T02:21:26.182600776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:26.183638 containerd[1608]: time="2025-09-09T02:21:26.183602826Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 3.380113515s" Sep 9 02:21:26.183811 containerd[1608]: time="2025-09-09T02:21:26.183783520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 02:21:26.184748 containerd[1608]: time="2025-09-09T02:21:26.184696694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 02:21:26.848166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1511577588.mount: Deactivated successfully. 
Sep 9 02:21:29.436192 containerd[1608]: time="2025-09-09T02:21:29.436125933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:29.437520 containerd[1608]: time="2025-09-09T02:21:29.437412903Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 9 02:21:29.438182 containerd[1608]: time="2025-09-09T02:21:29.438144807Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:29.442253 containerd[1608]: time="2025-09-09T02:21:29.441626433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:29.443250 containerd[1608]: time="2025-09-09T02:21:29.443189427Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.258438574s" Sep 9 02:21:29.443410 containerd[1608]: time="2025-09-09T02:21:29.443381003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 02:21:29.444337 containerd[1608]: time="2025-09-09T02:21:29.444307086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 02:21:30.104493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483960259.mount: Deactivated successfully. 
Sep 9 02:21:30.109790 containerd[1608]: time="2025-09-09T02:21:30.109747682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 02:21:30.111757 containerd[1608]: time="2025-09-09T02:21:30.111719671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 9 02:21:30.112490 containerd[1608]: time="2025-09-09T02:21:30.112436802Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 02:21:30.116807 containerd[1608]: time="2025-09-09T02:21:30.115604197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 02:21:30.116807 containerd[1608]: time="2025-09-09T02:21:30.116663438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 672.079941ms" Sep 9 02:21:30.116807 containerd[1608]: time="2025-09-09T02:21:30.116698234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 02:21:30.117909 containerd[1608]: time="2025-09-09T02:21:30.117856494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 02:21:30.814294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806138232.mount: Deactivated successfully. Sep 9 02:21:30.817941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 02:21:30.821329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:30.983310 update_engine[1584]: I20250909 02:21:30.982338 1584 update_attempter.cc:509] Updating boot flags... Sep 9 02:21:31.154466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:31.184727 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 02:21:31.324298 kubelet[2293]: E0909 02:21:31.324235 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 02:21:31.328556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 02:21:31.328783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 02:21:31.329274 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.5M memory peak. 
Sep 9 02:21:37.277950 containerd[1608]: time="2025-09-09T02:21:37.277870882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:37.279377 containerd[1608]: time="2025-09-09T02:21:37.278449498Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 9 02:21:37.280076 containerd[1608]: time="2025-09-09T02:21:37.280042870Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:37.285789 containerd[1608]: time="2025-09-09T02:21:37.284184075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:21:37.286564 containerd[1608]: time="2025-09-09T02:21:37.285654353Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 7.167750548s" Sep 9 02:21:37.286653 containerd[1608]: time="2025-09-09T02:21:37.286566665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 02:21:41.294610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:41.294878 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.5M memory peak. Sep 9 02:21:41.298893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:41.336394 systemd[1]: Reload requested from client PID 2379 ('systemctl') (unit session-11.scope)... Sep 9 02:21:41.336443 systemd[1]: Reloading... Sep 9 02:21:41.553258 zram_generator::config[2425]: No configuration found. Sep 9 02:21:41.667402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 02:21:41.851323 systemd[1]: Reloading finished in 514 ms. Sep 9 02:21:41.938096 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 02:21:41.938291 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 02:21:41.938776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:41.938847 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.3M memory peak. Sep 9 02:21:41.941335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:42.141503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:42.168144 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 02:21:42.229369 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 02:21:42.229369 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 9 02:21:42.229369 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 02:21:42.229926 kubelet[2491]: I0909 02:21:42.229449 2491 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 02:21:43.070250 kubelet[2491]: I0909 02:21:43.069436 2491 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 02:21:43.070250 kubelet[2491]: I0909 02:21:43.069480 2491 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 02:21:43.070250 kubelet[2491]: I0909 02:21:43.069838 2491 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 02:21:43.114463 kubelet[2491]: I0909 02:21:43.113635 2491 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 02:21:43.114787 kubelet[2491]: E0909 02:21:43.114749 2491 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.31.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:43.132996 kubelet[2491]: I0909 02:21:43.132962 2491 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 02:21:43.144045 kubelet[2491]: I0909 02:21:43.144014 2491 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 02:21:43.151347 kubelet[2491]: I0909 02:21:43.151255 2491 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 02:21:43.151789 kubelet[2491]: I0909 02:21:43.151503 2491 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-9tmcm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 02:21:43.154184 kubelet[2491]: I0909 02:21:43.153832 2491 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 02:21:43.154184 kubelet[2491]: I0909 02:21:43.153864 2491 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 02:21:43.155190 kubelet[2491]: I0909 02:21:43.155165 2491 state_mem.go:36] "Initialized new in-memory state store" Sep 9 02:21:43.159069 kubelet[2491]: I0909 02:21:43.159044 2491 kubelet.go:446] "Attempting to sync node with API server" Sep 9 02:21:43.159240 kubelet[2491]: I0909 02:21:43.159201 2491 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 02:21:43.161282 kubelet[2491]: I0909 02:21:43.161124 2491 kubelet.go:352] "Adding apiserver pod source" Sep 9 02:21:43.161282 kubelet[2491]: I0909 02:21:43.161168 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 02:21:43.166090 kubelet[2491]: W0909 02:21:43.166028 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.31.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9tmcm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:43.166174 kubelet[2491]: E0909 02:21:43.166106 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.31.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9tmcm.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:43.167506 
kubelet[2491]: I0909 02:21:43.167472 2491 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 02:21:43.171031 kubelet[2491]: I0909 02:21:43.171000 2491 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 02:21:43.171749 kubelet[2491]: W0909 02:21:43.171718 2491 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 02:21:43.175064 kubelet[2491]: W0909 02:21:43.175019 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.31.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:43.175248 kubelet[2491]: E0909 02:21:43.175190 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.31.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:43.176580 kubelet[2491]: I0909 02:21:43.176555 2491 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 02:21:43.176734 kubelet[2491]: I0909 02:21:43.176714 2491 server.go:1287] "Started kubelet" Sep 9 02:21:43.181247 kubelet[2491]: I0909 02:21:43.180641 2491 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 02:21:43.184433 kubelet[2491]: I0909 02:21:43.184364 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 02:21:43.185034 kubelet[2491]: I0909 02:21:43.185010 2491 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 02:21:43.189017 kubelet[2491]: I0909 02:21:43.188716 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 02:21:43.191109 kubelet[2491]: E0909 02:21:43.187579 2491 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.31.10:6443/api/v1/namespaces/default/events\": dial tcp 10.230.31.10:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-9tmcm.gb1.brightbox.com.18637be5059f4d31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-9tmcm.gb1.brightbox.com,UID:srv-9tmcm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-9tmcm.gb1.brightbox.com,},FirstTimestamp:2025-09-09 02:21:43.176678705 +0000 UTC m=+1.003759284,LastTimestamp:2025-09-09 02:21:43.176678705 +0000 UTC m=+1.003759284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-9tmcm.gb1.brightbox.com,}" Sep 9 02:21:43.196013 kubelet[2491]: I0909 02:21:43.195981 2491 server.go:479] "Adding debug handlers to kubelet server" Sep 9 02:21:43.200053 kubelet[2491]: I0909 02:21:43.200014 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 02:21:43.204943 kubelet[2491]: I0909 02:21:43.204902 2491 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 02:21:43.205322 kubelet[2491]: I0909 02:21:43.205245 
2491 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 02:21:43.206448 kubelet[2491]: I0909 02:21:43.205540 2491 reconciler.go:26] "Reconciler: start to sync state" Sep 9 02:21:43.206448 kubelet[2491]: E0909 02:21:43.205955 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" Sep 9 02:21:43.207160 kubelet[2491]: E0909 02:21:43.207083 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9tmcm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.10:6443: connect: connection refused" interval="200ms" Sep 9 02:21:43.207358 kubelet[2491]: W0909 02:21:43.207230 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.31.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:43.207358 kubelet[2491]: E0909 02:21:43.207305 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.31.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:43.209044 kubelet[2491]: I0909 02:21:43.209013 2491 factory.go:221] Registration of the systemd container factory successfully Sep 9 02:21:43.209157 kubelet[2491]: I0909 02:21:43.209127 2491 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 02:21:43.212551 kubelet[2491]: I0909 02:21:43.212494 2491 factory.go:221] Registration of the containerd container factory successfully Sep 9 02:21:43.214994 kubelet[2491]: E0909 02:21:43.214951 2491 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 02:21:43.251456 kubelet[2491]: I0909 02:21:43.251393 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 02:21:43.255185 kubelet[2491]: I0909 02:21:43.254840 2491 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 02:21:43.255185 kubelet[2491]: I0909 02:21:43.254864 2491 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 02:21:43.255185 kubelet[2491]: I0909 02:21:43.254898 2491 state_mem.go:36] "Initialized new in-memory state store" Sep 9 02:21:43.256474 kubelet[2491]: I0909 02:21:43.256449 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 02:21:43.267395 kubelet[2491]: I0909 02:21:43.256489 2491 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 02:21:43.267395 kubelet[2491]: I0909 02:21:43.256526 2491 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
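
Every reflector and lease error in this stretch fails the same way, dial tcp 10.230.31.10:6443: connect: connection refused, which is expected while the kube-apiserver static pod has not yet been created. A tiny, illustrative Go probe against that endpoint reproduces the condition; the address and port are taken from the log, everything else is an assumption for demonstration.

package main

import (
	"fmt"
	"net"
	"time"
)

// Reachability probe for the endpoint the kubelet keeps failing to reach.
// Until the kube-apiserver static pod is up, this prints the same
// "connection refused" the reflectors report above.
func main() {
	conn, err := net.DialTimeout("tcp", "10.230.31.10:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
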
Sep 9 02:21:43.267395 kubelet[2491]: I0909 02:21:43.256538 2491 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 02:21:43.267395 kubelet[2491]: E0909 02:21:43.256616 2491 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 02:21:43.267395 kubelet[2491]: W0909 02:21:43.257141 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.31.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:43.267395 kubelet[2491]: E0909 02:21:43.257178 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.31.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:43.271006 kubelet[2491]: I0909 02:21:43.270612 2491 policy_none.go:49] "None policy: Start" Sep 9 02:21:43.271006 kubelet[2491]: I0909 02:21:43.270672 2491 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 02:21:43.271006 kubelet[2491]: I0909 02:21:43.270703 2491 state_mem.go:35] "Initializing new in-memory state store" Sep 9 02:21:43.280325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 02:21:43.300390 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 02:21:43.305202 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 02:21:43.306582 kubelet[2491]: E0909 02:21:43.306515 2491 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" Sep 9 02:21:43.327299 kubelet[2491]: I0909 02:21:43.325984 2491 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 02:21:43.327299 kubelet[2491]: I0909 02:21:43.326624 2491 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 02:21:43.327299 kubelet[2491]: I0909 02:21:43.326653 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 02:21:43.331069 kubelet[2491]: E0909 02:21:43.331031 2491 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 02:21:43.331318 kubelet[2491]: E0909 02:21:43.331282 2491 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-9tmcm.gb1.brightbox.com\" not found" Sep 9 02:21:43.332175 kubelet[2491]: I0909 02:21:43.332149 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 02:21:43.374714 systemd[1]: Created slice kubepods-burstable-pod3e8e1d85b808f91d100d907fa34a703d.slice - libcontainer container kubepods-burstable-pod3e8e1d85b808f91d100d907fa34a703d.slice. 
Sep 9 02:21:43.387766 kubelet[2491]: E0909 02:21:43.387714 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.394815 systemd[1]: Created slice kubepods-burstable-pod25bfce627be85a086843b319810d3ba9.slice - libcontainer container kubepods-burstable-pod25bfce627be85a086843b319810d3ba9.slice. Sep 9 02:21:43.399002 kubelet[2491]: E0909 02:21:43.398956 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.401110 systemd[1]: Created slice kubepods-burstable-pod94d76ce4dc80b4cb43bc45d9675e2e5e.slice - libcontainer container kubepods-burstable-pod94d76ce4dc80b4cb43bc45d9675e2e5e.slice. Sep 9 02:21:43.404393 kubelet[2491]: E0909 02:21:43.404345 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.407809 kubelet[2491]: E0909 02:21:43.407770 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9tmcm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.10:6443: connect: connection refused" interval="400ms" Sep 9 02:21:43.430260 kubelet[2491]: I0909 02:21:43.429869 2491 kubelet_node_status.go:75] "Attempting to register node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.430668 kubelet[2491]: E0909 02:21:43.430620 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.10:6443/api/v1/nodes\": dial tcp 10.230.31.10:6443: connect: connection refused" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508482 kubelet[2491]: I0909 02:21:43.508342 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e1d85b808f91d100d907fa34a703d-ca-certs\") pod \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" (UID: \"3e8e1d85b808f91d100d907fa34a703d\") " pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508684 kubelet[2491]: I0909 02:21:43.508507 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e8e1d85b808f91d100d907fa34a703d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" (UID: \"3e8e1d85b808f91d100d907fa34a703d\") " pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508684 kubelet[2491]: I0909 02:21:43.508596 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-ca-certs\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508684 kubelet[2491]: I0909 02:21:43.508630 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-k8s-certs\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: 
\"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508824 kubelet[2491]: I0909 02:21:43.508695 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-kubeconfig\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508824 kubelet[2491]: I0909 02:21:43.508792 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508925 kubelet[2491]: I0909 02:21:43.508855 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94d76ce4dc80b4cb43bc45d9675e2e5e-kubeconfig\") pod \"kube-scheduler-srv-9tmcm.gb1.brightbox.com\" (UID: \"94d76ce4dc80b4cb43bc45d9675e2e5e\") " pod="kube-system/kube-scheduler-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.508925 kubelet[2491]: I0909 02:21:43.508894 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e1d85b808f91d100d907fa34a703d-k8s-certs\") pod \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" (UID: \"3e8e1d85b808f91d100d907fa34a703d\") " pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.509029 kubelet[2491]: I0909 02:21:43.508925 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-flexvolume-dir\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.634370 kubelet[2491]: I0909 02:21:43.633765 2491 kubelet_node_status.go:75] "Attempting to register node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.634370 kubelet[2491]: E0909 02:21:43.634194 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.10:6443/api/v1/nodes\": dial tcp 10.230.31.10:6443: connect: connection refused" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:43.690361 containerd[1608]: time="2025-09-09T02:21:43.690301734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-9tmcm.gb1.brightbox.com,Uid:3e8e1d85b808f91d100d907fa34a703d,Namespace:kube-system,Attempt:0,}" Sep 9 02:21:43.700911 containerd[1608]: time="2025-09-09T02:21:43.700685044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-9tmcm.gb1.brightbox.com,Uid:25bfce627be85a086843b319810d3ba9,Namespace:kube-system,Attempt:0,}" Sep 9 02:21:43.724970 containerd[1608]: time="2025-09-09T02:21:43.724907692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-9tmcm.gb1.brightbox.com,Uid:94d76ce4dc80b4cb43bc45d9675e2e5e,Namespace:kube-system,Attempt:0,}" Sep 9 02:21:43.809417 kubelet[2491]: E0909 02:21:43.809361 2491 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.31.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9tmcm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.10:6443: connect: connection refused" interval="800ms" Sep 9 02:21:43.855148 containerd[1608]: time="2025-09-09T02:21:43.855041132Z" level=info msg="connecting to shim 60d9e1985064368a6c5c567e5f90d2e1fd648dc344ec419e8a83cd8c45b1a135" address="unix:///run/containerd/s/54146d0877857b03676741028e1b9bc3e8d77a189bb0763af6fce3cad7a58bb5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:21:43.859851 containerd[1608]: time="2025-09-09T02:21:43.859751676Z" level=info msg="connecting to shim 1e282e784217b83ace0f1aa8117eeb95b89eff3cf054a87808f35d787e53ab5c" address="unix:///run/containerd/s/0439e0b21794d0a28af867b3e56d76d4bac5376115f1f4e0e6ee385bd0c273ef" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:21:43.860938 containerd[1608]: time="2025-09-09T02:21:43.860895308Z" level=info msg="connecting to shim 3d77f819a53541d3bf095e421ae2c91fe0e8817cabe8a604b3974d93cb4d3e40" address="unix:///run/containerd/s/c19b260624f566d93e5944964d9fe299f036d3940c68243ec7e5cfb13be4594d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:21:44.003517 systemd[1]: Started cri-containerd-3d77f819a53541d3bf095e421ae2c91fe0e8817cabe8a604b3974d93cb4d3e40.scope - libcontainer container 3d77f819a53541d3bf095e421ae2c91fe0e8817cabe8a604b3974d93cb4d3e40. Sep 9 02:21:44.006575 systemd[1]: Started cri-containerd-60d9e1985064368a6c5c567e5f90d2e1fd648dc344ec419e8a83cd8c45b1a135.scope - libcontainer container 60d9e1985064368a6c5c567e5f90d2e1fd648dc344ec419e8a83cd8c45b1a135. Sep 9 02:21:44.013421 systemd[1]: Started cri-containerd-1e282e784217b83ace0f1aa8117eeb95b89eff3cf054a87808f35d787e53ab5c.scope - libcontainer container 1e282e784217b83ace0f1aa8117eeb95b89eff3cf054a87808f35d787e53ab5c. 
Sep 9 02:21:44.037902 kubelet[2491]: I0909 02:21:44.037865 2491 kubelet_node_status.go:75] "Attempting to register node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:44.038507 kubelet[2491]: E0909 02:21:44.038463 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.10:6443/api/v1/nodes\": dial tcp 10.230.31.10:6443: connect: connection refused" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:44.053528 kubelet[2491]: W0909 02:21:44.053417 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.31.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:44.053528 kubelet[2491]: E0909 02:21:44.053487 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.31.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:44.133640 containerd[1608]: time="2025-09-09T02:21:44.133575100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-9tmcm.gb1.brightbox.com,Uid:25bfce627be85a086843b319810d3ba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e282e784217b83ace0f1aa8117eeb95b89eff3cf054a87808f35d787e53ab5c\"" Sep 9 02:21:44.148271 containerd[1608]: time="2025-09-09T02:21:44.147854072Z" level=info msg="CreateContainer within sandbox \"1e282e784217b83ace0f1aa8117eeb95b89eff3cf054a87808f35d787e53ab5c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 02:21:44.156944 containerd[1608]: time="2025-09-09T02:21:44.156896325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-9tmcm.gb1.brightbox.com,Uid:94d76ce4dc80b4cb43bc45d9675e2e5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d77f819a53541d3bf095e421ae2c91fe0e8817cabe8a604b3974d93cb4d3e40\"" Sep 9 02:21:44.161783 containerd[1608]: time="2025-09-09T02:21:44.161412838Z" level=info msg="Container e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:21:44.163104 containerd[1608]: time="2025-09-09T02:21:44.162405969Z" level=info msg="CreateContainer within sandbox \"3d77f819a53541d3bf095e421ae2c91fe0e8817cabe8a604b3974d93cb4d3e40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 02:21:44.172201 containerd[1608]: time="2025-09-09T02:21:44.172159033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-9tmcm.gb1.brightbox.com,Uid:3e8e1d85b808f91d100d907fa34a703d,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d9e1985064368a6c5c567e5f90d2e1fd648dc344ec419e8a83cd8c45b1a135\"" Sep 9 02:21:44.173120 containerd[1608]: time="2025-09-09T02:21:44.173084084Z" level=info msg="CreateContainer within sandbox \"1e282e784217b83ace0f1aa8117eeb95b89eff3cf054a87808f35d787e53ab5c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c\"" Sep 9 02:21:44.176235 containerd[1608]: time="2025-09-09T02:21:44.175117851Z" level=info msg="StartContainer for \"e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c\"" Sep 9 02:21:44.179955 containerd[1608]: time="2025-09-09T02:21:44.179265213Z" level=info msg="connecting to shim 
e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c" address="unix:///run/containerd/s/0439e0b21794d0a28af867b3e56d76d4bac5376115f1f4e0e6ee385bd0c273ef" protocol=ttrpc version=3 Sep 9 02:21:44.184400 containerd[1608]: time="2025-09-09T02:21:44.184363388Z" level=info msg="Container c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:21:44.188643 kubelet[2491]: W0909 02:21:44.188585 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.31.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:44.191928 kubelet[2491]: E0909 02:21:44.189289 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.31.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:44.192018 containerd[1608]: time="2025-09-09T02:21:44.190951060Z" level=info msg="CreateContainer within sandbox \"60d9e1985064368a6c5c567e5f90d2e1fd648dc344ec419e8a83cd8c45b1a135\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 02:21:44.198978 containerd[1608]: time="2025-09-09T02:21:44.198930892Z" level=info msg="CreateContainer within sandbox \"3d77f819a53541d3bf095e421ae2c91fe0e8817cabe8a604b3974d93cb4d3e40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47\"" Sep 9 02:21:44.200399 containerd[1608]: time="2025-09-09T02:21:44.200366171Z" level=info msg="StartContainer for \"c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47\"" Sep 9 02:21:44.204327 containerd[1608]: time="2025-09-09T02:21:44.204269470Z" level=info msg="connecting to shim c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47" address="unix:///run/containerd/s/c19b260624f566d93e5944964d9fe299f036d3940c68243ec7e5cfb13be4594d" protocol=ttrpc version=3 Sep 9 02:21:44.212268 containerd[1608]: time="2025-09-09T02:21:44.212227962Z" level=info msg="Container ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:21:44.218500 systemd[1]: Started cri-containerd-e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c.scope - libcontainer container e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c. 
Sep 9 02:21:44.233285 containerd[1608]: time="2025-09-09T02:21:44.233178397Z" level=info msg="CreateContainer within sandbox \"60d9e1985064368a6c5c567e5f90d2e1fd648dc344ec419e8a83cd8c45b1a135\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7\"" Sep 9 02:21:44.235581 containerd[1608]: time="2025-09-09T02:21:44.235551320Z" level=info msg="StartContainer for \"ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7\"" Sep 9 02:21:44.242988 containerd[1608]: time="2025-09-09T02:21:44.242941888Z" level=info msg="connecting to shim ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7" address="unix:///run/containerd/s/54146d0877857b03676741028e1b9bc3e8d77a189bb0763af6fce3cad7a58bb5" protocol=ttrpc version=3 Sep 9 02:21:44.251513 systemd[1]: Started cri-containerd-c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47.scope - libcontainer container c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47. Sep 9 02:21:44.303402 systemd[1]: Started cri-containerd-ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7.scope - libcontainer container ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7. Sep 9 02:21:44.309559 kubelet[2491]: W0909 02:21:44.308005 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.31.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9tmcm.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:44.309559 kubelet[2491]: E0909 02:21:44.308091 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.31.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9tmcm.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:44.353239 containerd[1608]: time="2025-09-09T02:21:44.353152046Z" level=info msg="StartContainer for \"e7f7624ea94bb173f0b8a5a0a91807f04dacdf309e91ed3304211bdcb9b69c2c\" returns successfully" Sep 9 02:21:44.433980 containerd[1608]: time="2025-09-09T02:21:44.433154748Z" level=info msg="StartContainer for \"c732883cb6635824e3bc0866095874dfc57bbfb4c9501a2e2bed676a93272d47\" returns successfully" Sep 9 02:21:44.435467 containerd[1608]: time="2025-09-09T02:21:44.435428600Z" level=info msg="StartContainer for \"ca53d2000c02da06061e8c1e854890e59d18678947c269371e9e05e15882ada7\" returns successfully" Sep 9 02:21:44.539263 kubelet[2491]: W0909 02:21:44.538731 2491 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.31.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.31.10:6443: connect: connection refused Sep 9 02:21:44.539608 kubelet[2491]: E0909 02:21:44.539549 2491 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.31.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.31.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 02:21:44.610781 kubelet[2491]: E0909 02:21:44.610492 2491 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.230.31.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9tmcm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.31.10:6443: connect: connection refused" interval="1.6s" Sep 9 02:21:44.842542 kubelet[2491]: I0909 02:21:44.842485 2491 kubelet_node_status.go:75] "Attempting to register node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:44.843155 kubelet[2491]: E0909 02:21:44.843116 2491 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.31.10:6443/api/v1/nodes\": dial tcp 10.230.31.10:6443: connect: connection refused" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:45.293357 kubelet[2491]: E0909 02:21:45.293028 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:45.295855 kubelet[2491]: E0909 02:21:45.295675 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:45.309849 kubelet[2491]: E0909 02:21:45.309814 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:46.309873 kubelet[2491]: E0909 02:21:46.309832 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:46.312717 kubelet[2491]: E0909 02:21:46.312689 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:46.313125 kubelet[2491]: E0909 02:21:46.313097 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:46.446782 kubelet[2491]: I0909 02:21:46.446746 2491 kubelet_node_status.go:75] "Attempting to register node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.313243 kubelet[2491]: E0909 02:21:47.312935 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.315290 kubelet[2491]: E0909 02:21:47.313293 2491 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.384323 kubelet[2491]: E0909 02:21:47.384260 2491 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-9tmcm.gb1.brightbox.com\" not found" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.475682 kubelet[2491]: E0909 02:21:47.475529 2491 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-9tmcm.gb1.brightbox.com.18637be5059f4d31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-9tmcm.gb1.brightbox.com,UID:srv-9tmcm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:srv-9tmcm.gb1.brightbox.com,},FirstTimestamp:2025-09-09 02:21:43.176678705 +0000 UTC m=+1.003759284,LastTimestamp:2025-09-09 02:21:43.176678705 +0000 UTC m=+1.003759284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-9tmcm.gb1.brightbox.com,}" Sep 9 02:21:47.489114 kubelet[2491]: I0909 02:21:47.488728 2491 kubelet_node_status.go:78] "Successfully registered node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.489302 kubelet[2491]: I0909 02:21:47.489156 2491 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.506826 kubelet[2491]: I0909 02:21:47.506763 2491 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.548176 kubelet[2491]: E0909 02:21:47.548125 2491 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.548642 kubelet[2491]: E0909 02:21:47.548605 2491 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.548642 kubelet[2491]: I0909 02:21:47.548639 2491 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.554512 kubelet[2491]: E0909 02:21:47.554368 2491 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.554512 kubelet[2491]: I0909 02:21:47.554403 2491 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:47.557501 kubelet[2491]: E0909 02:21:47.557468 2491 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-9tmcm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:48.176849 kubelet[2491]: I0909 02:21:48.176811 2491 apiserver.go:52] "Watching apiserver" Sep 9 02:21:48.207101 kubelet[2491]: I0909 02:21:48.207022 2491 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 02:21:49.678133 systemd[1]: Reload requested from client PID 2768 ('systemctl') (unit session-11.scope)... Sep 9 02:21:49.678159 systemd[1]: Reloading... Sep 9 02:21:49.796261 zram_generator::config[2813]: No configuration found. Sep 9 02:21:49.959997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 02:21:50.162255 systemd[1]: Reloading finished in 483 ms. Sep 9 02:21:50.211275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:50.222774 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 9 02:21:50.223178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:50.223283 systemd[1]: kubelet.service: Consumed 1.602s CPU time, 128.3M memory peak. Sep 9 02:21:50.226043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 02:21:50.507549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 02:21:50.520717 (kubelet)[2877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 02:21:50.618996 kubelet[2877]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 02:21:50.618996 kubelet[2877]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 02:21:50.618996 kubelet[2877]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 02:21:50.619622 kubelet[2877]: I0909 02:21:50.619093 2877 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 02:21:50.629958 kubelet[2877]: I0909 02:21:50.629881 2877 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 02:21:50.629958 kubelet[2877]: I0909 02:21:50.629916 2877 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 02:21:50.630296 kubelet[2877]: I0909 02:21:50.630262 2877 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 02:21:50.633383 kubelet[2877]: I0909 02:21:50.633330 2877 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 02:21:50.637140 kubelet[2877]: I0909 02:21:50.637086 2877 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 02:21:50.646599 kubelet[2877]: I0909 02:21:50.643296 2877 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 02:21:50.650206 kubelet[2877]: I0909 02:21:50.650154 2877 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 02:21:50.650637 kubelet[2877]: I0909 02:21:50.650562 2877 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 02:21:50.650917 kubelet[2877]: I0909 02:21:50.650629 2877 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-9tmcm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 02:21:50.651116 kubelet[2877]: I0909 02:21:50.650927 2877 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 02:21:50.651116 kubelet[2877]: I0909 02:21:50.650944 2877 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 02:21:50.651116 kubelet[2877]: I0909 02:21:50.651003 2877 state_mem.go:36] "Initialized new in-memory state store" Sep 9 02:21:50.651299 kubelet[2877]: I0909 02:21:50.651247 2877 kubelet.go:446] "Attempting to sync node with API server" Sep 9 02:21:50.651299 kubelet[2877]: I0909 02:21:50.651288 2877 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 02:21:50.651423 kubelet[2877]: I0909 02:21:50.651329 2877 kubelet.go:352] "Adding apiserver pod source" Sep 9 02:21:50.651423 kubelet[2877]: I0909 02:21:50.651353 2877 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 02:21:50.662552 kubelet[2877]: I0909 02:21:50.662518 2877 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 02:21:50.670442 kubelet[2877]: I0909 02:21:50.669466 2877 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 02:21:50.670647 kubelet[2877]: I0909 02:21:50.670614 2877 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 02:21:50.670799 kubelet[2877]: I0909 02:21:50.670780 2877 server.go:1287] "Started kubelet" Sep 9 02:21:50.679242 kubelet[2877]: I0909 02:21:50.679197 2877 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 02:21:50.695368 kubelet[2877]: I0909 02:21:50.694481 2877 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Sep 9 02:21:50.697241 kubelet[2877]: I0909 02:21:50.696598 2877 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 02:21:50.704005 sudo[2892]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 02:21:50.704544 sudo[2892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 02:21:50.706249 kubelet[2877]: I0909 02:21:50.705906 2877 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 02:21:50.707405 kubelet[2877]: I0909 02:21:50.702168 2877 server.go:479] "Adding debug handlers to kubelet server" Sep 9 02:21:50.712092 kubelet[2877]: I0909 02:21:50.712021 2877 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 02:21:50.712092 kubelet[2877]: I0909 02:21:50.702661 2877 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 02:21:50.717633 kubelet[2877]: I0909 02:21:50.717530 2877 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 02:21:50.719256 kubelet[2877]: I0909 02:21:50.718920 2877 reconciler.go:26] "Reconciler: start to sync state" Sep 9 02:21:50.723905 kubelet[2877]: I0909 02:21:50.723871 2877 factory.go:221] Registration of the systemd container factory successfully Sep 9 02:21:50.726589 kubelet[2877]: I0909 02:21:50.726357 2877 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 02:21:50.734653 kubelet[2877]: I0909 02:21:50.734572 2877 factory.go:221] Registration of the containerd container factory successfully Sep 9 02:21:50.742802 kubelet[2877]: E0909 02:21:50.742734 2877 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 02:21:50.772097 kubelet[2877]: I0909 02:21:50.770718 2877 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 02:21:50.774762 kubelet[2877]: I0909 02:21:50.774359 2877 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 02:21:50.774762 kubelet[2877]: I0909 02:21:50.774397 2877 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 02:21:50.774762 kubelet[2877]: I0909 02:21:50.774423 2877 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 02:21:50.774762 kubelet[2877]: I0909 02:21:50.774433 2877 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 02:21:50.774762 kubelet[2877]: E0909 02:21:50.774495 2877 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 02:21:50.852171 kubelet[2877]: I0909 02:21:50.852136 2877 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852404 2877 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852440 2877 state_mem.go:36] "Initialized new in-memory state store" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852659 2877 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852681 2877 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852710 2877 policy_none.go:49] "None policy: Start" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852724 2877 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852740 2877 state_mem.go:35] "Initializing new in-memory state store" Sep 9 02:21:50.853017 kubelet[2877]: I0909 02:21:50.852894 2877 state_mem.go:75] "Updated machine memory state" Sep 9 02:21:50.863378 kubelet[2877]: I0909 02:21:50.862396 2877 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 02:21:50.863378 kubelet[2877]: I0909 02:21:50.862665 2877 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 02:21:50.863378 kubelet[2877]: I0909 02:21:50.862683 2877 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 02:21:50.864713 kubelet[2877]: I0909 02:21:50.864692 2877 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 02:21:50.879168 kubelet[2877]: I0909 02:21:50.879110 2877 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.883513 kubelet[2877]: I0909 02:21:50.883478 2877 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.884088 kubelet[2877]: E0909 02:21:50.884016 2877 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 02:21:50.890591 kubelet[2877]: I0909 02:21:50.888187 2877 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.901538 kubelet[2877]: W0909 02:21:50.901503 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 02:21:50.902430 kubelet[2877]: W0909 02:21:50.902407 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 02:21:50.919515 kubelet[2877]: I0909 02:21:50.919372 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-flexvolume-dir\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.920444 kubelet[2877]: I0909 02:21:50.920381 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-kubeconfig\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.920723 kubelet[2877]: W0909 02:21:50.920005 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 02:21:50.921004 kubelet[2877]: I0909 02:21:50.920975 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.921557 kubelet[2877]: I0909 02:21:50.921428 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94d76ce4dc80b4cb43bc45d9675e2e5e-kubeconfig\") pod \"kube-scheduler-srv-9tmcm.gb1.brightbox.com\" (UID: \"94d76ce4dc80b4cb43bc45d9675e2e5e\") " pod="kube-system/kube-scheduler-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.921557 kubelet[2877]: I0909 02:21:50.921508 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e1d85b808f91d100d907fa34a703d-k8s-certs\") pod \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" (UID: \"3e8e1d85b808f91d100d907fa34a703d\") " pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.922094 kubelet[2877]: I0909 02:21:50.922029 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e8e1d85b808f91d100d907fa34a703d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" (UID: \"3e8e1d85b808f91d100d907fa34a703d\") " pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 
02:21:50.922341 kubelet[2877]: I0909 02:21:50.922272 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-ca-certs\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.922589 kubelet[2877]: I0909 02:21:50.922436 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25bfce627be85a086843b319810d3ba9-k8s-certs\") pod \"kube-controller-manager-srv-9tmcm.gb1.brightbox.com\" (UID: \"25bfce627be85a086843b319810d3ba9\") " pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.922828 kubelet[2877]: I0909 02:21:50.922703 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e8e1d85b808f91d100d907fa34a703d-ca-certs\") pod \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" (UID: \"3e8e1d85b808f91d100d907fa34a703d\") " pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:50.996154 kubelet[2877]: I0909 02:21:50.995559 2877 kubelet_node_status.go:75] "Attempting to register node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:51.025890 kubelet[2877]: I0909 02:21:51.025071 2877 kubelet_node_status.go:124] "Node was previously registered" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:51.026697 kubelet[2877]: I0909 02:21:51.025205 2877 kubelet_node_status.go:78] "Successfully registered node" node="srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:51.548161 sudo[2892]: pam_unix(sudo:session): session closed for user root Sep 9 02:21:51.653240 kubelet[2877]: I0909 02:21:51.653016 2877 apiserver.go:52] "Watching apiserver" Sep 9 02:21:51.712897 kubelet[2877]: I0909 02:21:51.712830 2877 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 02:21:51.827951 kubelet[2877]: I0909 02:21:51.826056 2877 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:51.858246 kubelet[2877]: W0909 02:21:51.857861 2877 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 02:21:51.858545 kubelet[2877]: E0909 02:21:51.858516 2877 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-9tmcm.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" Sep 9 02:21:51.993268 kubelet[2877]: I0909 02:21:51.992297 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-9tmcm.gb1.brightbox.com" podStartSLOduration=1.992262226 podStartE2EDuration="1.992262226s" podCreationTimestamp="2025-09-09 02:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:21:51.991663429 +0000 UTC m=+1.456821810" watchObservedRunningTime="2025-09-09 02:21:51.992262226 +0000 UTC m=+1.457420618" Sep 9 02:21:51.993680 kubelet[2877]: I0909 02:21:51.993638 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-9tmcm.gb1.brightbox.com" podStartSLOduration=1.9936258100000002 
podStartE2EDuration="1.99362581s" podCreationTimestamp="2025-09-09 02:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:21:51.971658726 +0000 UTC m=+1.436817108" watchObservedRunningTime="2025-09-09 02:21:51.99362581 +0000 UTC m=+1.458784202" Sep 9 02:21:52.012326 kubelet[2877]: I0909 02:21:52.012248 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-9tmcm.gb1.brightbox.com" podStartSLOduration=2.012228198 podStartE2EDuration="2.012228198s" podCreationTimestamp="2025-09-09 02:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:21:52.011089255 +0000 UTC m=+1.476247631" watchObservedRunningTime="2025-09-09 02:21:52.012228198 +0000 UTC m=+1.477386575" Sep 9 02:21:53.483054 sudo[1897]: pam_unix(sudo:session): session closed for user root Sep 9 02:21:53.628280 sshd[1896]: Connection closed by 139.178.68.195 port 56290 Sep 9 02:21:53.629583 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Sep 9 02:21:53.635941 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit. Sep 9 02:21:53.636906 systemd[1]: sshd@8-10.230.31.10:22-139.178.68.195:56290.service: Deactivated successfully. Sep 9 02:21:53.641890 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 02:21:53.642411 systemd[1]: session-11.scope: Consumed 6.340s CPU time, 210.8M memory peak. Sep 9 02:21:53.649552 systemd-logind[1583]: Removed session 11. Sep 9 02:21:55.616516 kubelet[2877]: I0909 02:21:55.616321 2877 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 02:21:55.617038 containerd[1608]: time="2025-09-09T02:21:55.616811194Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 02:21:55.618292 kubelet[2877]: I0909 02:21:55.617037 2877 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 02:21:56.668130 systemd[1]: Created slice kubepods-besteffort-pod0c91ebd6_1c03_4d12_baa8_42539c32e911.slice - libcontainer container kubepods-besteffort-pod0c91ebd6_1c03_4d12_baa8_42539c32e911.slice. Sep 9 02:21:56.687167 systemd[1]: Created slice kubepods-burstable-podb9485dcc_774e_4477_86d0_653dadf63239.slice - libcontainer container kubepods-burstable-podb9485dcc_774e_4477_86d0_653dadf63239.slice. 
Sep 9 02:21:56.764238 kubelet[2877]: I0909 02:21:56.763451 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-xtables-lock\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.764238 kubelet[2877]: I0909 02:21:56.763522 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-bpf-maps\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.764238 kubelet[2877]: I0909 02:21:56.763557 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mmv\" (UniqueName: \"kubernetes.io/projected/0c91ebd6-1c03-4d12-baa8-42539c32e911-kube-api-access-k5mmv\") pod \"kube-proxy-zng7n\" (UID: \"0c91ebd6-1c03-4d12-baa8-42539c32e911\") " pod="kube-system/kube-proxy-zng7n" Sep 9 02:21:56.764238 kubelet[2877]: I0909 02:21:56.763641 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-run\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.764238 kubelet[2877]: I0909 02:21:56.763669 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-cgroup\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.764238 kubelet[2877]: I0909 02:21:56.763696 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9485dcc-774e-4477-86d0-653dadf63239-clustermesh-secrets\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765274 kubelet[2877]: I0909 02:21:56.763725 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-net\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765274 kubelet[2877]: I0909 02:21:56.763750 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-etc-cni-netd\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765274 kubelet[2877]: I0909 02:21:56.763775 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdnff\" (UniqueName: \"kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-kube-api-access-xdnff\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765274 kubelet[2877]: I0909 02:21:56.763801 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-hostproc\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765274 kubelet[2877]: I0909 02:21:56.763828 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-hubble-tls\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765274 kubelet[2877]: I0909 02:21:56.763854 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c91ebd6-1c03-4d12-baa8-42539c32e911-lib-modules\") pod \"kube-proxy-zng7n\" (UID: \"0c91ebd6-1c03-4d12-baa8-42539c32e911\") " pod="kube-system/kube-proxy-zng7n" Sep 9 02:21:56.765686 kubelet[2877]: I0909 02:21:56.763881 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-kernel\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765686 kubelet[2877]: I0909 02:21:56.763916 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c91ebd6-1c03-4d12-baa8-42539c32e911-kube-proxy\") pod \"kube-proxy-zng7n\" (UID: \"0c91ebd6-1c03-4d12-baa8-42539c32e911\") " pod="kube-system/kube-proxy-zng7n" Sep 9 02:21:56.765686 kubelet[2877]: I0909 02:21:56.763942 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c91ebd6-1c03-4d12-baa8-42539c32e911-xtables-lock\") pod \"kube-proxy-zng7n\" (UID: \"0c91ebd6-1c03-4d12-baa8-42539c32e911\") " pod="kube-system/kube-proxy-zng7n" Sep 9 02:21:56.765686 kubelet[2877]: I0909 02:21:56.763993 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cni-path\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765686 kubelet[2877]: I0909 02:21:56.764018 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-lib-modules\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.765686 kubelet[2877]: I0909 02:21:56.764060 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9485dcc-774e-4477-86d0-653dadf63239-cilium-config-path\") pod \"cilium-9chqf\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " pod="kube-system/cilium-9chqf" Sep 9 02:21:56.789859 systemd[1]: Created slice kubepods-besteffort-pode4e358af_48b0_48bc_9d8f_6cb6f70a24c3.slice - libcontainer container kubepods-besteffort-pode4e358af_48b0_48bc_9d8f_6cb6f70a24c3.slice. 
Sep 9 02:21:56.865374 kubelet[2877]: I0909 02:21:56.865045 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-99mlv\" (UID: \"e4e358af-48b0-48bc-9d8f-6cb6f70a24c3\") " pod="kube-system/cilium-operator-6c4d7847fc-99mlv" Sep 9 02:21:56.867024 kubelet[2877]: I0909 02:21:56.866996 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdwg\" (UniqueName: \"kubernetes.io/projected/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-kube-api-access-5tdwg\") pod \"cilium-operator-6c4d7847fc-99mlv\" (UID: \"e4e358af-48b0-48bc-9d8f-6cb6f70a24c3\") " pod="kube-system/cilium-operator-6c4d7847fc-99mlv" Sep 9 02:21:56.985153 containerd[1608]: time="2025-09-09T02:21:56.985071546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zng7n,Uid:0c91ebd6-1c03-4d12-baa8-42539c32e911,Namespace:kube-system,Attempt:0,}" Sep 9 02:21:56.998505 containerd[1608]: time="2025-09-09T02:21:56.998458912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9chqf,Uid:b9485dcc-774e-4477-86d0-653dadf63239,Namespace:kube-system,Attempt:0,}" Sep 9 02:21:57.033964 containerd[1608]: time="2025-09-09T02:21:57.033904528Z" level=info msg="connecting to shim b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124" address="unix:///run/containerd/s/98bba84d1212c5a709351a948a5265d051c97cc049177591f990b850f221f89a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:21:57.039492 containerd[1608]: time="2025-09-09T02:21:57.039423817Z" level=info msg="connecting to shim 569ed54ca1326958b29a2406244465a9501483d6365d1a68002a4bdf50d935fa" address="unix:///run/containerd/s/9b36e68f65b9ba54c543ba05b9f22cb66d089fa62d9a1c56388911881a67a9a1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:21:57.074589 systemd[1]: Started cri-containerd-b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124.scope - libcontainer container b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124. Sep 9 02:21:57.087559 systemd[1]: Started cri-containerd-569ed54ca1326958b29a2406244465a9501483d6365d1a68002a4bdf50d935fa.scope - libcontainer container 569ed54ca1326958b29a2406244465a9501483d6365d1a68002a4bdf50d935fa. 
Sep 9 02:21:57.096529 containerd[1608]: time="2025-09-09T02:21:57.096280425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-99mlv,Uid:e4e358af-48b0-48bc-9d8f-6cb6f70a24c3,Namespace:kube-system,Attempt:0,}" Sep 9 02:21:57.140376 containerd[1608]: time="2025-09-09T02:21:57.140328697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9chqf,Uid:b9485dcc-774e-4477-86d0-653dadf63239,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\"" Sep 9 02:21:57.146593 containerd[1608]: time="2025-09-09T02:21:57.145742528Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 02:21:57.159254 containerd[1608]: time="2025-09-09T02:21:57.159171265Z" level=info msg="connecting to shim 99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67" address="unix:///run/containerd/s/f21a78be6fbfcd9842f442780408b8e28622238dced0c12056f0f6f3b118fd70" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:21:57.187167 containerd[1608]: time="2025-09-09T02:21:57.187030681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zng7n,Uid:0c91ebd6-1c03-4d12-baa8-42539c32e911,Namespace:kube-system,Attempt:0,} returns sandbox id \"569ed54ca1326958b29a2406244465a9501483d6365d1a68002a4bdf50d935fa\"" Sep 9 02:21:57.195271 containerd[1608]: time="2025-09-09T02:21:57.194801692Z" level=info msg="CreateContainer within sandbox \"569ed54ca1326958b29a2406244465a9501483d6365d1a68002a4bdf50d935fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 02:21:57.209165 containerd[1608]: time="2025-09-09T02:21:57.209110362Z" level=info msg="Container 7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:21:57.211482 systemd[1]: Started cri-containerd-99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67.scope - libcontainer container 99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67. Sep 9 02:21:57.225205 containerd[1608]: time="2025-09-09T02:21:57.225143673Z" level=info msg="CreateContainer within sandbox \"569ed54ca1326958b29a2406244465a9501483d6365d1a68002a4bdf50d935fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4\"" Sep 9 02:21:57.225888 containerd[1608]: time="2025-09-09T02:21:57.225852066Z" level=info msg="StartContainer for \"7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4\"" Sep 9 02:21:57.227626 containerd[1608]: time="2025-09-09T02:21:57.227589927Z" level=info msg="connecting to shim 7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4" address="unix:///run/containerd/s/9b36e68f65b9ba54c543ba05b9f22cb66d089fa62d9a1c56388911881a67a9a1" protocol=ttrpc version=3 Sep 9 02:21:57.265612 systemd[1]: Started cri-containerd-7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4.scope - libcontainer container 7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4. 
Sep 9 02:21:57.326662 containerd[1608]: time="2025-09-09T02:21:57.326604658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-99mlv,Uid:e4e358af-48b0-48bc-9d8f-6cb6f70a24c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\"" Sep 9 02:21:57.363184 containerd[1608]: time="2025-09-09T02:21:57.363136291Z" level=info msg="StartContainer for \"7ac2b8fd69f0f136c09bab0d18a7b02ab052b03d5da51b67cc809868356620a4\" returns successfully" Sep 9 02:21:58.772832 kubelet[2877]: I0909 02:21:58.772158 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zng7n" podStartSLOduration=2.772127894 podStartE2EDuration="2.772127894s" podCreationTimestamp="2025-09-09 02:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:21:57.881886366 +0000 UTC m=+7.347044753" watchObservedRunningTime="2025-09-09 02:21:58.772127894 +0000 UTC m=+8.237286269" Sep 9 02:22:04.616034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30591762.mount: Deactivated successfully. Sep 9 02:22:07.901103 containerd[1608]: time="2025-09-09T02:22:07.900977203Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:22:07.902856 containerd[1608]: time="2025-09-09T02:22:07.902795322Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 02:22:07.904028 containerd[1608]: time="2025-09-09T02:22:07.903965529Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:22:07.905762 containerd[1608]: time="2025-09-09T02:22:07.905531288Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.754704206s" Sep 9 02:22:07.905762 containerd[1608]: time="2025-09-09T02:22:07.905577009Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 02:22:07.909397 containerd[1608]: time="2025-09-09T02:22:07.908474845Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 02:22:07.911035 containerd[1608]: time="2025-09-09T02:22:07.910982073Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 02:22:07.931575 containerd[1608]: time="2025-09-09T02:22:07.931520923Z" level=info msg="Container f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:07.946851 containerd[1608]: time="2025-09-09T02:22:07.946738873Z" level=info 
msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\"" Sep 9 02:22:07.947689 containerd[1608]: time="2025-09-09T02:22:07.947467546Z" level=info msg="StartContainer for \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\"" Sep 9 02:22:07.949969 containerd[1608]: time="2025-09-09T02:22:07.949922384Z" level=info msg="connecting to shim f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6" address="unix:///run/containerd/s/98bba84d1212c5a709351a948a5265d051c97cc049177591f990b850f221f89a" protocol=ttrpc version=3 Sep 9 02:22:07.988539 systemd[1]: Started cri-containerd-f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6.scope - libcontainer container f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6. Sep 9 02:22:08.037696 containerd[1608]: time="2025-09-09T02:22:08.037646060Z" level=info msg="StartContainer for \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" returns successfully" Sep 9 02:22:08.057831 systemd[1]: cri-containerd-f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6.scope: Deactivated successfully. Sep 9 02:22:08.123433 containerd[1608]: time="2025-09-09T02:22:08.123365965Z" level=info msg="received exit event container_id:\"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" id:\"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" pid:3290 exited_at:{seconds:1757384528 nanos:61700729}" Sep 9 02:22:08.138912 containerd[1608]: time="2025-09-09T02:22:08.138862601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" id:\"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" pid:3290 exited_at:{seconds:1757384528 nanos:61700729}" Sep 9 02:22:08.168177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6-rootfs.mount: Deactivated successfully. Sep 9 02:22:08.924385 containerd[1608]: time="2025-09-09T02:22:08.924324856Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 02:22:08.948857 containerd[1608]: time="2025-09-09T02:22:08.946745401Z" level=info msg="Container 7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:08.952622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476291682.mount: Deactivated successfully. 
Sep 9 02:22:08.959975 containerd[1608]: time="2025-09-09T02:22:08.959919748Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\"" Sep 9 02:22:08.961511 containerd[1608]: time="2025-09-09T02:22:08.961463627Z" level=info msg="StartContainer for \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\"" Sep 9 02:22:08.963137 containerd[1608]: time="2025-09-09T02:22:08.962935427Z" level=info msg="connecting to shim 7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca" address="unix:///run/containerd/s/98bba84d1212c5a709351a948a5265d051c97cc049177591f990b850f221f89a" protocol=ttrpc version=3 Sep 9 02:22:09.001425 systemd[1]: Started cri-containerd-7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca.scope - libcontainer container 7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca. Sep 9 02:22:09.045999 containerd[1608]: time="2025-09-09T02:22:09.045928346Z" level=info msg="StartContainer for \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" returns successfully" Sep 9 02:22:09.065495 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 02:22:09.065887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 02:22:09.066405 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 02:22:09.069760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 02:22:09.073614 containerd[1608]: time="2025-09-09T02:22:09.073507687Z" level=info msg="received exit event container_id:\"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" id:\"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" pid:3337 exited_at:{seconds:1757384529 nanos:72458704}" Sep 9 02:22:09.074414 containerd[1608]: time="2025-09-09T02:22:09.074363254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" id:\"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" pid:3337 exited_at:{seconds:1757384529 nanos:72458704}" Sep 9 02:22:09.074911 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 02:22:09.076776 systemd[1]: cri-containerd-7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca.scope: Deactivated successfully. Sep 9 02:22:09.134769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 02:22:09.930802 containerd[1608]: time="2025-09-09T02:22:09.930686150Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 02:22:09.945405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca-rootfs.mount: Deactivated successfully. Sep 9 02:22:09.970387 containerd[1608]: time="2025-09-09T02:22:09.970324449Z" level=info msg="Container cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:09.979439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1100563847.mount: Deactivated successfully. 
Sep 9 02:22:10.001199 containerd[1608]: time="2025-09-09T02:22:10.001147307Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\"" Sep 9 02:22:10.004074 containerd[1608]: time="2025-09-09T02:22:10.004043185Z" level=info msg="StartContainer for \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\"" Sep 9 02:22:10.007517 containerd[1608]: time="2025-09-09T02:22:10.007485122Z" level=info msg="connecting to shim cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f" address="unix:///run/containerd/s/98bba84d1212c5a709351a948a5265d051c97cc049177591f990b850f221f89a" protocol=ttrpc version=3 Sep 9 02:22:10.059717 systemd[1]: Started cri-containerd-cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f.scope - libcontainer container cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f. Sep 9 02:22:10.154797 systemd[1]: cri-containerd-cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f.scope: Deactivated successfully. Sep 9 02:22:10.164512 containerd[1608]: time="2025-09-09T02:22:10.164437854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" id:\"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" pid:3397 exited_at:{seconds:1757384530 nanos:159711790}" Sep 9 02:22:10.172285 containerd[1608]: time="2025-09-09T02:22:10.172097136Z" level=info msg="received exit event container_id:\"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" id:\"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" pid:3397 exited_at:{seconds:1757384530 nanos:159711790}" Sep 9 02:22:10.196903 containerd[1608]: time="2025-09-09T02:22:10.196775259Z" level=info msg="StartContainer for \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" returns successfully" Sep 9 02:22:10.229612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f-rootfs.mount: Deactivated successfully. 
Sep 9 02:22:10.865941 containerd[1608]: time="2025-09-09T02:22:10.865725902Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:22:10.867635 containerd[1608]: time="2025-09-09T02:22:10.866975410Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 02:22:10.868559 containerd[1608]: time="2025-09-09T02:22:10.868196551Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 02:22:10.870981 containerd[1608]: time="2025-09-09T02:22:10.870108319Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.960709652s" Sep 9 02:22:10.870981 containerd[1608]: time="2025-09-09T02:22:10.870152126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 02:22:10.875475 containerd[1608]: time="2025-09-09T02:22:10.875359790Z" level=info msg="CreateContainer within sandbox \"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 02:22:10.884131 containerd[1608]: time="2025-09-09T02:22:10.884045934Z" level=info msg="Container 71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:10.902008 containerd[1608]: time="2025-09-09T02:22:10.901884497Z" level=info msg="CreateContainer within sandbox \"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\"" Sep 9 02:22:10.904762 containerd[1608]: time="2025-09-09T02:22:10.904565989Z" level=info msg="StartContainer for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\"" Sep 9 02:22:10.905971 containerd[1608]: time="2025-09-09T02:22:10.905903402Z" level=info msg="connecting to shim 71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0" address="unix:///run/containerd/s/f21a78be6fbfcd9842f442780408b8e28622238dced0c12056f0f6f3b118fd70" protocol=ttrpc version=3 Sep 9 02:22:10.932468 systemd[1]: Started cri-containerd-71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0.scope - libcontainer container 71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0. Sep 9 02:22:10.960457 containerd[1608]: time="2025-09-09T02:22:10.960383844Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 02:22:11.007084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199440668.mount: Deactivated successfully. 
Sep 9 02:22:11.009913 containerd[1608]: time="2025-09-09T02:22:11.007725608Z" level=info msg="Container 35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:11.023430 containerd[1608]: time="2025-09-09T02:22:11.022780705Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\"" Sep 9 02:22:11.025753 containerd[1608]: time="2025-09-09T02:22:11.025719039Z" level=info msg="StartContainer for \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\"" Sep 9 02:22:11.030441 containerd[1608]: time="2025-09-09T02:22:11.030397749Z" level=info msg="connecting to shim 35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31" address="unix:///run/containerd/s/98bba84d1212c5a709351a948a5265d051c97cc049177591f990b850f221f89a" protocol=ttrpc version=3 Sep 9 02:22:11.044329 containerd[1608]: time="2025-09-09T02:22:11.043319401Z" level=info msg="StartContainer for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" returns successfully" Sep 9 02:22:11.097450 systemd[1]: Started cri-containerd-35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31.scope - libcontainer container 35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31. Sep 9 02:22:11.172777 systemd[1]: cri-containerd-35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31.scope: Deactivated successfully. Sep 9 02:22:11.175762 containerd[1608]: time="2025-09-09T02:22:11.175569030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" id:\"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" pid:3473 exited_at:{seconds:1757384531 nanos:174502594}" Sep 9 02:22:11.201981 containerd[1608]: time="2025-09-09T02:22:11.201802946Z" level=info msg="received exit event container_id:\"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" id:\"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" pid:3473 exited_at:{seconds:1757384531 nanos:174502594}" Sep 9 02:22:11.234435 containerd[1608]: time="2025-09-09T02:22:11.234041316Z" level=info msg="StartContainer for \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" returns successfully" Sep 9 02:22:11.945310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31-rootfs.mount: Deactivated successfully. Sep 9 02:22:11.975037 containerd[1608]: time="2025-09-09T02:22:11.974971812Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 02:22:12.007658 containerd[1608]: time="2025-09-09T02:22:12.004963216Z" level=info msg="Container e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:12.012256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408632175.mount: Deactivated successfully. 
Sep 9 02:22:12.047018 containerd[1608]: time="2025-09-09T02:22:12.046941455Z" level=info msg="CreateContainer within sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\"" Sep 9 02:22:12.049426 containerd[1608]: time="2025-09-09T02:22:12.049342349Z" level=info msg="StartContainer for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\"" Sep 9 02:22:12.052671 containerd[1608]: time="2025-09-09T02:22:12.052615631Z" level=info msg="connecting to shim e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527" address="unix:///run/containerd/s/98bba84d1212c5a709351a948a5265d051c97cc049177591f990b850f221f89a" protocol=ttrpc version=3 Sep 9 02:22:12.110591 systemd[1]: Started cri-containerd-e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527.scope - libcontainer container e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527. Sep 9 02:22:12.289817 containerd[1608]: time="2025-09-09T02:22:12.289755093Z" level=info msg="StartContainer for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" returns successfully" Sep 9 02:22:12.532848 containerd[1608]: time="2025-09-09T02:22:12.532783320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" id:\"390f6da9a9237da21c081b3dbe0ac7cc95ef98fbb51b159106c9e1d5472d193f\" pid:3546 exited_at:{seconds:1757384532 nanos:529278937}" Sep 9 02:22:12.564374 kubelet[2877]: I0909 02:22:12.563846 2877 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 02:22:12.618336 kubelet[2877]: I0909 02:22:12.618248 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-99mlv" podStartSLOduration=3.072783601 podStartE2EDuration="16.617935029s" podCreationTimestamp="2025-09-09 02:21:56 +0000 UTC" firstStartedPulling="2025-09-09 02:21:57.327964991 +0000 UTC m=+6.793123346" lastFinishedPulling="2025-09-09 02:22:10.873116414 +0000 UTC m=+20.338274774" observedRunningTime="2025-09-09 02:22:12.181656312 +0000 UTC m=+21.646814693" watchObservedRunningTime="2025-09-09 02:22:12.617935029 +0000 UTC m=+22.083093405" Sep 9 02:22:12.635953 systemd[1]: Created slice kubepods-burstable-pod994c99e1_4980_4cca_b525_6621b9d95e8d.slice - libcontainer container kubepods-burstable-pod994c99e1_4980_4cca_b525_6621b9d95e8d.slice. Sep 9 02:22:12.648759 systemd[1]: Created slice kubepods-burstable-podc50d17c8_1cff_4a12_b3a2_ccc92057080e.slice - libcontainer container kubepods-burstable-podc50d17c8_1cff_4a12_b3a2_ccc92057080e.slice. 
Sep 9 02:22:12.705988 kubelet[2877]: I0909 02:22:12.705931 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhqg\" (UniqueName: \"kubernetes.io/projected/c50d17c8-1cff-4a12-b3a2-ccc92057080e-kube-api-access-llhqg\") pod \"coredns-668d6bf9bc-gbwrv\" (UID: \"c50d17c8-1cff-4a12-b3a2-ccc92057080e\") " pod="kube-system/coredns-668d6bf9bc-gbwrv" Sep 9 02:22:12.706181 kubelet[2877]: I0909 02:22:12.705999 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994c99e1-4980-4cca-b525-6621b9d95e8d-config-volume\") pod \"coredns-668d6bf9bc-mlk2p\" (UID: \"994c99e1-4980-4cca-b525-6621b9d95e8d\") " pod="kube-system/coredns-668d6bf9bc-mlk2p" Sep 9 02:22:12.706181 kubelet[2877]: I0909 02:22:12.706037 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c50d17c8-1cff-4a12-b3a2-ccc92057080e-config-volume\") pod \"coredns-668d6bf9bc-gbwrv\" (UID: \"c50d17c8-1cff-4a12-b3a2-ccc92057080e\") " pod="kube-system/coredns-668d6bf9bc-gbwrv" Sep 9 02:22:12.706181 kubelet[2877]: I0909 02:22:12.706069 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brb2s\" (UniqueName: \"kubernetes.io/projected/994c99e1-4980-4cca-b525-6621b9d95e8d-kube-api-access-brb2s\") pod \"coredns-668d6bf9bc-mlk2p\" (UID: \"994c99e1-4980-4cca-b525-6621b9d95e8d\") " pod="kube-system/coredns-668d6bf9bc-mlk2p" Sep 9 02:22:12.945130 containerd[1608]: time="2025-09-09T02:22:12.944921391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mlk2p,Uid:994c99e1-4980-4cca-b525-6621b9d95e8d,Namespace:kube-system,Attempt:0,}" Sep 9 02:22:12.961180 containerd[1608]: time="2025-09-09T02:22:12.960666969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gbwrv,Uid:c50d17c8-1cff-4a12-b3a2-ccc92057080e,Namespace:kube-system,Attempt:0,}" Sep 9 02:22:13.122437 kubelet[2877]: I0909 02:22:13.122113 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9chqf" podStartSLOduration=6.359126551 podStartE2EDuration="17.122093651s" podCreationTimestamp="2025-09-09 02:21:56 +0000 UTC" firstStartedPulling="2025-09-09 02:21:57.144424168 +0000 UTC m=+6.609582528" lastFinishedPulling="2025-09-09 02:22:07.90739126 +0000 UTC m=+17.372549628" observedRunningTime="2025-09-09 02:22:13.121894923 +0000 UTC m=+22.587053308" watchObservedRunningTime="2025-09-09 02:22:13.122093651 +0000 UTC m=+22.587252019" Sep 9 02:22:15.329281 systemd-networkd[1510]: cilium_host: Link UP Sep 9 02:22:15.330186 systemd-networkd[1510]: cilium_net: Link UP Sep 9 02:22:15.332007 systemd-networkd[1510]: cilium_net: Gained carrier Sep 9 02:22:15.332653 systemd-networkd[1510]: cilium_host: Gained carrier Sep 9 02:22:15.508370 systemd-networkd[1510]: cilium_vxlan: Link UP Sep 9 02:22:15.508384 systemd-networkd[1510]: cilium_vxlan: Gained carrier Sep 9 02:22:15.634437 systemd-networkd[1510]: cilium_host: Gained IPv6LL Sep 9 02:22:16.038276 kernel: NET: Registered PF_ALG protocol family Sep 9 02:22:16.306535 systemd-networkd[1510]: cilium_net: Gained IPv6LL Sep 9 02:22:17.165880 systemd-networkd[1510]: lxc_health: Link UP Sep 9 02:22:17.173918 systemd-networkd[1510]: lxc_health: Gained carrier Sep 9 02:22:17.330420 systemd-networkd[1510]: cilium_vxlan: Gained IPv6LL Sep 9 
02:22:17.542809 systemd-networkd[1510]: lxc8431a2bf4503: Link UP Sep 9 02:22:17.554239 kernel: eth0: renamed from tmpf134d Sep 9 02:22:17.557732 systemd-networkd[1510]: lxc8431a2bf4503: Gained carrier Sep 9 02:22:17.593106 systemd-networkd[1510]: lxc5a54442bd3d1: Link UP Sep 9 02:22:17.600260 kernel: eth0: renamed from tmpafd26 Sep 9 02:22:17.606960 systemd-networkd[1510]: lxc5a54442bd3d1: Gained carrier Sep 9 02:22:18.355353 systemd-networkd[1510]: lxc_health: Gained IPv6LL Sep 9 02:22:19.058925 systemd-networkd[1510]: lxc8431a2bf4503: Gained IPv6LL Sep 9 02:22:19.442460 systemd-networkd[1510]: lxc5a54442bd3d1: Gained IPv6LL Sep 9 02:22:23.409616 containerd[1608]: time="2025-09-09T02:22:23.409467770Z" level=info msg="connecting to shim afd2618c56809dcb66d03b48ec2d372dd8b59e355dcc6736f39c5abc9e96ca60" address="unix:///run/containerd/s/5f44c90a9e7b77f81e1e45db9339071d0b252b7fa92e36ac1895e0384c157d61" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:22:23.449189 containerd[1608]: time="2025-09-09T02:22:23.448446006Z" level=info msg="connecting to shim f134dfa1854963390aaf24ca2df8edf8505935bacf11c4af3ad044e8be0b2c1a" address="unix:///run/containerd/s/bf00e2a9b2c70dece1e9b2aca7a88e94733151304283d58157176a9ccad20740" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:22:23.488444 systemd[1]: Started cri-containerd-afd2618c56809dcb66d03b48ec2d372dd8b59e355dcc6736f39c5abc9e96ca60.scope - libcontainer container afd2618c56809dcb66d03b48ec2d372dd8b59e355dcc6736f39c5abc9e96ca60. Sep 9 02:22:23.512461 systemd[1]: Started cri-containerd-f134dfa1854963390aaf24ca2df8edf8505935bacf11c4af3ad044e8be0b2c1a.scope - libcontainer container f134dfa1854963390aaf24ca2df8edf8505935bacf11c4af3ad044e8be0b2c1a. Sep 9 02:22:23.650486 containerd[1608]: time="2025-09-09T02:22:23.650416464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mlk2p,Uid:994c99e1-4980-4cca-b525-6621b9d95e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f134dfa1854963390aaf24ca2df8edf8505935bacf11c4af3ad044e8be0b2c1a\"" Sep 9 02:22:23.669277 containerd[1608]: time="2025-09-09T02:22:23.668993362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gbwrv,Uid:c50d17c8-1cff-4a12-b3a2-ccc92057080e,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd2618c56809dcb66d03b48ec2d372dd8b59e355dcc6736f39c5abc9e96ca60\"" Sep 9 02:22:23.682302 containerd[1608]: time="2025-09-09T02:22:23.681979562Z" level=info msg="CreateContainer within sandbox \"f134dfa1854963390aaf24ca2df8edf8505935bacf11c4af3ad044e8be0b2c1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 02:22:23.682926 containerd[1608]: time="2025-09-09T02:22:23.682773655Z" level=info msg="CreateContainer within sandbox \"afd2618c56809dcb66d03b48ec2d372dd8b59e355dcc6736f39c5abc9e96ca60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 02:22:23.709390 containerd[1608]: time="2025-09-09T02:22:23.709093520Z" level=info msg="Container ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:23.709946 containerd[1608]: time="2025-09-09T02:22:23.709307492Z" level=info msg="Container c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:22:23.731514 containerd[1608]: time="2025-09-09T02:22:23.731345030Z" level=info msg="CreateContainer within sandbox \"f134dfa1854963390aaf24ca2df8edf8505935bacf11c4af3ad044e8be0b2c1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685\"" Sep 9 02:22:23.731865 containerd[1608]: time="2025-09-09T02:22:23.731732815Z" level=info msg="CreateContainer within sandbox \"afd2618c56809dcb66d03b48ec2d372dd8b59e355dcc6736f39c5abc9e96ca60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb\"" Sep 9 02:22:23.733844 containerd[1608]: time="2025-09-09T02:22:23.733722782Z" level=info msg="StartContainer for \"ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685\"" Sep 9 02:22:23.736779 containerd[1608]: time="2025-09-09T02:22:23.735511106Z" level=info msg="connecting to shim ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685" address="unix:///run/containerd/s/bf00e2a9b2c70dece1e9b2aca7a88e94733151304283d58157176a9ccad20740" protocol=ttrpc version=3 Sep 9 02:22:23.737109 containerd[1608]: time="2025-09-09T02:22:23.734125738Z" level=info msg="StartContainer for \"c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb\"" Sep 9 02:22:23.752346 containerd[1608]: time="2025-09-09T02:22:23.752197584Z" level=info msg="connecting to shim c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb" address="unix:///run/containerd/s/5f44c90a9e7b77f81e1e45db9339071d0b252b7fa92e36ac1895e0384c157d61" protocol=ttrpc version=3 Sep 9 02:22:23.781618 systemd[1]: Started cri-containerd-ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685.scope - libcontainer container ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685. Sep 9 02:22:23.802405 systemd[1]: Started cri-containerd-c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb.scope - libcontainer container c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb. Sep 9 02:22:23.861903 containerd[1608]: time="2025-09-09T02:22:23.861619021Z" level=info msg="StartContainer for \"ab9917bfabe42111862755b9a0c77540defeb6f29ded1febaa7dd0d5d49b4685\" returns successfully" Sep 9 02:22:23.885106 containerd[1608]: time="2025-09-09T02:22:23.885055293Z" level=info msg="StartContainer for \"c308e443052cbd4440066372ff972be78c57db3cda1e2feb776af7ef8e19e0cb\" returns successfully" Sep 9 02:22:24.218579 kubelet[2877]: I0909 02:22:24.218447 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gbwrv" podStartSLOduration=28.218347553 podStartE2EDuration="28.218347553s" podCreationTimestamp="2025-09-09 02:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:22:24.215657943 +0000 UTC m=+33.680816326" watchObservedRunningTime="2025-09-09 02:22:24.218347553 +0000 UTC m=+33.683505926" Sep 9 02:22:24.237428 kubelet[2877]: I0909 02:22:24.236898 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mlk2p" podStartSLOduration=28.236874535 podStartE2EDuration="28.236874535s" podCreationTimestamp="2025-09-09 02:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:22:24.235456265 +0000 UTC m=+33.700614654" watchObservedRunningTime="2025-09-09 02:22:24.236874535 +0000 UTC m=+33.702032921" Sep 9 02:22:24.362434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441511796.mount: Deactivated successfully. 
Sep 9 02:23:08.003439 systemd[1]: Started sshd@9-10.230.31.10:22-139.178.68.195:51502.service - OpenSSH per-connection server daemon (139.178.68.195:51502). Sep 9 02:23:08.976173 sshd[4205]: Accepted publickey for core from 139.178.68.195 port 51502 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:08.978968 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:08.993769 systemd-logind[1583]: New session 12 of user core. Sep 9 02:23:08.997423 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 02:23:10.147594 sshd[4208]: Connection closed by 139.178.68.195 port 51502 Sep 9 02:23:10.148248 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:10.154400 systemd[1]: sshd@9-10.230.31.10:22-139.178.68.195:51502.service: Deactivated successfully. Sep 9 02:23:10.158568 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 02:23:10.160121 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit. Sep 9 02:23:10.162555 systemd-logind[1583]: Removed session 12. Sep 9 02:23:15.307489 systemd[1]: Started sshd@10-10.230.31.10:22-139.178.68.195:59818.service - OpenSSH per-connection server daemon (139.178.68.195:59818). Sep 9 02:23:16.232345 sshd[4223]: Accepted publickey for core from 139.178.68.195 port 59818 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:16.234446 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:16.241673 systemd-logind[1583]: New session 13 of user core. Sep 9 02:23:16.249514 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 02:23:16.952283 sshd[4225]: Connection closed by 139.178.68.195 port 59818 Sep 9 02:23:16.953241 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:16.959204 systemd[1]: sshd@10-10.230.31.10:22-139.178.68.195:59818.service: Deactivated successfully. Sep 9 02:23:16.962633 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 02:23:16.964448 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. Sep 9 02:23:16.966676 systemd-logind[1583]: Removed session 13. Sep 9 02:23:22.114311 systemd[1]: Started sshd@11-10.230.31.10:22-139.178.68.195:38342.service - OpenSSH per-connection server daemon (139.178.68.195:38342). Sep 9 02:23:23.068399 sshd[4240]: Accepted publickey for core from 139.178.68.195 port 38342 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:23.071486 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:23.079361 systemd-logind[1583]: New session 14 of user core. Sep 9 02:23:23.088549 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 02:23:23.793254 sshd[4242]: Connection closed by 139.178.68.195 port 38342 Sep 9 02:23:23.792569 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:23.798131 systemd[1]: sshd@11-10.230.31.10:22-139.178.68.195:38342.service: Deactivated successfully. Sep 9 02:23:23.801675 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 02:23:23.804199 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Sep 9 02:23:23.806399 systemd-logind[1583]: Removed session 14. Sep 9 02:23:28.950300 systemd[1]: Started sshd@12-10.230.31.10:22-139.178.68.195:38352.service - OpenSSH per-connection server daemon (139.178.68.195:38352). 
Sep 9 02:23:29.917999 sshd[4257]: Accepted publickey for core from 139.178.68.195 port 38352 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:29.920419 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:29.929507 systemd-logind[1583]: New session 15 of user core. Sep 9 02:23:29.936545 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 02:23:30.622885 sshd[4259]: Connection closed by 139.178.68.195 port 38352 Sep 9 02:23:30.623831 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:30.630980 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Sep 9 02:23:30.631419 systemd[1]: sshd@12-10.230.31.10:22-139.178.68.195:38352.service: Deactivated successfully. Sep 9 02:23:30.633843 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 02:23:30.636697 systemd-logind[1583]: Removed session 15. Sep 9 02:23:30.783116 systemd[1]: Started sshd@13-10.230.31.10:22-139.178.68.195:42066.service - OpenSSH per-connection server daemon (139.178.68.195:42066). Sep 9 02:23:31.702266 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 42066 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:31.704564 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:31.712158 systemd-logind[1583]: New session 16 of user core. Sep 9 02:23:31.716785 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 02:23:32.493249 sshd[4274]: Connection closed by 139.178.68.195 port 42066 Sep 9 02:23:32.493082 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:32.498115 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit. Sep 9 02:23:32.499613 systemd[1]: sshd@13-10.230.31.10:22-139.178.68.195:42066.service: Deactivated successfully. Sep 9 02:23:32.502086 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 02:23:32.504617 systemd-logind[1583]: Removed session 16. Sep 9 02:23:32.654045 systemd[1]: Started sshd@14-10.230.31.10:22-139.178.68.195:42080.service - OpenSSH per-connection server daemon (139.178.68.195:42080). Sep 9 02:23:33.568072 sshd[4284]: Accepted publickey for core from 139.178.68.195 port 42080 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:33.569867 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:33.578061 systemd-logind[1583]: New session 17 of user core. Sep 9 02:23:33.587613 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 02:23:34.277206 sshd[4286]: Connection closed by 139.178.68.195 port 42080 Sep 9 02:23:34.278116 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:34.283744 systemd[1]: sshd@14-10.230.31.10:22-139.178.68.195:42080.service: Deactivated successfully. Sep 9 02:23:34.286920 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 02:23:34.289138 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit. Sep 9 02:23:34.290906 systemd-logind[1583]: Removed session 17. Sep 9 02:23:39.434086 systemd[1]: Started sshd@15-10.230.31.10:22-139.178.68.195:42082.service - OpenSSH per-connection server daemon (139.178.68.195:42082). 
Sep 9 02:23:40.339088 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 42082 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:40.341064 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:40.348345 systemd-logind[1583]: New session 18 of user core. Sep 9 02:23:40.356474 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 02:23:41.055420 sshd[4299]: Connection closed by 139.178.68.195 port 42082 Sep 9 02:23:41.056266 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:41.061454 systemd[1]: sshd@15-10.230.31.10:22-139.178.68.195:42082.service: Deactivated successfully. Sep 9 02:23:41.063911 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 02:23:41.065873 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit. Sep 9 02:23:41.068578 systemd-logind[1583]: Removed session 18. Sep 9 02:23:46.214582 systemd[1]: Started sshd@16-10.230.31.10:22-139.178.68.195:39566.service - OpenSSH per-connection server daemon (139.178.68.195:39566). Sep 9 02:23:47.130024 sshd[4310]: Accepted publickey for core from 139.178.68.195 port 39566 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:47.132155 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:47.140718 systemd-logind[1583]: New session 19 of user core. Sep 9 02:23:47.146472 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 02:23:47.842836 sshd[4312]: Connection closed by 139.178.68.195 port 39566 Sep 9 02:23:47.843866 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:47.850630 systemd[1]: sshd@16-10.230.31.10:22-139.178.68.195:39566.service: Deactivated successfully. Sep 9 02:23:47.853481 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 02:23:47.854942 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit. Sep 9 02:23:47.857433 systemd-logind[1583]: Removed session 19. Sep 9 02:23:48.000139 systemd[1]: Started sshd@17-10.230.31.10:22-139.178.68.195:39568.service - OpenSSH per-connection server daemon (139.178.68.195:39568). Sep 9 02:23:48.908456 sshd[4323]: Accepted publickey for core from 139.178.68.195 port 39568 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:48.910587 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:48.919385 systemd-logind[1583]: New session 20 of user core. Sep 9 02:23:48.925527 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 02:23:49.959808 sshd[4325]: Connection closed by 139.178.68.195 port 39568 Sep 9 02:23:49.960742 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:49.966082 systemd[1]: sshd@17-10.230.31.10:22-139.178.68.195:39568.service: Deactivated successfully. Sep 9 02:23:49.968590 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 02:23:49.970124 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit. Sep 9 02:23:49.973070 systemd-logind[1583]: Removed session 20. Sep 9 02:23:50.123631 systemd[1]: Started sshd@18-10.230.31.10:22-139.178.68.195:39570.service - OpenSSH per-connection server daemon (139.178.68.195:39570). 
Sep 9 02:23:51.041155 sshd[4335]: Accepted publickey for core from 139.178.68.195 port 39570 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:51.043196 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:51.050128 systemd-logind[1583]: New session 21 of user core. Sep 9 02:23:51.059502 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 02:23:52.474956 sshd[4339]: Connection closed by 139.178.68.195 port 39570 Sep 9 02:23:52.476172 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:52.481009 systemd[1]: sshd@18-10.230.31.10:22-139.178.68.195:39570.service: Deactivated successfully. Sep 9 02:23:52.484623 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 02:23:52.487303 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit. Sep 9 02:23:52.489321 systemd-logind[1583]: Removed session 21. Sep 9 02:23:52.631683 systemd[1]: Started sshd@19-10.230.31.10:22-139.178.68.195:39242.service - OpenSSH per-connection server daemon (139.178.68.195:39242). Sep 9 02:23:53.572976 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 39242 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:53.575012 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:53.581897 systemd-logind[1583]: New session 22 of user core. Sep 9 02:23:53.593456 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 02:23:54.536690 sshd[4359]: Connection closed by 139.178.68.195 port 39242 Sep 9 02:23:54.538015 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:54.550013 systemd[1]: sshd@19-10.230.31.10:22-139.178.68.195:39242.service: Deactivated successfully. Sep 9 02:23:54.553384 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 02:23:54.556409 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit. Sep 9 02:23:54.559275 systemd-logind[1583]: Removed session 22. Sep 9 02:23:54.692931 systemd[1]: Started sshd@20-10.230.31.10:22-139.178.68.195:39246.service - OpenSSH per-connection server daemon (139.178.68.195:39246). Sep 9 02:23:55.617184 sshd[4368]: Accepted publickey for core from 139.178.68.195 port 39246 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:23:55.621171 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:23:55.630171 systemd-logind[1583]: New session 23 of user core. Sep 9 02:23:55.636453 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 02:23:56.318150 sshd[4370]: Connection closed by 139.178.68.195 port 39246 Sep 9 02:23:56.318019 sshd-session[4368]: pam_unix(sshd:session): session closed for user core Sep 9 02:23:56.323497 systemd[1]: sshd@20-10.230.31.10:22-139.178.68.195:39246.service: Deactivated successfully. Sep 9 02:23:56.325830 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 02:23:56.327381 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit. Sep 9 02:23:56.330031 systemd-logind[1583]: Removed session 23. Sep 9 02:24:01.479565 systemd[1]: Started sshd@21-10.230.31.10:22-139.178.68.195:58502.service - OpenSSH per-connection server daemon (139.178.68.195:58502). 
Sep 9 02:24:02.391665 sshd[4386]: Accepted publickey for core from 139.178.68.195 port 58502 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:02.394185 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:02.406647 systemd-logind[1583]: New session 24 of user core. Sep 9 02:24:02.412621 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 02:24:03.091534 sshd[4388]: Connection closed by 139.178.68.195 port 58502 Sep 9 02:24:03.092415 sshd-session[4386]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:03.097776 systemd[1]: sshd@21-10.230.31.10:22-139.178.68.195:58502.service: Deactivated successfully. Sep 9 02:24:03.100827 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 02:24:03.102256 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit. Sep 9 02:24:03.104738 systemd-logind[1583]: Removed session 24. Sep 9 02:24:08.249957 systemd[1]: Started sshd@22-10.230.31.10:22-139.178.68.195:58518.service - OpenSSH per-connection server daemon (139.178.68.195:58518). Sep 9 02:24:09.161093 sshd[4399]: Accepted publickey for core from 139.178.68.195 port 58518 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:09.163332 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:09.170722 systemd-logind[1583]: New session 25 of user core. Sep 9 02:24:09.181572 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 02:24:09.908483 sshd[4401]: Connection closed by 139.178.68.195 port 58518 Sep 9 02:24:09.909358 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:09.915295 systemd[1]: sshd@22-10.230.31.10:22-139.178.68.195:58518.service: Deactivated successfully. Sep 9 02:24:09.919832 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 02:24:09.921605 systemd-logind[1583]: Session 25 logged out. Waiting for processes to exit. Sep 9 02:24:09.923776 systemd-logind[1583]: Removed session 25. Sep 9 02:24:15.067648 systemd[1]: Started sshd@23-10.230.31.10:22-139.178.68.195:33584.service - OpenSSH per-connection server daemon (139.178.68.195:33584). Sep 9 02:24:15.996945 sshd[4413]: Accepted publickey for core from 139.178.68.195 port 33584 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:15.999429 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:16.007421 systemd-logind[1583]: New session 26 of user core. Sep 9 02:24:16.012442 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 02:24:16.715973 sshd[4415]: Connection closed by 139.178.68.195 port 33584 Sep 9 02:24:16.716808 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:16.723043 systemd[1]: sshd@23-10.230.31.10:22-139.178.68.195:33584.service: Deactivated successfully. Sep 9 02:24:16.726244 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 02:24:16.728284 systemd-logind[1583]: Session 26 logged out. Waiting for processes to exit. Sep 9 02:24:16.731639 systemd-logind[1583]: Removed session 26. Sep 9 02:24:16.875550 systemd[1]: Started sshd@24-10.230.31.10:22-139.178.68.195:33598.service - OpenSSH per-connection server daemon (139.178.68.195:33598). 
Sep 9 02:24:17.783508 sshd[4427]: Accepted publickey for core from 139.178.68.195 port 33598 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:17.785404 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:17.792437 systemd-logind[1583]: New session 27 of user core. Sep 9 02:24:17.799432 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 02:24:20.145647 containerd[1608]: time="2025-09-09T02:24:20.144195210Z" level=info msg="StopContainer for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" with timeout 30 (s)" Sep 9 02:24:20.147529 containerd[1608]: time="2025-09-09T02:24:20.147302084Z" level=info msg="Stop container \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" with signal terminated" Sep 9 02:24:20.171848 systemd[1]: cri-containerd-71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0.scope: Deactivated successfully. Sep 9 02:24:20.179988 containerd[1608]: time="2025-09-09T02:24:20.179936661Z" level=info msg="received exit event container_id:\"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" id:\"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" pid:3442 exited_at:{seconds:1757384660 nanos:178013365}" Sep 9 02:24:20.180351 containerd[1608]: time="2025-09-09T02:24:20.179873840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" id:\"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" pid:3442 exited_at:{seconds:1757384660 nanos:178013365}" Sep 9 02:24:20.194168 containerd[1608]: time="2025-09-09T02:24:20.194021788Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 02:24:20.203267 containerd[1608]: time="2025-09-09T02:24:20.203168332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" id:\"f2e53ad04388a4ae1711122a9aec96a096cb9618e1bb6f83b747e0dca7eb9493\" pid:4455 exited_at:{seconds:1757384660 nanos:201885008}" Sep 9 02:24:20.206816 containerd[1608]: time="2025-09-09T02:24:20.206746823Z" level=info msg="StopContainer for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" with timeout 2 (s)" Sep 9 02:24:20.207295 containerd[1608]: time="2025-09-09T02:24:20.207245142Z" level=info msg="Stop container \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" with signal terminated" Sep 9 02:24:20.230288 systemd-networkd[1510]: lxc_health: Link DOWN Sep 9 02:24:20.230302 systemd-networkd[1510]: lxc_health: Lost carrier Sep 9 02:24:20.232597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0-rootfs.mount: Deactivated successfully. 
Sep 9 02:24:20.251190 containerd[1608]: time="2025-09-09T02:24:20.251107775Z" level=info msg="StopContainer for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" returns successfully" Sep 9 02:24:20.253602 containerd[1608]: time="2025-09-09T02:24:20.253252087Z" level=info msg="StopPodSandbox for \"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\"" Sep 9 02:24:20.253602 containerd[1608]: time="2025-09-09T02:24:20.253351118Z" level=info msg="Container to stop \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 02:24:20.256464 systemd[1]: cri-containerd-e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527.scope: Deactivated successfully. Sep 9 02:24:20.256905 systemd[1]: cri-containerd-e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527.scope: Consumed 10.388s CPU time, 198.6M memory peak, 76.8M read from disk, 13.3M written to disk. Sep 9 02:24:20.268234 containerd[1608]: time="2025-09-09T02:24:20.268025992Z" level=info msg="received exit event container_id:\"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" id:\"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" pid:3516 exited_at:{seconds:1757384660 nanos:267438987}" Sep 9 02:24:20.268905 containerd[1608]: time="2025-09-09T02:24:20.268869378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" id:\"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" pid:3516 exited_at:{seconds:1757384660 nanos:267438987}" Sep 9 02:24:20.279841 systemd[1]: cri-containerd-99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67.scope: Deactivated successfully. Sep 9 02:24:20.287045 containerd[1608]: time="2025-09-09T02:24:20.286992535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" id:\"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" pid:3076 exit_status:137 exited_at:{seconds:1757384660 nanos:286685577}" Sep 9 02:24:20.319076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527-rootfs.mount: Deactivated successfully. 
Sep 9 02:24:20.333382 containerd[1608]: time="2025-09-09T02:24:20.333007598Z" level=info msg="StopContainer for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" returns successfully" Sep 9 02:24:20.334149 containerd[1608]: time="2025-09-09T02:24:20.333801315Z" level=info msg="StopPodSandbox for \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\"" Sep 9 02:24:20.334149 containerd[1608]: time="2025-09-09T02:24:20.333873634Z" level=info msg="Container to stop \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 02:24:20.334149 containerd[1608]: time="2025-09-09T02:24:20.333896982Z" level=info msg="Container to stop \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 02:24:20.334149 containerd[1608]: time="2025-09-09T02:24:20.333912413Z" level=info msg="Container to stop \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 02:24:20.334149 containerd[1608]: time="2025-09-09T02:24:20.333925906Z" level=info msg="Container to stop \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 02:24:20.334149 containerd[1608]: time="2025-09-09T02:24:20.333941072Z" level=info msg="Container to stop \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 02:24:20.353902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67-rootfs.mount: Deactivated successfully. Sep 9 02:24:20.355745 systemd[1]: cri-containerd-b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124.scope: Deactivated successfully. Sep 9 02:24:20.360812 containerd[1608]: time="2025-09-09T02:24:20.360666492Z" level=info msg="shim disconnected" id=99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67 namespace=k8s.io Sep 9 02:24:20.361161 containerd[1608]: time="2025-09-09T02:24:20.360711089Z" level=warning msg="cleaning up after shim disconnected" id=99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67 namespace=k8s.io Sep 9 02:24:20.369539 containerd[1608]: time="2025-09-09T02:24:20.361027615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 02:24:20.397503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124-rootfs.mount: Deactivated successfully. 
Sep 9 02:24:20.401954 containerd[1608]: time="2025-09-09T02:24:20.401885069Z" level=info msg="shim disconnected" id=b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124 namespace=k8s.io Sep 9 02:24:20.402089 containerd[1608]: time="2025-09-09T02:24:20.402044288Z" level=warning msg="cleaning up after shim disconnected" id=b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124 namespace=k8s.io Sep 9 02:24:20.402141 containerd[1608]: time="2025-09-09T02:24:20.402066619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 02:24:20.416239 containerd[1608]: time="2025-09-09T02:24:20.416090739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" id:\"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" pid:3016 exit_status:137 exited_at:{seconds:1757384660 nanos:357971373}" Sep 9 02:24:20.416484 containerd[1608]: time="2025-09-09T02:24:20.416352616Z" level=info msg="received exit event sandbox_id:\"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" exit_status:137 exited_at:{seconds:1757384660 nanos:357971373}" Sep 9 02:24:20.416772 containerd[1608]: time="2025-09-09T02:24:20.416736282Z" level=info msg="received exit event sandbox_id:\"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" exit_status:137 exited_at:{seconds:1757384660 nanos:286685577}" Sep 9 02:24:20.420923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67-shm.mount: Deactivated successfully. Sep 9 02:24:20.430962 containerd[1608]: time="2025-09-09T02:24:20.430444073Z" level=info msg="TearDown network for sandbox \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" successfully" Sep 9 02:24:20.430962 containerd[1608]: time="2025-09-09T02:24:20.430500783Z" level=info msg="StopPodSandbox for \"b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124\" returns successfully" Sep 9 02:24:20.431604 containerd[1608]: time="2025-09-09T02:24:20.431570023Z" level=info msg="TearDown network for sandbox \"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" successfully" Sep 9 02:24:20.432636 containerd[1608]: time="2025-09-09T02:24:20.431817727Z" level=info msg="StopPodSandbox for \"99f8f8dc20d67a02fff3786c9ec1ee861b0cd90c7cf67b20940215ea2009bf67\" returns successfully" Sep 9 02:24:20.473843 kubelet[2877]: I0909 02:24:20.473801 2877 scope.go:117] "RemoveContainer" containerID="71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0" Sep 9 02:24:20.479420 containerd[1608]: time="2025-09-09T02:24:20.477549461Z" level=info msg="RemoveContainer for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\"" Sep 9 02:24:20.493489 containerd[1608]: time="2025-09-09T02:24:20.493430531Z" level=info msg="RemoveContainer for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" returns successfully" Sep 9 02:24:20.496411 kubelet[2877]: I0909 02:24:20.496293 2877 scope.go:117] "RemoveContainer" containerID="71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0" Sep 9 02:24:20.497276 containerd[1608]: time="2025-09-09T02:24:20.497142933Z" level=error msg="ContainerStatus for \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\": not found" Sep 9 02:24:20.497807 
kubelet[2877]: E0909 02:24:20.497668 2877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\": not found" containerID="71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0" Sep 9 02:24:20.498088 kubelet[2877]: I0909 02:24:20.497759 2877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0"} err="failed to get container status \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\": rpc error: code = NotFound desc = an error occurred when try to find container \"71b701b0db189d4a98177c4dc0a50fc5a8c016e8cb7df452ffe4b39b281d5bf0\": not found" Sep 9 02:24:20.498294 kubelet[2877]: I0909 02:24:20.498206 2877 scope.go:117] "RemoveContainer" containerID="e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527" Sep 9 02:24:20.502319 containerd[1608]: time="2025-09-09T02:24:20.501794986Z" level=info msg="RemoveContainer for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\"" Sep 9 02:24:20.518582 containerd[1608]: time="2025-09-09T02:24:20.518343381Z" level=info msg="RemoveContainer for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" returns successfully" Sep 9 02:24:20.519471 kubelet[2877]: I0909 02:24:20.519423 2877 scope.go:117] "RemoveContainer" containerID="35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31" Sep 9 02:24:20.524208 containerd[1608]: time="2025-09-09T02:24:20.524139211Z" level=info msg="RemoveContainer for \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\"" Sep 9 02:24:20.530960 containerd[1608]: time="2025-09-09T02:24:20.530897290Z" level=info msg="RemoveContainer for \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" returns successfully" Sep 9 02:24:20.531385 kubelet[2877]: I0909 02:24:20.531352 2877 scope.go:117] "RemoveContainer" containerID="cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f" Sep 9 02:24:20.534202 containerd[1608]: time="2025-09-09T02:24:20.534168264Z" level=info msg="RemoveContainer for \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\"" Sep 9 02:24:20.538312 containerd[1608]: time="2025-09-09T02:24:20.538260257Z" level=info msg="RemoveContainer for \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" returns successfully" Sep 9 02:24:20.538514 kubelet[2877]: I0909 02:24:20.538440 2877 scope.go:117] "RemoveContainer" containerID="7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca" Sep 9 02:24:20.541073 containerd[1608]: time="2025-09-09T02:24:20.540310413Z" level=info msg="RemoveContainer for \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\"" Sep 9 02:24:20.543958 containerd[1608]: time="2025-09-09T02:24:20.543926925Z" level=info msg="RemoveContainer for \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" returns successfully" Sep 9 02:24:20.544369 kubelet[2877]: I0909 02:24:20.544287 2877 scope.go:117] "RemoveContainer" containerID="f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6" Sep 9 02:24:20.546292 kubelet[2877]: I0909 02:24:20.546266 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-xtables-lock\") pod 
\"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.546609 kubelet[2877]: I0909 02:24:20.546311 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.546712 kubelet[2877]: I0909 02:24:20.546688 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.547022 kubelet[2877]: I0909 02:24:20.546562 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-cgroup\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.547356 containerd[1608]: time="2025-09-09T02:24:20.546964473Z" level=info msg="RemoveContainer for \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\"" Sep 9 02:24:20.549274 kubelet[2877]: I0909 02:24:20.547952 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdnff\" (UniqueName: \"kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-kube-api-access-xdnff\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549274 kubelet[2877]: I0909 02:24:20.548030 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9485dcc-774e-4477-86d0-653dadf63239-cilium-config-path\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549274 kubelet[2877]: I0909 02:24:20.548055 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cni-path\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549274 kubelet[2877]: I0909 02:24:20.548080 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-bpf-maps\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549274 kubelet[2877]: I0909 02:24:20.548105 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-hostproc\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549274 kubelet[2877]: I0909 02:24:20.548169 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-hubble-tls\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: 
\"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549582 kubelet[2877]: I0909 02:24:20.548244 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tdwg\" (UniqueName: \"kubernetes.io/projected/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-kube-api-access-5tdwg\") pod \"e4e358af-48b0-48bc-9d8f-6cb6f70a24c3\" (UID: \"e4e358af-48b0-48bc-9d8f-6cb6f70a24c3\") " Sep 9 02:24:20.549582 kubelet[2877]: I0909 02:24:20.548281 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9485dcc-774e-4477-86d0-653dadf63239-clustermesh-secrets\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549582 kubelet[2877]: I0909 02:24:20.548307 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-net\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549582 kubelet[2877]: I0909 02:24:20.548330 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-kernel\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549582 kubelet[2877]: I0909 02:24:20.548353 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-lib-modules\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549582 kubelet[2877]: I0909 02:24:20.548385 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-cilium-config-path\") pod \"e4e358af-48b0-48bc-9d8f-6cb6f70a24c3\" (UID: \"e4e358af-48b0-48bc-9d8f-6cb6f70a24c3\") " Sep 9 02:24:20.549871 kubelet[2877]: I0909 02:24:20.548414 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-run\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549871 kubelet[2877]: I0909 02:24:20.548437 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-etc-cni-netd\") pod \"b9485dcc-774e-4477-86d0-653dadf63239\" (UID: \"b9485dcc-774e-4477-86d0-653dadf63239\") " Sep 9 02:24:20.549871 kubelet[2877]: I0909 02:24:20.548508 2877 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-xtables-lock\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.549871 kubelet[2877]: I0909 02:24:20.548533 2877 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-cgroup\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.549871 kubelet[2877]: I0909 02:24:20.548565 2877 operation_generator.go:780] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.550864 kubelet[2877]: I0909 02:24:20.550823 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cni-path" (OuterVolumeSpecName: "cni-path") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.551191 kubelet[2877]: I0909 02:24:20.551128 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.551938 kubelet[2877]: I0909 02:24:20.551577 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-hostproc" (OuterVolumeSpecName: "hostproc") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.556253 containerd[1608]: time="2025-09-09T02:24:20.555042132Z" level=info msg="RemoveContainer for \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" returns successfully" Sep 9 02:24:20.557520 kubelet[2877]: I0909 02:24:20.557485 2877 scope.go:117] "RemoveContainer" containerID="e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527" Sep 9 02:24:20.557733 kubelet[2877]: I0909 02:24:20.557706 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.558135 containerd[1608]: time="2025-09-09T02:24:20.558088644Z" level=error msg="ContainerStatus for \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\": not found" Sep 9 02:24:20.558539 kubelet[2877]: I0909 02:24:20.558407 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.559336 kubelet[2877]: I0909 02:24:20.559294 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.560516 kubelet[2877]: E0909 02:24:20.560309 2877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\": not found" containerID="e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527" Sep 9 02:24:20.560672 kubelet[2877]: I0909 02:24:20.560637 2877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527"} err="failed to get container status \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\": rpc error: code = NotFound desc = an error occurred when try to find container \"e31af52eba1a753a86ef4e96a434cab9b26df31945848289f4d6f9bfb5fee527\": not found" Sep 9 02:24:20.561695 kubelet[2877]: I0909 02:24:20.561670 2877 scope.go:117] "RemoveContainer" containerID="35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31" Sep 9 02:24:20.562034 containerd[1608]: time="2025-09-09T02:24:20.561995349Z" level=error msg="ContainerStatus for \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\": not found" Sep 9 02:24:20.563060 kubelet[2877]: I0909 02:24:20.562173 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 02:24:20.563060 kubelet[2877]: I0909 02:24:20.562980 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9485dcc-774e-4477-86d0-653dadf63239-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 02:24:20.563911 kubelet[2877]: E0909 02:24:20.563873 2877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\": not found" containerID="35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31" Sep 9 02:24:20.564094 kubelet[2877]: I0909 02:24:20.564040 2877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31"} err="failed to get container status \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\": rpc error: code = NotFound desc = an error occurred when try to find container \"35c90e76e9075857dee70eb40208bc4d74951295c3055cd3118d1b045d3e9a31\": not found" Sep 9 02:24:20.564217 kubelet[2877]: I0909 02:24:20.564198 2877 scope.go:117] "RemoveContainer" containerID="cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f" Sep 9 02:24:20.564860 containerd[1608]: time="2025-09-09T02:24:20.564818397Z" level=error msg="ContainerStatus for \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\": not found" Sep 9 02:24:20.569344 kubelet[2877]: E0909 02:24:20.569310 2877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\": not found" containerID="cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f" Sep 9 02:24:20.569507 kubelet[2877]: I0909 02:24:20.569472 2877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f"} err="failed to get container status \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbebb833521a2b041f2b0d87b764445644485a7ffa4d67f7dad6c665c7fbdc3f\": not found" Sep 9 02:24:20.569617 kubelet[2877]: I0909 02:24:20.569596 2877 scope.go:117] "RemoveContainer" containerID="7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca" Sep 9 02:24:20.569776 kubelet[2877]: I0909 02:24:20.569651 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4e358af-48b0-48bc-9d8f-6cb6f70a24c3" (UID: "e4e358af-48b0-48bc-9d8f-6cb6f70a24c3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 02:24:20.570267 containerd[1608]: time="2025-09-09T02:24:20.570065751Z" level=error msg="ContainerStatus for \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\": not found" Sep 9 02:24:20.570697 kubelet[2877]: E0909 02:24:20.570291 2877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\": not found" containerID="7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca" Sep 9 02:24:20.570697 kubelet[2877]: I0909 02:24:20.570381 2877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca"} err="failed to get container status \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fd30b453e074eff00613ee84aa2f140effd963cd38bbabed4cdd07cd8624dca\": not found" Sep 9 02:24:20.570697 kubelet[2877]: I0909 02:24:20.570403 2877 scope.go:117] "RemoveContainer" containerID="f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6" Sep 9 02:24:20.570861 containerd[1608]: time="2025-09-09T02:24:20.570605908Z" level=error msg="ContainerStatus for \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\": not found" Sep 9 02:24:20.570920 kubelet[2877]: E0909 02:24:20.570758 2877 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\": not found" containerID="f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6" Sep 9 02:24:20.570920 kubelet[2877]: I0909 02:24:20.570788 2877 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6"} err="failed to get container status \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8c65bbf66dcfafc7af0ced6f9c93a9b809b4f70dd1a6ea501d4e275205c44b6\": not found" Sep 9 02:24:20.572086 kubelet[2877]: I0909 02:24:20.572047 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 02:24:20.572805 kubelet[2877]: I0909 02:24:20.572775 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-kube-api-access-xdnff" (OuterVolumeSpecName: "kube-api-access-xdnff") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "kube-api-access-xdnff". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 02:24:20.573162 kubelet[2877]: I0909 02:24:20.573101 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9485dcc-774e-4477-86d0-653dadf63239-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b9485dcc-774e-4477-86d0-653dadf63239" (UID: "b9485dcc-774e-4477-86d0-653dadf63239"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 02:24:20.573710 kubelet[2877]: I0909 02:24:20.573667 2877 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-kube-api-access-5tdwg" (OuterVolumeSpecName: "kube-api-access-5tdwg") pod "e4e358af-48b0-48bc-9d8f-6cb6f70a24c3" (UID: "e4e358af-48b0-48bc-9d8f-6cb6f70a24c3"). InnerVolumeSpecName "kube-api-access-5tdwg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649738 2877 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xdnff\" (UniqueName: \"kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-kube-api-access-xdnff\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649811 2877 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cni-path\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649829 2877 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9485dcc-774e-4477-86d0-653dadf63239-cilium-config-path\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649846 2877 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-hostproc\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649869 2877 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9485dcc-774e-4477-86d0-653dadf63239-hubble-tls\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649896 2877 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tdwg\" (UniqueName: \"kubernetes.io/projected/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-kube-api-access-5tdwg\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649912 2877 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-bpf-maps\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650158 kubelet[2877]: I0909 02:24:20.649931 2877 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9485dcc-774e-4477-86d0-653dadf63239-clustermesh-secrets\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650644 kubelet[2877]: I0909 02:24:20.649946 2877 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-net\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath 
\"\"" Sep 9 02:24:20.650644 kubelet[2877]: I0909 02:24:20.649964 2877 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-host-proc-sys-kernel\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650644 kubelet[2877]: I0909 02:24:20.649979 2877 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-lib-modules\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650644 kubelet[2877]: I0909 02:24:20.649996 2877 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3-cilium-config-path\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650644 kubelet[2877]: I0909 02:24:20.650013 2877 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-cilium-run\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.650644 kubelet[2877]: I0909 02:24:20.650028 2877 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9485dcc-774e-4477-86d0-653dadf63239-etc-cni-netd\") on node \"srv-9tmcm.gb1.brightbox.com\" DevicePath \"\"" Sep 9 02:24:20.782609 systemd[1]: Removed slice kubepods-besteffort-pode4e358af_48b0_48bc_9d8f_6cb6f70a24c3.slice - libcontainer container kubepods-besteffort-pode4e358af_48b0_48bc_9d8f_6cb6f70a24c3.slice. Sep 9 02:24:20.789941 systemd[1]: Removed slice kubepods-burstable-podb9485dcc_774e_4477_86d0_653dadf63239.slice - libcontainer container kubepods-burstable-podb9485dcc_774e_4477_86d0_653dadf63239.slice. Sep 9 02:24:20.790425 systemd[1]: kubepods-burstable-podb9485dcc_774e_4477_86d0_653dadf63239.slice: Consumed 10.532s CPU time, 199M memory peak, 76.9M read from disk, 13.3M written to disk. Sep 9 02:24:20.935911 kubelet[2877]: E0909 02:24:20.935752 2877 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 02:24:21.228733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5f311ac1fe99179e5cb4ced828a8a9d830e4e707f2658f59efadc51b7a3d124-shm.mount: Deactivated successfully. Sep 9 02:24:21.228878 systemd[1]: var-lib-kubelet-pods-e4e358af\x2d48b0\x2d48bc\x2d9d8f\x2d6cb6f70a24c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5tdwg.mount: Deactivated successfully. Sep 9 02:24:21.229004 systemd[1]: var-lib-kubelet-pods-b9485dcc\x2d774e\x2d4477\x2d86d0\x2d653dadf63239-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdnff.mount: Deactivated successfully. Sep 9 02:24:21.229114 systemd[1]: var-lib-kubelet-pods-b9485dcc\x2d774e\x2d4477\x2d86d0\x2d653dadf63239-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 02:24:21.229909 systemd[1]: var-lib-kubelet-pods-b9485dcc\x2d774e\x2d4477\x2d86d0\x2d653dadf63239-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 02:24:22.204853 sshd[4429]: Connection closed by 139.178.68.195 port 33598 Sep 9 02:24:22.207990 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:22.219384 systemd-logind[1583]: Session 27 logged out. Waiting for processes to exit. 
Sep 9 02:24:22.219694 systemd[1]: sshd@24-10.230.31.10:22-139.178.68.195:33598.service: Deactivated successfully. Sep 9 02:24:22.223648 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 02:24:22.224020 systemd[1]: session-27.scope: Consumed 1.208s CPU time, 25.8M memory peak. Sep 9 02:24:22.227680 systemd-logind[1583]: Removed session 27. Sep 9 02:24:22.366998 systemd[1]: Started sshd@25-10.230.31.10:22-139.178.68.195:35166.service - OpenSSH per-connection server daemon (139.178.68.195:35166). Sep 9 02:24:22.778863 kubelet[2877]: I0909 02:24:22.778803 2877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9485dcc-774e-4477-86d0-653dadf63239" path="/var/lib/kubelet/pods/b9485dcc-774e-4477-86d0-653dadf63239/volumes" Sep 9 02:24:22.779995 kubelet[2877]: I0909 02:24:22.779968 2877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e358af-48b0-48bc-9d8f-6cb6f70a24c3" path="/var/lib/kubelet/pods/e4e358af-48b0-48bc-9d8f-6cb6f70a24c3/volumes" Sep 9 02:24:23.301746 sshd[4579]: Accepted publickey for core from 139.178.68.195 port 35166 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:23.305890 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:23.315299 systemd-logind[1583]: New session 28 of user core. Sep 9 02:24:23.325627 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 02:24:24.302804 kubelet[2877]: I0909 02:24:24.302399 2877 setters.go:602] "Node became not ready" node="srv-9tmcm.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T02:24:24Z","lastTransitionTime":"2025-09-09T02:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 02:24:24.889346 kubelet[2877]: I0909 02:24:24.888895 2877 memory_manager.go:355] "RemoveStaleState removing state" podUID="b9485dcc-774e-4477-86d0-653dadf63239" containerName="cilium-agent" Sep 9 02:24:24.890042 kubelet[2877]: I0909 02:24:24.889576 2877 memory_manager.go:355] "RemoveStaleState removing state" podUID="e4e358af-48b0-48bc-9d8f-6cb6f70a24c3" containerName="cilium-operator" Sep 9 02:24:24.904326 systemd[1]: Created slice kubepods-burstable-podb0f09419_090d_4778_89cf_7e214296dd98.slice - libcontainer container kubepods-burstable-podb0f09419_090d_4778_89cf_7e214296dd98.slice. 
Sep 9 02:24:24.984253 kubelet[2877]: I0909 02:24:24.983860 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-cilium-run\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984253 kubelet[2877]: I0909 02:24:24.983926 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-lib-modules\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984253 kubelet[2877]: I0909 02:24:24.983966 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-cni-path\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984253 kubelet[2877]: I0909 02:24:24.984007 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-host-proc-sys-net\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984253 kubelet[2877]: I0909 02:24:24.984038 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0f09419-090d-4778-89cf-7e214296dd98-hubble-tls\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984253 kubelet[2877]: I0909 02:24:24.984080 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-host-proc-sys-kernel\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984699 kubelet[2877]: I0909 02:24:24.984112 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-hostproc\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984699 kubelet[2877]: I0909 02:24:24.984141 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0f09419-090d-4778-89cf-7e214296dd98-cilium-ipsec-secrets\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.984699 kubelet[2877]: I0909 02:24:24.984198 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-cilium-cgroup\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.985126 kubelet[2877]: I0909 02:24:24.984902 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhvz\" (UniqueName: 
\"kubernetes.io/projected/b0f09419-090d-4778-89cf-7e214296dd98-kube-api-access-bqhvz\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.985126 kubelet[2877]: I0909 02:24:24.984946 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-bpf-maps\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.985126 kubelet[2877]: I0909 02:24:24.984987 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0f09419-090d-4778-89cf-7e214296dd98-cilium-config-path\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.985126 kubelet[2877]: I0909 02:24:24.985017 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-xtables-lock\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.985126 kubelet[2877]: I0909 02:24:24.985045 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0f09419-090d-4778-89cf-7e214296dd98-clustermesh-secrets\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.985126 kubelet[2877]: I0909 02:24:24.985074 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0f09419-090d-4778-89cf-7e214296dd98-etc-cni-netd\") pod \"cilium-f8wm7\" (UID: \"b0f09419-090d-4778-89cf-7e214296dd98\") " pod="kube-system/cilium-f8wm7" Sep 9 02:24:24.995387 sshd[4581]: Connection closed by 139.178.68.195 port 35166 Sep 9 02:24:24.996494 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:25.001879 systemd-logind[1583]: Session 28 logged out. Waiting for processes to exit. Sep 9 02:24:25.002161 systemd[1]: sshd@25-10.230.31.10:22-139.178.68.195:35166.service: Deactivated successfully. Sep 9 02:24:25.004929 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 02:24:25.007924 systemd-logind[1583]: Removed session 28. Sep 9 02:24:25.147349 systemd[1]: Started sshd@26-10.230.31.10:22-139.178.68.195:35176.service - OpenSSH per-connection server daemon (139.178.68.195:35176). Sep 9 02:24:25.213327 containerd[1608]: time="2025-09-09T02:24:25.212947394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f8wm7,Uid:b0f09419-090d-4778-89cf-7e214296dd98,Namespace:kube-system,Attempt:0,}" Sep 9 02:24:25.238168 containerd[1608]: time="2025-09-09T02:24:25.237675617Z" level=info msg="connecting to shim eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef" address="unix:///run/containerd/s/f99fa4be5491e6580f05deb5bfbed14cbdb33313e301f2e1e2cbbc415ab6e6bb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 02:24:25.273490 systemd[1]: Started cri-containerd-eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef.scope - libcontainer container eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef. 
Sep 9 02:24:25.318982 containerd[1608]: time="2025-09-09T02:24:25.318917795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f8wm7,Uid:b0f09419-090d-4778-89cf-7e214296dd98,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\"" Sep 9 02:24:25.324755 containerd[1608]: time="2025-09-09T02:24:25.324354103Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 02:24:25.333269 containerd[1608]: time="2025-09-09T02:24:25.333209353Z" level=info msg="Container 495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:24:25.339585 containerd[1608]: time="2025-09-09T02:24:25.339538287Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\"" Sep 9 02:24:25.340764 containerd[1608]: time="2025-09-09T02:24:25.340646635Z" level=info msg="StartContainer for \"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\"" Sep 9 02:24:25.343265 containerd[1608]: time="2025-09-09T02:24:25.342045568Z" level=info msg="connecting to shim 495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6" address="unix:///run/containerd/s/f99fa4be5491e6580f05deb5bfbed14cbdb33313e301f2e1e2cbbc415ab6e6bb" protocol=ttrpc version=3 Sep 9 02:24:25.370446 systemd[1]: Started cri-containerd-495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6.scope - libcontainer container 495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6. Sep 9 02:24:25.416954 containerd[1608]: time="2025-09-09T02:24:25.416645174Z" level=info msg="StartContainer for \"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\" returns successfully" Sep 9 02:24:25.433849 systemd[1]: cri-containerd-495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6.scope: Deactivated successfully. Sep 9 02:24:25.434284 systemd[1]: cri-containerd-495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6.scope: Consumed 30ms CPU time, 9.3M memory peak, 2.9M read from disk. 
Sep 9 02:24:25.437180 containerd[1608]: time="2025-09-09T02:24:25.437130273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\" id:\"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\" pid:4655 exited_at:{seconds:1757384665 nanos:436525110}" Sep 9 02:24:25.437314 containerd[1608]: time="2025-09-09T02:24:25.437233835Z" level=info msg="received exit event container_id:\"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\" id:\"495626f5401753f6645e9d9e73af2ba6765d67c48de82e20f27b4c983646e2b6\" pid:4655 exited_at:{seconds:1757384665 nanos:436525110}" Sep 9 02:24:25.521932 containerd[1608]: time="2025-09-09T02:24:25.521290498Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 02:24:25.535240 containerd[1608]: time="2025-09-09T02:24:25.533031389Z" level=info msg="Container 056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:24:25.543687 containerd[1608]: time="2025-09-09T02:24:25.543496307Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\"" Sep 9 02:24:25.551578 containerd[1608]: time="2025-09-09T02:24:25.551534790Z" level=info msg="StartContainer for \"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\"" Sep 9 02:24:25.553520 containerd[1608]: time="2025-09-09T02:24:25.553485073Z" level=info msg="connecting to shim 056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930" address="unix:///run/containerd/s/f99fa4be5491e6580f05deb5bfbed14cbdb33313e301f2e1e2cbbc415ab6e6bb" protocol=ttrpc version=3 Sep 9 02:24:25.576428 systemd[1]: Started cri-containerd-056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930.scope - libcontainer container 056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930. Sep 9 02:24:25.622964 containerd[1608]: time="2025-09-09T02:24:25.622893549Z" level=info msg="StartContainer for \"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\" returns successfully" Sep 9 02:24:25.636055 systemd[1]: cri-containerd-056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930.scope: Deactivated successfully. Sep 9 02:24:25.636821 systemd[1]: cri-containerd-056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930.scope: Consumed 28ms CPU time, 7.2M memory peak, 1.8M read from disk. 
Sep 9 02:24:25.639021 containerd[1608]: time="2025-09-09T02:24:25.638945842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\" id:\"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\" pid:4699 exited_at:{seconds:1757384665 nanos:638392891}" Sep 9 02:24:25.639111 containerd[1608]: time="2025-09-09T02:24:25.638970867Z" level=info msg="received exit event container_id:\"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\" id:\"056c4026503e5f7c2cf764a79d32f6a4a9ed5af7fb7b668ca30a4f5892973930\" pid:4699 exited_at:{seconds:1757384665 nanos:638392891}" Sep 9 02:24:25.937861 kubelet[2877]: E0909 02:24:25.937791 2877 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 02:24:26.054185 sshd[4595]: Accepted publickey for core from 139.178.68.195 port 35176 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:26.056373 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:26.064329 systemd-logind[1583]: New session 29 of user core. Sep 9 02:24:26.070466 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 9 02:24:26.530403 containerd[1608]: time="2025-09-09T02:24:26.529893747Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 02:24:26.546650 containerd[1608]: time="2025-09-09T02:24:26.546382787Z" level=info msg="Container 78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:24:26.561696 containerd[1608]: time="2025-09-09T02:24:26.561643962Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\"" Sep 9 02:24:26.563248 containerd[1608]: time="2025-09-09T02:24:26.562615172Z" level=info msg="StartContainer for \"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\"" Sep 9 02:24:26.565177 containerd[1608]: time="2025-09-09T02:24:26.565127975Z" level=info msg="connecting to shim 78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48" address="unix:///run/containerd/s/f99fa4be5491e6580f05deb5bfbed14cbdb33313e301f2e1e2cbbc415ab6e6bb" protocol=ttrpc version=3 Sep 9 02:24:26.605499 systemd[1]: Started cri-containerd-78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48.scope - libcontainer container 78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48. Sep 9 02:24:26.667284 sshd[4730]: Connection closed by 139.178.68.195 port 35176 Sep 9 02:24:26.667707 sshd-session[4595]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:26.674276 containerd[1608]: time="2025-09-09T02:24:26.672877058Z" level=info msg="StartContainer for \"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\" returns successfully" Sep 9 02:24:26.677407 systemd[1]: sshd@26-10.230.31.10:22-139.178.68.195:35176.service: Deactivated successfully. 
Sep 9 02:24:26.682779 containerd[1608]: time="2025-09-09T02:24:26.682607497Z" level=info msg="received exit event container_id:\"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\" id:\"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\" pid:4745 exited_at:{seconds:1757384666 nanos:682176082}" Sep 9 02:24:26.682772 systemd[1]: cri-containerd-78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48.scope: Deactivated successfully. Sep 9 02:24:26.684299 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 02:24:26.685847 containerd[1608]: time="2025-09-09T02:24:26.684976932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\" id:\"78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48\" pid:4745 exited_at:{seconds:1757384666 nanos:682176082}" Sep 9 02:24:26.688273 systemd-logind[1583]: Session 29 logged out. Waiting for processes to exit. Sep 9 02:24:26.691269 systemd-logind[1583]: Removed session 29. Sep 9 02:24:26.721959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78d51d3bc6fc24c2499fea1199b4a2d6d2849bbd1a178fa4b51beb2eff0f6b48-rootfs.mount: Deactivated successfully. Sep 9 02:24:26.828797 systemd[1]: Started sshd@27-10.230.31.10:22-139.178.68.195:35178.service - OpenSSH per-connection server daemon (139.178.68.195:35178). Sep 9 02:24:27.538913 containerd[1608]: time="2025-09-09T02:24:27.538671421Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 02:24:27.551261 containerd[1608]: time="2025-09-09T02:24:27.550325153Z" level=info msg="Container 42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:24:27.561352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270104757.mount: Deactivated successfully. Sep 9 02:24:27.576396 containerd[1608]: time="2025-09-09T02:24:27.576342378Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\"" Sep 9 02:24:27.580480 containerd[1608]: time="2025-09-09T02:24:27.578461860Z" level=info msg="StartContainer for \"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\"" Sep 9 02:24:27.581929 containerd[1608]: time="2025-09-09T02:24:27.581783571Z" level=info msg="connecting to shim 42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0" address="unix:///run/containerd/s/f99fa4be5491e6580f05deb5bfbed14cbdb33313e301f2e1e2cbbc415ab6e6bb" protocol=ttrpc version=3 Sep 9 02:24:27.613444 systemd[1]: Started cri-containerd-42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0.scope - libcontainer container 42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0. Sep 9 02:24:27.657309 systemd[1]: cri-containerd-42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0.scope: Deactivated successfully. 
Sep 9 02:24:27.659238 containerd[1608]: time="2025-09-09T02:24:27.659115106Z" level=info msg="received exit event container_id:\"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\" id:\"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\" pid:4792 exited_at:{seconds:1757384667 nanos:658917193}" Sep 9 02:24:27.659686 containerd[1608]: time="2025-09-09T02:24:27.659627378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\" id:\"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\" pid:4792 exited_at:{seconds:1757384667 nanos:658917193}" Sep 9 02:24:27.662616 containerd[1608]: time="2025-09-09T02:24:27.661526956Z" level=info msg="StartContainer for \"42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0\" returns successfully" Sep 9 02:24:27.690772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42a1a3e452b93410eb6e822ca58b2fc37fd8c90ef520625f7c8fdd2dea2734d0-rootfs.mount: Deactivated successfully. Sep 9 02:24:27.744688 sshd[4777]: Accepted publickey for core from 139.178.68.195 port 35178 ssh2: RSA SHA256:yYzLg7A+eYyQixfY96au7HD9CORfZHfcWL0BKKoujqs Sep 9 02:24:27.746755 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 02:24:27.754303 systemd-logind[1583]: New session 30 of user core. Sep 9 02:24:27.760459 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 02:24:28.546594 containerd[1608]: time="2025-09-09T02:24:28.546525128Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 02:24:28.568185 containerd[1608]: time="2025-09-09T02:24:28.568131883Z" level=info msg="Container c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559: CDI devices from CRI Config.CDIDevices: []" Sep 9 02:24:28.578372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934614685.mount: Deactivated successfully. Sep 9 02:24:28.589023 containerd[1608]: time="2025-09-09T02:24:28.588914325Z" level=info msg="CreateContainer within sandbox \"eb11d702368d78c4a06d3306c9cf0355acc326091a98a42ea96ed0a8c0c062ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\"" Sep 9 02:24:28.589792 containerd[1608]: time="2025-09-09T02:24:28.589672673Z" level=info msg="StartContainer for \"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\"" Sep 9 02:24:28.591734 containerd[1608]: time="2025-09-09T02:24:28.591681164Z" level=info msg="connecting to shim c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559" address="unix:///run/containerd/s/f99fa4be5491e6580f05deb5bfbed14cbdb33313e301f2e1e2cbbc415ab6e6bb" protocol=ttrpc version=3 Sep 9 02:24:28.627542 systemd[1]: Started cri-containerd-c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559.scope - libcontainer container c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559. 
Sep 9 02:24:28.681673 containerd[1608]: time="2025-09-09T02:24:28.681574827Z" level=info msg="StartContainer for \"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" returns successfully" Sep 9 02:24:28.796031 containerd[1608]: time="2025-09-09T02:24:28.795971572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" id:\"53c0857566db96fa1ac0dc05cf884dceb6327984a2c01eb4d171c19abd628eac\" pid:4869 exited_at:{seconds:1757384668 nanos:795512189}" Sep 9 02:24:29.432421 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 02:24:29.586443 kubelet[2877]: I0909 02:24:29.586180 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f8wm7" podStartSLOduration=5.586145592 podStartE2EDuration="5.586145592s" podCreationTimestamp="2025-09-09 02:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 02:24:29.585731926 +0000 UTC m=+159.050890313" watchObservedRunningTime="2025-09-09 02:24:29.586145592 +0000 UTC m=+159.051303979" Sep 9 02:24:30.579342 containerd[1608]: time="2025-09-09T02:24:30.578816557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" id:\"c8e45e6b7164bbc256cb560701965a784482a26a982086c1db35e03217e659fc\" pid:4946 exit_status:1 exited_at:{seconds:1757384670 nanos:577299889}" Sep 9 02:24:30.589524 kubelet[2877]: E0909 02:24:30.589401 2877 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57484->127.0.0.1:37651: write tcp 127.0.0.1:57484->127.0.0.1:37651: write: broken pipe Sep 9 02:24:32.911148 containerd[1608]: time="2025-09-09T02:24:32.910901458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" id:\"98da8ce7d5b6fca3c366e7cb891b6d2e021820075ed8b47c989eaf2eff88d2c7\" pid:5315 exit_status:1 exited_at:{seconds:1757384672 nanos:909796438}" Sep 9 02:24:33.121884 systemd-networkd[1510]: lxc_health: Link UP Sep 9 02:24:33.134667 systemd-networkd[1510]: lxc_health: Gained carrier Sep 9 02:24:34.290537 systemd-networkd[1510]: lxc_health: Gained IPv6LL Sep 9 02:24:35.185434 containerd[1608]: time="2025-09-09T02:24:35.185363714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" id:\"846ec81d02f52188ddc5ce2071e19154188066d72059267e5a3bf32081dc88e4\" pid:5427 exited_at:{seconds:1757384675 nanos:184861775}" Sep 9 02:24:37.403802 containerd[1608]: time="2025-09-09T02:24:37.403681515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" id:\"6090a63c7ea6b183711e1dbadf0a40d2bbbc4c0b3bf97d7a82d7452ccad3085b\" pid:5455 exited_at:{seconds:1757384677 nanos:402670061}" Sep 9 02:24:37.412367 kubelet[2877]: E0909 02:24:37.412201 2877 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:45766->127.0.0.1:37651: read tcp 127.0.0.1:45766->127.0.0.1:37651: read: connection reset by peer Sep 9 02:24:37.413642 kubelet[2877]: E0909 02:24:37.413543 2877 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45766->127.0.0.1:37651: write tcp 127.0.0.1:45766->127.0.0.1:37651: write: broken pipe Sep 9 02:24:39.607925 
containerd[1608]: time="2025-09-09T02:24:39.607810425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c792cc7f21d2c3b1699515d968fb0aeb656dab255fd74bdbd0fe46a443f3e559\" id:\"4d50e778da330c9327a39e2585330c37a074156b0cea07cfe1879443f32e1394\" pid:5485 exited_at:{seconds:1757384679 nanos:606639892}" Sep 9 02:24:39.613234 kubelet[2877]: E0909 02:24:39.613120 2877 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:45770->127.0.0.1:37651: read tcp 127.0.0.1:45770->127.0.0.1:37651: read: connection reset by peer Sep 9 02:24:39.613813 kubelet[2877]: E0909 02:24:39.613551 2877 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45770->127.0.0.1:37651: write tcp 127.0.0.1:45770->127.0.0.1:37651: write: broken pipe Sep 9 02:24:39.766589 sshd[4817]: Connection closed by 139.178.68.195 port 35178 Sep 9 02:24:39.768860 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Sep 9 02:24:39.786301 systemd[1]: sshd@27-10.230.31.10:22-139.178.68.195:35178.service: Deactivated successfully. Sep 9 02:24:39.794575 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 02:24:39.797726 systemd-logind[1583]: Session 30 logged out. Waiting for processes to exit. Sep 9 02:24:39.803643 systemd-logind[1583]: Removed session 30.