Sep 5 03:56:50.982310 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:12:48 -00 2025
Sep 5 03:56:50.982359 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 03:56:50.982381 kernel: BIOS-provided physical RAM map:
Sep 5 03:56:50.982393 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 03:56:50.982404 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 03:56:50.982415 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 03:56:50.982427 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 5 03:56:50.982446 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 5 03:56:50.982459 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 03:56:50.982470 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 03:56:50.982487 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 03:56:50.982498 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 03:56:50.982509 kernel: NX (Execute Disable) protection: active
Sep 5 03:56:50.982537 kernel: APIC: Static calls initialized
Sep 5 03:56:50.982550 kernel: SMBIOS 2.8 present.
Sep 5 03:56:50.982563 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 5 03:56:50.982588 kernel: DMI: Memory slots populated: 1/1
Sep 5 03:56:50.982601 kernel: Hypervisor detected: KVM
Sep 5 03:56:50.982613 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 03:56:50.982625 kernel: kvm-clock: using sched offset of 6576055148 cycles
Sep 5 03:56:50.982638 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 03:56:50.982651 kernel: tsc: Detected 2499.998 MHz processor
Sep 5 03:56:50.982663 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 03:56:50.982676 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 03:56:50.982688 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 5 03:56:50.982706 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 03:56:50.982718 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 03:56:50.982730 kernel: Using GB pages for direct mapping
Sep 5 03:56:50.982742 kernel: ACPI: Early table checksum verification disabled
Sep 5 03:56:50.982758 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 5 03:56:50.982770 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982782 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982794 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982807 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 5 03:56:50.982824 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982836 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982851 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982863 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 03:56:50.982875 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 5 03:56:50.982887 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 5 03:56:50.982906 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 5 03:56:50.982923 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 5 03:56:50.982936 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 5 03:56:50.982949 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 5 03:56:50.982961 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 5 03:56:50.982974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 5 03:56:50.982986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 5 03:56:50.982999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 5 03:56:50.983017 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Sep 5 03:56:50.983029 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Sep 5 03:56:50.983042 kernel: Zone ranges:
Sep 5 03:56:50.983055 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 03:56:50.983068 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 5 03:56:50.983080 kernel: Normal empty
Sep 5 03:56:50.983092 kernel: Device empty
Sep 5 03:56:50.983118 kernel: Movable zone start for each node
Sep 5 03:56:50.983133 kernel: Early memory node ranges
Sep 5 03:56:50.983151 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 03:56:50.983164 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 5 03:56:50.983189 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 5 03:56:50.983204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 03:56:50.983217 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 03:56:50.983230 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 5 03:56:50.983249 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 03:56:50.983263 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 03:56:50.983279 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 03:56:50.983293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 03:56:50.983312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 03:56:50.983325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 03:56:50.983338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 03:56:50.983351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 03:56:50.983363 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 03:56:50.983376 kernel: TSC deadline timer available
Sep 5 03:56:50.983389 kernel: CPU topo: Max. logical packages: 16
Sep 5 03:56:50.983402 kernel: CPU topo: Max. logical dies: 16
Sep 5 03:56:50.983414 kernel: CPU topo: Max. dies per package: 1
Sep 5 03:56:50.983432 kernel: CPU topo: Max. threads per core: 1
Sep 5 03:56:50.983445 kernel: CPU topo: Num. cores per package: 1
Sep 5 03:56:50.983457 kernel: CPU topo: Num. threads per package: 1
Sep 5 03:56:50.983470 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Sep 5 03:56:50.983482 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 03:56:50.983495 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 03:56:50.983507 kernel: Booting paravirtualized kernel on KVM
Sep 5 03:56:50.983520 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 03:56:50.983533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 5 03:56:50.983551 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Sep 5 03:56:50.983564 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Sep 5 03:56:50.983576 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 5 03:56:50.983589 kernel: kvm-guest: PV spinlocks enabled
Sep 5 03:56:50.983601 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 03:56:50.983615 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503
Sep 5 03:56:50.983629 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 03:56:50.983641 kernel: random: crng init done
Sep 5 03:56:50.983659 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 03:56:50.983672 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 5 03:56:50.983685 kernel: Fallback order for Node 0: 0
Sep 5 03:56:50.983698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Sep 5 03:56:50.983710 kernel: Policy zone: DMA32
Sep 5 03:56:50.983723 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 03:56:50.983735 kernel: software IO TLB: area num 16.
Sep 5 03:56:50.983748 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 5 03:56:50.983760 kernel: Kernel/User page tables isolation: enabled
Sep 5 03:56:50.983779 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 5 03:56:50.983792 kernel: ftrace: allocated 157 pages with 5 groups
Sep 5 03:56:50.983805 kernel: Dynamic Preempt: voluntary
Sep 5 03:56:50.983817 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 03:56:50.983831 kernel: rcu: RCU event tracing is enabled.
Sep 5 03:56:50.983844 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 5 03:56:50.983856 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 03:56:50.983875 kernel: Rude variant of Tasks RCU enabled.
Sep 5 03:56:50.983889 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 03:56:50.983908 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 03:56:50.983920 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 5 03:56:50.983933 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 5 03:56:50.983955 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 5 03:56:50.983968 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 5 03:56:50.983980 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 5 03:56:50.983993 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 03:56:50.984022 kernel: Console: colour VGA+ 80x25
Sep 5 03:56:50.984036 kernel: printk: legacy console [tty0] enabled
Sep 5 03:56:50.984054 kernel: printk: legacy console [ttyS0] enabled
Sep 5 03:56:50.984068 kernel: ACPI: Core revision 20240827
Sep 5 03:56:50.984086 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 03:56:50.984100 kernel: x2apic enabled
Sep 5 03:56:50.984126 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 03:56:50.984139 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 5 03:56:50.984153 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 5 03:56:50.984172 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 03:56:50.986322 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 5 03:56:50.986340 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 5 03:56:50.986354 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 03:56:50.986367 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 03:56:50.986381 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 03:56:50.986394 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 5 03:56:50.986407 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 03:56:50.986420 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 03:56:50.986433 kernel: MDS: Mitigation: Clear CPU buffers
Sep 5 03:56:50.986446 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 5 03:56:50.986467 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 5 03:56:50.986481 kernel: active return thunk: its_return_thunk
Sep 5 03:56:50.986493 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 5 03:56:50.986507 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 03:56:50.986520 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 03:56:50.986533 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 03:56:50.986546 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 03:56:50.986559 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 5 03:56:50.986572 kernel: Freeing SMP alternatives memory: 32K
Sep 5 03:56:50.986585 kernel: pid_max: default: 32768 minimum: 301
Sep 5 03:56:50.986598 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 5 03:56:50.986616 kernel: landlock: Up and running.
Sep 5 03:56:50.986629 kernel: SELinux: Initializing.
Sep 5 03:56:50.986642 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 5 03:56:50.986656 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 5 03:56:50.986669 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 5 03:56:50.986682 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 5 03:56:50.986696 kernel: signal: max sigframe size: 1776
Sep 5 03:56:50.986716 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 03:56:50.986732 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 03:56:50.986745 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Sep 5 03:56:50.986764 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 5 03:56:50.986778 kernel: smp: Bringing up secondary CPUs ...
Sep 5 03:56:50.986791 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 03:56:50.986804 kernel: .... node #0, CPUs: #1
Sep 5 03:56:50.986818 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 03:56:50.986831 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 5 03:56:50.986845 kernel: Memory: 1895680K/2096616K available (14336K kernel code, 2428K rwdata, 9956K rodata, 54044K init, 2924K bss, 194928K reserved, 0K cma-reserved)
Sep 5 03:56:50.986858 kernel: devtmpfs: initialized
Sep 5 03:56:50.986872 kernel: x86/mm: Memory block size: 128MB
Sep 5 03:56:50.986890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 03:56:50.986904 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 5 03:56:50.986917 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 03:56:50.986930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 03:56:50.986943 kernel: audit: initializing netlink subsys (disabled)
Sep 5 03:56:50.986957 kernel: audit: type=2000 audit(1757044606.416:1): state=initialized audit_enabled=0 res=1
Sep 5 03:56:50.986970 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 03:56:50.986983 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 03:56:50.986996 kernel: cpuidle: using governor menu
Sep 5 03:56:50.987015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 03:56:50.987028 kernel: dca service started, version 1.12.1
Sep 5 03:56:50.987042 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 5 03:56:50.987055 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 03:56:50.987069 kernel: PCI: Using configuration type 1 for base access
Sep 5 03:56:50.987082 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 03:56:50.987095 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 03:56:50.987122 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 03:56:50.987136 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 03:56:50.987155 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 03:56:50.987169 kernel: ACPI: Added _OSI(Module Device)
Sep 5 03:56:50.987203 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 03:56:50.987217 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 03:56:50.987230 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 03:56:50.987243 kernel: ACPI: Interpreter enabled
Sep 5 03:56:50.987256 kernel: ACPI: PM: (supports S0 S5)
Sep 5 03:56:50.987269 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 03:56:50.987283 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 03:56:50.987303 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 03:56:50.987316 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 03:56:50.987329 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 03:56:50.987644 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 03:56:50.987832 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 03:56:50.988011 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 03:56:50.988031 kernel: PCI host bridge to bus 0000:00
Sep 5 03:56:50.990385 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 03:56:50.990564 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 03:56:50.990731 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 03:56:50.990912 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 5 03:56:50.991076 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 03:56:50.992827 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 5 03:56:50.992999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 03:56:50.993268 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 5 03:56:50.993474 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Sep 5 03:56:50.993655 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Sep 5 03:56:50.993833 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Sep 5 03:56:50.994013 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Sep 5 03:56:50.997277 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 03:56:50.997496 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:50.997710 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Sep 5 03:56:50.997892 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 5 03:56:50.998071 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 5 03:56:50.998309 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 5 03:56:50.998530 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:50.998734 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Sep 5 03:56:50.998925 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 5 03:56:50.999113 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 5 03:56:50.999356 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 5 03:56:50.999557 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:50.999737 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Sep 5 03:56:50.999914 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 5 03:56:51.000113 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 5 03:56:51.001374 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 5 03:56:51.001580 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:51.001760 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Sep 5 03:56:51.001937 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 5 03:56:51.002126 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 5 03:56:51.004347 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 5 03:56:51.004551 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:51.004745 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Sep 5 03:56:51.004926 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 5 03:56:51.005116 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 5 03:56:51.005323 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 5 03:56:51.005533 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:51.005715 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Sep 5 03:56:51.005893 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 5 03:56:51.006070 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 5 03:56:51.008374 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 5 03:56:51.008595 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:51.008782 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Sep 5 03:56:51.008963 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 5 03:56:51.009157 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 5 03:56:51.009373 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 5 03:56:51.009592 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 5 03:56:51.009773 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Sep 5 03:56:51.009951 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 5 03:56:51.010140 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 5 03:56:51.010344 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 5 03:56:51.010543 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 5 03:56:51.010736 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 5 03:56:51.010926 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Sep 5 03:56:51.011111 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 5 03:56:51.013425 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Sep 5 03:56:51.013641 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 5 03:56:51.013826 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Sep 5 03:56:51.014008 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Sep 5 03:56:51.014226 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 5 03:56:51.014461 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 5 03:56:51.014642 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 03:56:51.014839 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 5 03:56:51.015018 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Sep 5 03:56:51.017301 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Sep 5 03:56:51.017535 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 5 03:56:51.017734 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 5 03:56:51.017946 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Sep 5 03:56:51.018148 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Sep 5 03:56:51.018360 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 5 03:56:51.018544 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 5 03:56:51.018725 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 5 03:56:51.018958 kernel: pci_bus 0000:02: extended config space not accessible
Sep 5 03:56:51.021237 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Sep 5 03:56:51.021455 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Sep 5 03:56:51.021648 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 5 03:56:51.021858 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Sep 5 03:56:51.022045 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Sep 5 03:56:51.022261 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 5 03:56:51.022461 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Sep 5 03:56:51.022655 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 5 03:56:51.022836 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 5 03:56:51.023017 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 5 03:56:51.025241 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 5 03:56:51.025443 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 5 03:56:51.025631 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 5 03:56:51.025824 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 5 03:56:51.025846 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 03:56:51.025861 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 03:56:51.025874 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 03:56:51.025888 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 03:56:51.025902 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 03:56:51.025924 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 03:56:51.025939 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 03:56:51.025953 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 03:56:51.025974 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 03:56:51.025988 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 03:56:51.026002 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 03:56:51.026015 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 03:56:51.026028 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 03:56:51.026041 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 03:56:51.026055 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 03:56:51.026068 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 03:56:51.026081 kernel: iommu: Default domain type: Translated
Sep 5 03:56:51.026100 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 03:56:51.026125 kernel: PCI: Using ACPI for IRQ routing
Sep 5 03:56:51.026138 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 03:56:51.026152 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 03:56:51.026165 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 5 03:56:51.026364 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 03:56:51.026542 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 03:56:51.026717 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 03:56:51.026745 kernel: vgaarb: loaded
Sep 5 03:56:51.026759 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 03:56:51.026773 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 03:56:51.026786 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 03:56:51.026800 kernel: pnp: PnP ACPI init
Sep 5 03:56:51.027033 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 03:56:51.027057 kernel: pnp: PnP ACPI: found 5 devices
Sep 5 03:56:51.027071 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 03:56:51.027092 kernel: NET: Registered PF_INET protocol family
Sep 5 03:56:51.027117 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 03:56:51.027132 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 5 03:56:51.027146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 03:56:51.027159 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 5 03:56:51.027173 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 5 03:56:51.029217 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 5 03:56:51.029233 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 5 03:56:51.029247 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 5 03:56:51.029269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 03:56:51.029283 kernel: NET: Registered PF_XDP protocol family
Sep 5 03:56:51.029490 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 5 03:56:51.029683 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 5 03:56:51.029865 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 5 03:56:51.030045 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 5 03:56:51.030263 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 5 03:56:51.030444 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 5 03:56:51.030629 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 5 03:56:51.030818 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 5 03:56:51.030995 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Sep 5 03:56:51.032868 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Sep 5 03:56:51.033064 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Sep 5 03:56:51.033276 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Sep 5 03:56:51.033457 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Sep 5 03:56:51.033649 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Sep 5 03:56:51.033835 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Sep 5 03:56:51.034011 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Sep 5 03:56:51.034231 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 5 03:56:51.034448 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 5 03:56:51.034625 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 5 03:56:51.034811 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 5 03:56:51.034994 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 5 03:56:51.037207 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 5 03:56:51.037430 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 5 03:56:51.037624 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 5 03:56:51.037806 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 5 03:56:51.037998 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 5 03:56:51.038215 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 5 03:56:51.038396 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 5 03:56:51.038574 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 5 03:56:51.038751 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 5 03:56:51.038935 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 5 03:56:51.039123 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 5 03:56:51.045637 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 5 03:56:51.045827 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 5 03:56:51.046029 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 5 03:56:51.046258 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 5 03:56:51.046453 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 5 03:56:51.046632 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 5 03:56:51.046809 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 5 03:56:51.047006 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 5 03:56:51.047220 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 5 03:56:51.047400 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 5 03:56:51.047577 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 5 03:56:51.047762 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 5 03:56:51.047939 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 5 03:56:51.048129 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 5 03:56:51.048330 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 5 03:56:51.048508 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 5 03:56:51.048685 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 5 03:56:51.048874 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 5 03:56:51.049048 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 03:56:51.049244 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 03:56:51.049416 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 03:56:51.049579 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 5 03:56:51.049740 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 03:56:51.049902 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 5 03:56:51.050096 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 5 03:56:51.052612 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 5 03:56:51.052794 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 5 03:56:51.052986 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 5 03:56:51.053230 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 5 03:56:51.053408 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 5 03:56:51.053577 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 5 03:56:51.053795 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 5 03:56:51.053966 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 5 03:56:51.054148 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 5 03:56:51.054374 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 5 03:56:51.054546 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 5 03:56:51.054715 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 5 03:56:51.054895 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 5 03:56:51.055064 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 5 03:56:51.057698 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 5 03:56:51.057882 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 5 03:56:51.058062 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 5 03:56:51.058265 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 5 03:56:51.058463 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 5 03:56:51.058634 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 5
03:56:51.058803 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 5 03:56:51.059001 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 5 03:56:51.064127 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 5 03:56:51.064345 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 5 03:56:51.064370 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 5 03:56:51.064386 kernel: PCI: CLS 0 bytes, default 64 Sep 5 03:56:51.064400 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 5 03:56:51.064414 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 5 03:56:51.064428 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 5 03:56:51.064442 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 5 03:56:51.064456 kernel: Initialise system trusted keyrings Sep 5 03:56:51.064479 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 5 03:56:51.064493 kernel: Key type asymmetric registered Sep 5 03:56:51.064507 kernel: Asymmetric key parser 'x509' registered Sep 5 03:56:51.064521 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 03:56:51.064535 kernel: io scheduler mq-deadline registered Sep 5 03:56:51.064549 kernel: io scheduler kyber registered Sep 5 03:56:51.064563 kernel: io scheduler bfq registered Sep 5 03:56:51.064745 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 5 03:56:51.064942 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 5 03:56:51.065158 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.065371 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 5 03:56:51.065562 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 5 03:56:51.065779 kernel: pcieport 
0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.065960 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 5 03:56:51.066152 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 5 03:56:51.066362 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.066543 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 5 03:56:51.066721 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 5 03:56:51.066897 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.067089 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 5 03:56:51.067300 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 5 03:56:51.067489 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.067667 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 5 03:56:51.067844 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 5 03:56:51.068020 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.069259 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 5 03:56:51.069465 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 5 03:56:51.069653 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.069831 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 5 03:56:51.070016 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 5 03:56:51.070241 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 
AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 5 03:56:51.070265 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 03:56:51.070281 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 5 03:56:51.070303 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 5 03:56:51.070317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 03:56:51.070332 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 03:56:51.070346 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 5 03:56:51.070360 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 5 03:56:51.070375 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 5 03:56:51.070572 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 5 03:56:51.070595 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 5 03:56:51.070767 kernel: rtc_cmos 00:03: registered as rtc0 Sep 5 03:56:51.070935 kernel: rtc_cmos 00:03: setting system clock to 2025-09-05T03:56:50 UTC (1757044610) Sep 5 03:56:51.071139 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 5 03:56:51.071163 kernel: intel_pstate: CPU model not supported Sep 5 03:56:51.072318 kernel: NET: Registered PF_INET6 protocol family Sep 5 03:56:51.072349 kernel: Segment Routing with IPv6 Sep 5 03:56:51.072364 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 03:56:51.072379 kernel: NET: Registered PF_PACKET protocol family Sep 5 03:56:51.072393 kernel: Key type dns_resolver registered Sep 5 03:56:51.072415 kernel: IPI shorthand broadcast: enabled Sep 5 03:56:51.072429 kernel: sched_clock: Marking stable (3895080688, 227543371)->(4281082292, -158458233) Sep 5 03:56:51.072443 kernel: registered taskstats version 1 Sep 5 03:56:51.072457 kernel: Loading compiled-in X.509 certificates Sep 5 03:56:51.072472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 
55c9ce6358d6eed45ca94030a2308729ee6a249f' Sep 5 03:56:51.072485 kernel: Demotion targets for Node 0: null Sep 5 03:56:51.072499 kernel: Key type .fscrypt registered Sep 5 03:56:51.072513 kernel: Key type fscrypt-provisioning registered Sep 5 03:56:51.072527 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 03:56:51.072546 kernel: ima: Allocated hash algorithm: sha1 Sep 5 03:56:51.072560 kernel: ima: No architecture policies found Sep 5 03:56:51.072574 kernel: clk: Disabling unused clocks Sep 5 03:56:51.072588 kernel: Warning: unable to open an initial console. Sep 5 03:56:51.072602 kernel: Freeing unused kernel image (initmem) memory: 54044K Sep 5 03:56:51.072616 kernel: Write protecting the kernel read-only data: 24576k Sep 5 03:56:51.072630 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 5 03:56:51.072644 kernel: Run /init as init process Sep 5 03:56:51.072662 kernel: with arguments: Sep 5 03:56:51.072677 kernel: /init Sep 5 03:56:51.072690 kernel: with environment: Sep 5 03:56:51.072704 kernel: HOME=/ Sep 5 03:56:51.072717 kernel: TERM=linux Sep 5 03:56:51.072731 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 03:56:51.072746 systemd[1]: Successfully made /usr/ read-only. Sep 5 03:56:51.072765 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 03:56:51.072786 systemd[1]: Detected virtualization kvm. Sep 5 03:56:51.072800 systemd[1]: Detected architecture x86-64. Sep 5 03:56:51.072815 systemd[1]: Running in initrd. Sep 5 03:56:51.072829 systemd[1]: No hostname configured, using default hostname. Sep 5 03:56:51.072844 systemd[1]: Hostname set to . Sep 5 03:56:51.072858 systemd[1]: Initializing machine ID from VM UUID. 
Sep 5 03:56:51.072873 systemd[1]: Queued start job for default target initrd.target. Sep 5 03:56:51.072887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 03:56:51.072907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 03:56:51.072923 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 03:56:51.072938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 03:56:51.072953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 03:56:51.072969 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 03:56:51.072985 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 03:56:51.073000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 03:56:51.073021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 03:56:51.073036 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 03:56:51.073050 systemd[1]: Reached target paths.target - Path Units. Sep 5 03:56:51.073065 systemd[1]: Reached target slices.target - Slice Units. Sep 5 03:56:51.073079 systemd[1]: Reached target swap.target - Swaps. Sep 5 03:56:51.073094 systemd[1]: Reached target timers.target - Timer Units. Sep 5 03:56:51.073122 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 03:56:51.073138 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 03:56:51.073161 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 03:56:51.073215 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 5 03:56:51.073235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 03:56:51.073251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 03:56:51.073265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 03:56:51.073280 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 03:56:51.073295 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 03:56:51.073310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 03:56:51.073324 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 03:56:51.073347 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 03:56:51.073362 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 03:56:51.073377 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 03:56:51.073392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 03:56:51.073407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 03:56:51.073422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 03:56:51.073442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 03:56:51.073503 systemd-journald[230]: Collecting audit messages is disabled. Sep 5 03:56:51.073544 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 03:56:51.073598 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 03:56:51.073618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 03:56:51.073633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 5 03:56:51.073649 systemd-journald[230]: Journal started Sep 5 03:56:51.073677 systemd-journald[230]: Runtime Journal (/run/log/journal/02376769c9ab4e89b605b060f9a2a961) is 4.7M, max 38.2M, 33.4M free. Sep 5 03:56:51.080675 kernel: Bridge firewalling registered Sep 5 03:56:51.007574 systemd-modules-load[231]: Inserted module 'overlay' Sep 5 03:56:51.080475 systemd-modules-load[231]: Inserted module 'br_netfilter' Sep 5 03:56:51.089222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 03:56:51.094208 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 03:56:51.101346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 03:56:51.103465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 03:56:51.110865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 03:56:51.116029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 03:56:51.122660 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 03:56:51.132270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 03:56:51.143171 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 03:56:51.147037 systemd-tmpfiles[259]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 5 03:56:51.147324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 03:56:51.149420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 03:56:51.154693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 03:56:51.160258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 5 03:56:51.180344 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5ddbf8d117777441d6c5be3659126fb3de7a68afc9e620e02a4b6c5a60c1c503 Sep 5 03:56:51.216175 systemd-resolved[270]: Positive Trust Anchors: Sep 5 03:56:51.217281 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 03:56:51.217332 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 03:56:51.225712 systemd-resolved[270]: Defaulting to hostname 'linux'. Sep 5 03:56:51.228384 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 03:56:51.230113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 03:56:51.299219 kernel: SCSI subsystem initialized Sep 5 03:56:51.311223 kernel: Loading iSCSI transport class v2.0-870. 
Sep 5 03:56:51.325213 kernel: iscsi: registered transport (tcp) Sep 5 03:56:51.353521 kernel: iscsi: registered transport (qla4xxx) Sep 5 03:56:51.353610 kernel: QLogic iSCSI HBA Driver Sep 5 03:56:51.382540 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 03:56:51.417685 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 03:56:51.421410 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 03:56:51.491246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 03:56:51.494834 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 03:56:51.566253 kernel: raid6: sse2x4 gen() 13712 MB/s Sep 5 03:56:51.584229 kernel: raid6: sse2x2 gen() 9689 MB/s Sep 5 03:56:51.602797 kernel: raid6: sse2x1 gen() 9988 MB/s Sep 5 03:56:51.602880 kernel: raid6: using algorithm sse2x4 gen() 13712 MB/s Sep 5 03:56:51.621794 kernel: raid6: .... xor() 7662 MB/s, rmw enabled Sep 5 03:56:51.621934 kernel: raid6: using ssse3x2 recovery algorithm Sep 5 03:56:51.649237 kernel: xor: automatically using best checksumming function avx Sep 5 03:56:51.848234 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 03:56:51.858940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 03:56:51.863653 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 03:56:51.900294 systemd-udevd[480]: Using default interface naming scheme 'v255'. Sep 5 03:56:51.909727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 03:56:51.914341 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 03:56:51.946084 dracut-pre-trigger[490]: rd.md=0: removing MD RAID activation Sep 5 03:56:51.981896 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 5 03:56:51.984615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 03:56:52.118740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 03:56:52.125586 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 03:56:52.277319 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 03:56:52.280201 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 5 03:56:52.301473 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 5 03:56:52.309742 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 03:56:52.309839 kernel: GPT:17805311 != 125829119 Sep 5 03:56:52.309887 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 03:56:52.312520 kernel: GPT:17805311 != 125829119 Sep 5 03:56:52.312612 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 03:56:52.313959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 03:56:52.315916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 03:56:52.317900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 03:56:52.321817 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 03:56:52.329374 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 5 03:56:52.332343 kernel: ACPI: bus type USB registered Sep 5 03:56:52.332401 kernel: libata version 3.00 loaded. Sep 5 03:56:52.331651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 03:56:52.338079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 5 03:56:52.343226 kernel: usbcore: registered new interface driver usbfs Sep 5 03:56:52.352198 kernel: usbcore: registered new interface driver hub Sep 5 03:56:52.362347 kernel: AES CTR mode by8 optimization enabled Sep 5 03:56:52.362397 kernel: usbcore: registered new device driver usb Sep 5 03:56:52.373211 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 03:56:52.401204 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 03:56:52.418203 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 5 03:56:52.418511 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 5 03:56:52.418743 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 03:56:52.431419 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 03:56:52.530803 kernel: scsi host0: ahci Sep 5 03:56:52.531242 kernel: scsi host1: ahci Sep 5 03:56:52.531591 kernel: scsi host2: ahci Sep 5 03:56:52.531848 kernel: scsi host3: ahci Sep 5 03:56:52.532118 kernel: scsi host4: ahci Sep 5 03:56:52.532425 kernel: scsi host5: ahci Sep 5 03:56:52.532667 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Sep 5 03:56:52.532691 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Sep 5 03:56:52.532710 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Sep 5 03:56:52.532728 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Sep 5 03:56:52.532746 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Sep 5 03:56:52.532765 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Sep 5 03:56:52.539688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 5 03:56:52.568480 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 03:56:52.569412 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 03:56:52.583730 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 03:56:52.596394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 03:56:52.599385 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 03:56:52.630561 disk-uuid[631]: Primary Header is updated. Sep 5 03:56:52.630561 disk-uuid[631]: Secondary Entries is updated. Sep 5 03:56:52.630561 disk-uuid[631]: Secondary Header is updated. Sep 5 03:56:52.635219 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 03:56:52.644242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 03:56:52.757254 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 03:56:52.757344 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 03:56:52.757367 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 5 03:56:52.757421 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 03:56:52.757443 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 03:56:52.760220 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 03:56:52.775218 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 5 03:56:52.778211 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 5 03:56:52.782210 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 5 03:56:52.805659 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 5 03:56:52.805982 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 5 03:56:52.808229 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 5 03:56:52.810833 
kernel: hub 1-0:1.0: USB hub found Sep 5 03:56:52.811120 kernel: hub 1-0:1.0: 4 ports detected Sep 5 03:56:52.814210 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 5 03:56:52.816796 kernel: hub 2-0:1.0: USB hub found Sep 5 03:56:52.817275 kernel: hub 2-0:1.0: 4 ports detected Sep 5 03:56:52.851344 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 03:56:52.876400 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 03:56:52.877360 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 03:56:52.879009 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 03:56:52.881914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 03:56:52.912370 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 03:56:53.049331 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 5 03:56:53.192218 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 03:56:53.199660 kernel: usbcore: registered new interface driver usbhid Sep 5 03:56:53.199736 kernel: usbhid: USB HID core driver Sep 5 03:56:53.208353 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 5 03:56:53.208416 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 5 03:56:53.649233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 03:56:53.650972 disk-uuid[632]: The operation has completed successfully. Sep 5 03:56:53.710670 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 03:56:53.710859 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 03:56:53.759306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 5 03:56:53.798737 sh[657]: Success Sep 5 03:56:53.824581 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 03:56:53.824682 kernel: device-mapper: uevent: version 1.0.3 Sep 5 03:56:53.826267 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 5 03:56:53.842237 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Sep 5 03:56:53.909080 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 03:56:53.915331 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 03:56:53.926639 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 03:56:53.939226 kernel: BTRFS: device fsid bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (669) Sep 5 03:56:53.943949 kernel: BTRFS info (device dm-0): first mount of filesystem bbfaff22-5589-4cab-94aa-ce3e6be0b7e8 Sep 5 03:56:53.943986 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 03:56:53.956174 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 03:56:53.956235 kernel: BTRFS info (device dm-0): enabling free space tree Sep 5 03:56:53.958913 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 03:56:53.960234 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 5 03:56:53.961355 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 03:56:53.962504 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 03:56:53.966336 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 5 03:56:54.001235 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (702) Sep 5 03:56:54.004233 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 03:56:54.007534 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 03:56:54.013801 kernel: BTRFS info (device vda6): turning on async discard Sep 5 03:56:54.013846 kernel: BTRFS info (device vda6): enabling free space tree Sep 5 03:56:54.022258 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb Sep 5 03:56:54.023577 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 03:56:54.028374 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 03:56:54.139939 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 03:56:54.145987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 03:56:54.224382 systemd-networkd[841]: lo: Link UP Sep 5 03:56:54.224395 systemd-networkd[841]: lo: Gained carrier Sep 5 03:56:54.226954 systemd-networkd[841]: Enumeration completed Sep 5 03:56:54.227312 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 03:56:54.228094 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 03:56:54.228102 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 03:56:54.229656 systemd-networkd[841]: eth0: Link UP Sep 5 03:56:54.229888 systemd-networkd[841]: eth0: Gained carrier Sep 5 03:56:54.229901 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 03:56:54.231806 systemd[1]: Reached target network.target - Network. 
Sep 5 03:56:54.380362 systemd-networkd[841]: eth0: DHCPv4 address 10.230.58.50/30, gateway 10.230.58.49 acquired from 10.230.58.49
Sep 5 03:56:54.417620 ignition[752]: Ignition 2.21.0
Sep 5 03:56:54.417651 ignition[752]: Stage: fetch-offline
Sep 5 03:56:54.417738 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Sep 5 03:56:54.417759 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:56:54.417925 ignition[752]: parsed url from cmdline: ""
Sep 5 03:56:54.417932 ignition[752]: no config URL provided
Sep 5 03:56:54.417942 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 03:56:54.422873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 03:56:54.417958 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Sep 5 03:56:54.417966 ignition[752]: failed to fetch config: resource requires networking
Sep 5 03:56:54.420251 ignition[752]: Ignition finished successfully
Sep 5 03:56:54.425436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 03:56:54.465404 ignition[851]: Ignition 2.21.0
Sep 5 03:56:54.465431 ignition[851]: Stage: fetch
Sep 5 03:56:54.465620 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Sep 5 03:56:54.465640 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:56:54.465756 ignition[851]: parsed url from cmdline: ""
Sep 5 03:56:54.465763 ignition[851]: no config URL provided
Sep 5 03:56:54.465772 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 03:56:54.465788 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Sep 5 03:56:54.465949 ignition[851]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 5 03:56:54.465987 ignition[851]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 5 03:56:54.466104 ignition[851]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 5 03:56:54.482346 ignition[851]: GET result: OK
Sep 5 03:56:54.482499 ignition[851]: parsing config with SHA512: edb520018aa2b87e74df55bb37439b80b532dc75201e0caba69bcf39cf01d050d68260f3d24503ca6cfcad2c42d0c4f437ac9f4cf0cd8a80c64c4ee69c0b9b9a
Sep 5 03:56:54.487527 unknown[851]: fetched base config from "system"
Sep 5 03:56:54.487544 unknown[851]: fetched base config from "system"
Sep 5 03:56:54.488012 ignition[851]: fetch: fetch complete
Sep 5 03:56:54.487552 unknown[851]: fetched user config from "openstack"
Sep 5 03:56:54.488021 ignition[851]: fetch: fetch passed
Sep 5 03:56:54.491128 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 03:56:54.488100 ignition[851]: Ignition finished successfully
Sep 5 03:56:54.495372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 03:56:54.545446 ignition[857]: Ignition 2.21.0
Sep 5 03:56:54.545474 ignition[857]: Stage: kargs
Sep 5 03:56:54.545662 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Sep 5 03:56:54.545682 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:56:54.550546 ignition[857]: kargs: kargs passed
Sep 5 03:56:54.550667 ignition[857]: Ignition finished successfully
Sep 5 03:56:54.553616 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 03:56:54.557209 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 03:56:54.601687 ignition[864]: Ignition 2.21.0
Sep 5 03:56:54.601714 ignition[864]: Stage: disks
Sep 5 03:56:54.601944 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Sep 5 03:56:54.604597 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 03:56:54.601964 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:56:54.603150 ignition[864]: disks: disks passed
Sep 5 03:56:54.606815 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 03:56:54.603243 ignition[864]: Ignition finished successfully
Sep 5 03:56:54.608483 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 03:56:54.609832 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 03:56:54.610528 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 03:56:54.612028 systemd[1]: Reached target basic.target - Basic System.
Sep 5 03:56:54.616365 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 03:56:54.664260 systemd-fsck[872]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 5 03:56:54.668390 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 03:56:54.671308 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 03:56:54.808214 kernel: EXT4-fs (vda9): mounted filesystem a99dab41-6cdd-4037-a941-eeee48403b9e r/w with ordered data mode. Quota mode: none.
Sep 5 03:56:54.809356 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 03:56:54.810657 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 03:56:54.813165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 03:56:54.815036 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 03:56:54.817008 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 03:56:54.821370 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 5 03:56:54.824212 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 03:56:54.824268 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 03:56:54.832379 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 03:56:54.844528 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (880)
Sep 5 03:56:54.845074 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 03:56:54.851642 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 03:56:54.851701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 03:56:54.863332 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 03:56:54.863396 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 03:56:54.867390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 03:56:54.927232 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:56:54.943611 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 03:56:54.955956 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory
Sep 5 03:56:54.963823 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 03:56:54.971270 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 03:56:55.092696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 03:56:55.095028 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 03:56:55.096661 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 03:56:55.119608 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 03:56:55.121872 kernel: BTRFS info (device vda6): last unmount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 03:56:55.143859 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 03:56:55.163239 ignition[997]: INFO : Ignition 2.21.0
Sep 5 03:56:55.164453 ignition[997]: INFO : Stage: mount
Sep 5 03:56:55.165100 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 03:56:55.165100 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:56:55.166865 ignition[997]: INFO : mount: mount passed
Sep 5 03:56:55.166865 ignition[997]: INFO : Ignition finished successfully
Sep 5 03:56:55.167782 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 03:56:55.976231 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:56:56.239526 systemd-networkd[841]: eth0: Gained IPv6LL
Sep 5 03:56:57.749126 systemd-networkd[841]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e8c:24:19ff:fee6:3a32/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e8c:24:19ff:fee6:3a32/64 assigned by NDisc.
Sep 5 03:56:57.749143 systemd-networkd[841]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 5 03:56:57.986226 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:01.995217 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:02.004501 coreos-metadata[882]: Sep 05 03:57:02.004 WARN failed to locate config-drive, using the metadata service API instead
Sep 5 03:57:02.026739 coreos-metadata[882]: Sep 05 03:57:02.026 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 5 03:57:02.037968 coreos-metadata[882]: Sep 05 03:57:02.037 INFO Fetch successful
Sep 5 03:57:02.038905 coreos-metadata[882]: Sep 05 03:57:02.038 INFO wrote hostname srv-86xia.gb1.brightbox.com to /sysroot/etc/hostname
Sep 5 03:57:02.040788 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 5 03:57:02.040994 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 5 03:57:02.045492 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 03:57:02.071533 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 03:57:02.104205 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (1014)
Sep 5 03:57:02.104276 kernel: BTRFS info (device vda6): first mount of filesystem f4b20ae7-6320-4f9d-b17c-1a32a98200fb
Sep 5 03:57:02.106516 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 03:57:02.111641 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 03:57:02.111703 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 03:57:02.116147 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 03:57:02.151176 ignition[1031]: INFO : Ignition 2.21.0
Sep 5 03:57:02.151176 ignition[1031]: INFO : Stage: files
Sep 5 03:57:02.153429 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 03:57:02.153429 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:57:02.153429 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 03:57:02.156566 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 03:57:02.156566 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 03:57:02.164608 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 03:57:02.164608 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 03:57:02.166886 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 03:57:02.164851 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 5 03:57:02.169367 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 5 03:57:02.169367 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 5 03:57:02.527763 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 03:57:03.434226 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 5 03:57:03.434226 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 03:57:03.449510 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 5 03:57:03.766549 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 03:57:04.820216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 03:57:04.820216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 03:57:04.820216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 03:57:04.820216 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 03:57:04.827231 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 03:57:04.837319 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 03:57:04.837319 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 03:57:04.837319 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 5 03:57:05.077259 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 03:57:07.635284 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 5 03:57:07.635284 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 5 03:57:07.640398 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 03:57:07.645223 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 03:57:07.645223 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 5 03:57:07.645223 ignition[1031]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 03:57:07.645223 ignition[1031]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 03:57:07.649896 ignition[1031]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 03:57:07.649896 ignition[1031]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 03:57:07.649896 ignition[1031]: INFO : files: files passed
Sep 5 03:57:07.649896 ignition[1031]: INFO : Ignition finished successfully
Sep 5 03:57:07.650110 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 03:57:07.658461 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 03:57:07.662729 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 03:57:07.688167 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 03:57:07.689409 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 03:57:07.695797 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 03:57:07.695797 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 03:57:07.698851 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 03:57:07.700083 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 03:57:07.701732 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 03:57:07.703930 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 03:57:07.771982 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 03:57:07.772207 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 03:57:07.774356 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 03:57:07.775524 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 03:57:07.777223 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 03:57:07.779368 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 03:57:07.824584 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 03:57:07.827334 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 03:57:07.861060 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 03:57:07.863046 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 03:57:07.863931 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 03:57:07.864764 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 03:57:07.864947 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 03:57:07.866975 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 03:57:07.867875 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 03:57:07.869305 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 03:57:07.870846 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 03:57:07.872352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 03:57:07.873795 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 03:57:07.875256 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 03:57:07.876922 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 03:57:07.878519 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 03:57:07.880111 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 03:57:07.881588 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 03:57:07.883240 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 03:57:07.883569 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 03:57:07.885239 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 03:57:07.886244 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 03:57:07.887797 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 03:57:07.887968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 03:57:07.889174 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 03:57:07.889455 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 03:57:07.896985 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 03:57:07.897174 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 03:57:07.899065 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 03:57:07.899267 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 03:57:07.903385 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 03:57:07.904669 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 03:57:07.904859 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 03:57:07.910089 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 03:57:07.912285 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 03:57:07.912496 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 03:57:07.919534 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 03:57:07.919759 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 03:57:07.936374 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 03:57:07.936549 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 03:57:07.956731 ignition[1085]: INFO : Ignition 2.21.0
Sep 5 03:57:07.956731 ignition[1085]: INFO : Stage: umount
Sep 5 03:57:07.959938 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 03:57:07.959938 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 5 03:57:07.959938 ignition[1085]: INFO : umount: umount passed
Sep 5 03:57:07.959938 ignition[1085]: INFO : Ignition finished successfully
Sep 5 03:57:07.961752 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 03:57:07.962958 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 03:57:07.969102 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 03:57:07.970047 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 03:57:07.970248 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 03:57:07.974561 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 03:57:07.974706 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 03:57:07.976163 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 03:57:07.976266 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 03:57:07.977514 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 5 03:57:07.977603 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 5 03:57:07.978904 systemd[1]: Stopped target network.target - Network.
Sep 5 03:57:07.980164 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 03:57:07.980335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 03:57:07.981807 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 03:57:07.982999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 03:57:07.986262 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 03:57:07.987840 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 03:57:07.989513 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 03:57:07.991004 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 03:57:07.991080 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 03:57:07.992523 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 03:57:07.992647 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 03:57:07.993877 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 03:57:07.994024 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 03:57:07.995279 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 03:57:07.995350 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 03:57:07.996731 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 03:57:07.996860 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 03:57:07.998601 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 03:57:08.000553 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 03:57:08.009601 systemd-networkd[841]: eth0: DHCPv6 lease lost
Sep 5 03:57:08.015160 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 03:57:08.015511 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 03:57:08.023096 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 03:57:08.023624 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 03:57:08.023863 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 03:57:08.027336 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 5 03:57:08.028553 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 5 03:57:08.029738 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 03:57:08.029824 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 03:57:08.033288 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 03:57:08.033967 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 03:57:08.034046 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 03:57:08.036438 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 03:57:08.036544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 03:57:08.039049 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 03:57:08.039131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 03:57:08.041761 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 03:57:08.041922 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 03:57:08.045438 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 03:57:08.051099 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 5 03:57:08.051265 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 5 03:57:08.057549 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 03:57:08.057908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 03:57:08.061983 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 03:57:08.062079 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 03:57:08.062903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 03:57:08.062965 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 03:57:08.064366 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 03:57:08.064443 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 03:57:08.067299 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 03:57:08.067386 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 03:57:08.068859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 03:57:08.068939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 03:57:08.073254 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 03:57:08.073997 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 5 03:57:08.074079 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 03:57:08.077329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 03:57:08.077406 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 03:57:08.079616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 03:57:08.079758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 03:57:08.089722 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 5 03:57:08.089819 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 5 03:57:08.089916 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 5 03:57:08.090612 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 03:57:08.094310 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 03:57:08.106877 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 03:57:08.107097 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 03:57:08.109731 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 03:57:08.112255 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 03:57:08.140012 systemd[1]: Switching root.
Sep 5 03:57:08.184804 systemd-journald[230]: Journal stopped
Sep 5 03:57:10.179541 systemd-journald[230]: Received SIGTERM from PID 1 (systemd).
Sep 5 03:57:10.179664 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 03:57:10.179699 kernel: SELinux: policy capability open_perms=1
Sep 5 03:57:10.179721 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 03:57:10.179740 kernel: SELinux: policy capability always_check_network=0
Sep 5 03:57:10.179784 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 03:57:10.179814 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 03:57:10.179834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 03:57:10.179860 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 03:57:10.179894 kernel: SELinux: policy capability userspace_initial_context=0
Sep 5 03:57:10.179916 kernel: audit: type=1403 audit(1757044628.569:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 03:57:10.179958 systemd[1]: Successfully loaded SELinux policy in 93.961ms.
Sep 5 03:57:10.179984 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.685ms.
Sep 5 03:57:10.180007 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 5 03:57:10.180044 systemd[1]: Detected virtualization kvm.
Sep 5 03:57:10.180067 systemd[1]: Detected architecture x86-64.
Sep 5 03:57:10.180094 systemd[1]: Detected first boot.
Sep 5 03:57:10.180132 systemd[1]: Hostname set to .
Sep 5 03:57:10.180163 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 03:57:10.180212 zram_generator::config[1128]: No configuration found.
Sep 5 03:57:10.180238 kernel: Guest personality initialized and is inactive
Sep 5 03:57:10.180258 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 5 03:57:10.180296 kernel: Initialized host personality
Sep 5 03:57:10.180317 kernel: NET: Registered PF_VSOCK protocol family
Sep 5 03:57:10.180353 systemd[1]: Populated /etc with preset unit settings.
Sep 5 03:57:10.180377 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 5 03:57:10.180406 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 03:57:10.180428 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 03:57:10.180450 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 03:57:10.180472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 03:57:10.180517 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 03:57:10.180541 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 03:57:10.180589 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 03:57:10.180627 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 03:57:10.180677 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 03:57:10.180701 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 03:57:10.180737 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 03:57:10.180761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 03:57:10.180784 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 03:57:10.180805 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 03:57:10.180827 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 03:57:10.180850 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 03:57:10.180884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 03:57:10.180913 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 03:57:10.180941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 03:57:10.180964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 03:57:10.180991 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 03:57:10.181014 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 03:57:10.181035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 03:57:10.181057 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 03:57:10.181077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 03:57:10.181117 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 03:57:10.181141 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 03:57:10.181163 systemd[1]: Reached target swap.target - Swaps.
Sep 5 03:57:10.181310 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 03:57:10.181338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 03:57:10.181361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 5 03:57:10.181383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 03:57:10.181405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 03:57:10.181435 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 03:57:10.181467 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 03:57:10.181523 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 03:57:10.181564 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 03:57:10.181587 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 03:57:10.181609 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:10.181630 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 03:57:10.181650 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 03:57:10.181671 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 03:57:10.181693 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 03:57:10.181730 systemd[1]: Reached target machines.target - Containers.
Sep 5 03:57:10.181756 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 03:57:10.181777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 03:57:10.181799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 03:57:10.181820 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 03:57:10.181851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 03:57:10.181875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 03:57:10.181896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 03:57:10.181930 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 03:57:10.181962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 03:57:10.181986 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 03:57:10.182007 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 03:57:10.182028 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 03:57:10.182050 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 03:57:10.182071 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 03:57:10.182093 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 03:57:10.182129 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 03:57:10.182152 kernel: loop: module loaded
Sep 5 03:57:10.182173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 03:57:10.182216 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 03:57:10.182240 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 03:57:10.182276 kernel: fuse: init (API version 7.41)
Sep 5 03:57:10.182313 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 5 03:57:10.182338 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 03:57:10.182360 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 03:57:10.182381 systemd[1]: Stopped verity-setup.service.
Sep 5 03:57:10.182424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:10.182449 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 03:57:10.182477 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 03:57:10.182509 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 03:57:10.182534 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 03:57:10.182555 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 03:57:10.182577 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 03:57:10.182598 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 03:57:10.182619 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 03:57:10.182656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 03:57:10.182680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 03:57:10.182701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 03:57:10.182721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 03:57:10.182743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 03:57:10.182764 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 03:57:10.182785 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 03:57:10.182807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 03:57:10.182842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 03:57:10.182867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 03:57:10.182891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 03:57:10.182952 systemd-journald[1218]: Collecting audit messages is disabled.
Sep 5 03:57:10.183007 kernel: ACPI: bus type drm_connector registered
Sep 5 03:57:10.183031 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 03:57:10.183068 systemd-journald[1218]: Journal started
Sep 5 03:57:10.183134 systemd-journald[1218]: Runtime Journal (/run/log/journal/02376769c9ab4e89b605b060f9a2a961) is 4.7M, max 38.2M, 33.4M free.
Sep 5 03:57:09.711153 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 03:57:09.728107 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 03:57:09.729108 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 03:57:10.186214 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 03:57:10.190516 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 03:57:10.196324 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 03:57:10.199322 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 03:57:10.234924 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 03:57:10.242324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 03:57:10.247309 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 03:57:10.249275 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 03:57:10.249322 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 03:57:10.253094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 5 03:57:10.262355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 03:57:10.264439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 03:57:10.269874 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 03:57:10.273398 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 03:57:10.275307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 03:57:10.280610 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 03:57:10.281883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 03:57:10.285384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 03:57:10.290345 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 03:57:10.298571 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 03:57:10.312376 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 5 03:57:10.317051 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 03:57:10.319384 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 03:57:10.328358 kernel: loop0: detected capacity change from 0 to 224512
Sep 5 03:57:10.356299 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 03:57:10.357984 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 03:57:10.365236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 03:57:10.375853 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 5 03:57:10.389260 kernel: loop1: detected capacity change from 0 to 128016
Sep 5 03:57:10.393264 systemd-journald[1218]: Time spent on flushing to /var/log/journal/02376769c9ab4e89b605b060f9a2a961 is 114.537ms for 1174 entries.
Sep 5 03:57:10.393264 systemd-journald[1218]: System Journal (/var/log/journal/02376769c9ab4e89b605b060f9a2a961) is 8M, max 584.8M, 576.8M free.
Sep 5 03:57:10.552423 systemd-journald[1218]: Received client request to flush runtime journal.
Sep 5 03:57:10.552519 kernel: loop2: detected capacity change from 0 to 111000
Sep 5 03:57:10.552566 kernel: loop3: detected capacity change from 0 to 8
Sep 5 03:57:10.431123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 03:57:10.445807 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 03:57:10.449742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 03:57:10.480427 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 5 03:57:10.555859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 03:57:10.569850 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Sep 5 03:57:10.571234 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Sep 5 03:57:10.579211 kernel: loop4: detected capacity change from 0 to 224512
Sep 5 03:57:10.588384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 03:57:10.672493 kernel: loop5: detected capacity change from 0 to 128016
Sep 5 03:57:10.685949 kernel: loop6: detected capacity change from 0 to 111000
Sep 5 03:57:10.719335 kernel: loop7: detected capacity change from 0 to 8
Sep 5 03:57:10.724878 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 5 03:57:10.725873 (sd-merge)[1288]: Merged extensions into '/usr'.
Sep 5 03:57:10.732804 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 03:57:10.742605 systemd[1]: Reload requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 03:57:10.742639 systemd[1]: Reloading...
Sep 5 03:57:11.088256 zram_generator::config[1316]: No configuration found.
Sep 5 03:57:11.468212 ldconfig[1260]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 03:57:11.503244 systemd[1]: Reloading finished in 759 ms.
Sep 5 03:57:11.532597 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 03:57:11.534652 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 03:57:11.541632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 03:57:11.550402 systemd[1]: Starting ensure-sysext.service...
Sep 5 03:57:11.554948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 03:57:11.596582 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)...
Sep 5 03:57:11.596804 systemd[1]: Reloading...
Sep 5 03:57:11.628729 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 5 03:57:11.629110 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 5 03:57:11.629677 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 03:57:11.630261 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 03:57:11.631829 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 03:57:11.632247 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Sep 5 03:57:11.632395 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Sep 5 03:57:11.642392 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 03:57:11.642410 systemd-tmpfiles[1374]: Skipping /boot
Sep 5 03:57:11.670294 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 03:57:11.670316 systemd-tmpfiles[1374]: Skipping /boot
Sep 5 03:57:11.763226 zram_generator::config[1401]: No configuration found.
Sep 5 03:57:12.041411 systemd[1]: Reloading finished in 443 ms.
Sep 5 03:57:12.067803 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 03:57:12.085970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 03:57:12.098457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:12.101143 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 03:57:12.106472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 03:57:12.107700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 03:57:12.112232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 03:57:12.115611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 03:57:12.126527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 03:57:12.127509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 03:57:12.127696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 03:57:12.130761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 03:57:12.143649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 03:57:12.149051 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 03:57:12.153761 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 03:57:12.155255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:12.167845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:12.168137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 03:57:12.170329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 03:57:12.170493 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 03:57:12.170636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:12.185716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:12.186156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 03:57:12.196305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 03:57:12.197299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 03:57:12.197479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 5 03:57:12.197683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 03:57:12.206699 systemd[1]: Finished ensure-sysext.service.
Sep 5 03:57:12.209141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 03:57:12.212119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 03:57:12.213564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 03:57:12.213838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 03:57:12.226856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 03:57:12.238070 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 03:57:12.239683 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 03:57:12.241285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 03:57:12.242516 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 03:57:12.244452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 03:57:12.257919 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 03:57:12.261509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 03:57:12.267601 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 03:57:12.269301 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 03:57:12.276990 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 03:57:12.298479 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 03:57:12.299837 systemd-udevd[1472]: Using default interface naming scheme 'v255'.
Sep 5 03:57:12.305361 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 03:57:12.308212 augenrules[1497]: No rules
Sep 5 03:57:12.311065 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 03:57:12.313301 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 03:57:12.340591 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 03:57:12.351586 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 03:57:12.360293 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 03:57:12.367795 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 03:57:12.504150 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 03:57:12.506443 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 03:57:12.593695 systemd-networkd[1513]: lo: Link UP
Sep 5 03:57:12.598082 systemd-networkd[1513]: lo: Gained carrier
Sep 5 03:57:12.602349 systemd-networkd[1513]: Enumeration completed
Sep 5 03:57:12.602510 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 03:57:12.606121 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 5 03:57:12.608753 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 03:57:12.610044 systemd-resolved[1469]: Positive Trust Anchors:
Sep 5 03:57:12.611598 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 03:57:12.611744 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 03:57:12.627973 systemd-resolved[1469]: Using system hostname 'srv-86xia.gb1.brightbox.com'.
Sep 5 03:57:12.635415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 03:57:12.636300 systemd[1]: Reached target network.target - Network.
Sep 5 03:57:12.636955 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 03:57:12.638325 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 03:57:12.639162 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 03:57:12.641308 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 03:57:12.642104 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 5 03:57:12.643076 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 03:57:12.643900 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 03:57:12.644686 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 03:57:12.645818 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 03:57:12.645867 systemd[1]: Reached target paths.target - Path Units.
Sep 5 03:57:12.646934 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 03:57:12.650347 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 03:57:12.653057 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 03:57:12.658192 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 5 03:57:12.661031 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 5 03:57:12.663988 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 5 03:57:12.671039 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 03:57:12.672735 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 5 03:57:12.675969 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 5 03:57:12.677528 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 03:57:12.688725 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 03:57:12.690474 systemd[1]: Reached target basic.target - Basic System.
Sep 5 03:57:12.692345 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 03:57:12.692432 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 03:57:12.694225 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 03:57:12.700468 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 5 03:57:12.704484 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 03:57:12.708825 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 03:57:12.719402 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 03:57:12.724244 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:12.727947 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 03:57:12.728843 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 03:57:12.730838 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 5 03:57:12.738358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 03:57:12.745039 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 03:57:12.755294 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 03:57:12.763519 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 03:57:12.776846 jq[1548]: false
Sep 5 03:57:12.777888 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 03:57:12.782666 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 03:57:12.783621 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 03:57:12.787500 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 03:57:12.790337 extend-filesystems[1550]: Found /dev/vda6
Sep 5 03:57:12.796862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 03:57:12.804257 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 03:57:12.805225 extend-filesystems[1550]: Found /dev/vda9
Sep 5 03:57:12.807822 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 03:57:12.809948 oslogin_cache_refresh[1552]: Refreshing passwd entry cache
Sep 5 03:57:12.810717 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache
Sep 5 03:57:12.808164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 03:57:12.818211 extend-filesystems[1550]: Checking size of /dev/vda9
Sep 5 03:57:12.821924 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting
Sep 5 03:57:12.821924 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 5 03:57:12.819128 oslogin_cache_refresh[1552]: Failure getting users, quitting
Sep 5 03:57:12.819160 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 5 03:57:12.834727 oslogin_cache_refresh[1552]: Refreshing group entry cache
Sep 5 03:57:12.837284 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache
Sep 5 03:57:12.837479 jq[1563]: true
Sep 5 03:57:12.841888 oslogin_cache_refresh[1552]: Failure getting groups, quitting
Sep 5 03:57:12.842439 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting
Sep 5 03:57:12.842439 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 5 03:57:12.840863 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 03:57:12.841906 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 5 03:57:12.842321 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 03:57:12.864830 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 5 03:57:12.865246 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 5 03:57:12.898870 extend-filesystems[1550]: Resized partition /dev/vda9
Sep 5 03:57:12.910198 jq[1576]: true
Sep 5 03:57:12.910590 extend-filesystems[1591]: resize2fs 1.47.2 (1-Jan-2025)
Sep 5 03:57:12.928574 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Sep 5 03:57:12.936939 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 03:57:12.938894 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 03:57:12.941310 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 03:57:12.963246 update_engine[1561]: I20250905 03:57:12.962994 1561 main.cc:92] Flatcar Update Engine starting
Sep 5 03:57:12.969862 tar[1566]: linux-amd64/LICENSE
Sep 5 03:57:12.969862 tar[1566]: linux-amd64/helm
Sep 5 03:57:13.002106 dbus-daemon[1546]: [system] SELinux support is enabled
Sep 5 03:57:13.002568 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 03:57:13.010873 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 03:57:13.010936 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 03:57:13.013672 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 03:57:13.013714 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 03:57:13.017259 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 03:57:13.025645 update_engine[1561]: I20250905 03:57:13.025460 1561 update_check_scheduler.cc:74] Next update check in 7m25s
Sep 5 03:57:13.028395 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 03:57:13.141764 bash[1609]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 03:57:13.143079 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 03:57:13.151622 systemd[1]: Starting sshkeys.service...
Sep 5 03:57:13.243440 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 5 03:57:13.262483 extend-filesystems[1591]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 03:57:13.262483 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 5 03:57:13.262483 extend-filesystems[1591]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 5 03:57:13.268258 extend-filesystems[1550]: Resized filesystem in /dev/vda9
Sep 5 03:57:13.268724 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 03:57:13.269204 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 03:57:13.270297 systemd-logind[1560]: New seat seat0.
Sep 5 03:57:13.295871 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 03:57:13.309206 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 5 03:57:13.384392 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:13.352321 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 5 03:57:13.358012 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 5 03:57:13.377382 systemd-networkd[1513]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 03:57:13.377409 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 03:57:13.379739 systemd-networkd[1513]: eth0: Link UP
Sep 5 03:57:13.380140 systemd-networkd[1513]: eth0: Gained carrier
Sep 5 03:57:13.380201 systemd-networkd[1513]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 03:57:13.424298 systemd-networkd[1513]: eth0: DHCPv4 address 10.230.58.50/30, gateway 10.230.58.49 acquired from 10.230.58.49
Sep 5 03:57:13.426009 dbus-daemon[1546]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1513 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 5 03:57:13.428017 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 5 03:57:13.437679 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 5 03:57:13.823472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 03:57:13.836627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 03:57:13.920640 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 5 03:57:13.933360 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 5 03:57:13.934686 dbus-daemon[1546]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1624 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 5 03:57:13.945270 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 5 03:57:13.952146 containerd[1583]: time="2025-09-05T03:57:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 5 03:57:13.965208 containerd[1583]: time="2025-09-05T03:57:13.961489163Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 5 03:57:13.981983 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.041850262Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="28.363µs"
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.041922428Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.041958018Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042406186Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042435517Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042486564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042599390Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042621731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042969888Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.042995336Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.043015499Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044210 containerd[1583]: time="2025-09-05T03:57:14.043030129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 5 03:57:14.044776 containerd[1583]: time="2025-09-05T03:57:14.043163268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 5 03:57:14.047719 containerd[1583]: time="2025-09-05T03:57:14.047683648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 03:57:14.047890 containerd[1583]: time="2025-09-05T03:57:14.047859493Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 03:57:14.048011 containerd[1583]: time="2025-09-05T03:57:14.047984837Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 5 03:57:14.048170 containerd[1583]: time="2025-09-05T03:57:14.048143446Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 5 03:57:14.048706 containerd[1583]: time="2025-09-05T03:57:14.048675820Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 5 03:57:14.050312 containerd[1583]: time="2025-09-05T03:57:14.050284549Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058625622Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058717279Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058744815Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058765465Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058784982Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058803147Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058822629Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058841834Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058863704Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058882597Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058900076Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.058921340Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.059102423Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 5 03:57:14.060207 containerd[1583]: time="2025-09-05T03:57:14.059165724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059232235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059257543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059276138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059296976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059316672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059334236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059361291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059394258Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059415300Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059554122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059579877Z" level=info msg="Start snapshots syncer"
Sep 5 03:57:14.060701 containerd[1583]: time="2025-09-05T03:57:14.059627553Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 5 03:57:14.061092 containerd[1583]: time="2025-09-05T03:57:14.059970147Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 5 03:57:14.061092 containerd[1583]: time="2025-09-05T03:57:14.060055630Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.064989312Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065150420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065207180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065230999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065249034Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065290145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065331875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065354229Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065404058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065427567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065445780Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065490245Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065515581Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 03:57:14.066204 containerd[1583]: time="2025-09-05T03:57:14.065531148Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065547321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065563301Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065580020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065597309Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065631875Z" level=info msg="runtime interface created"
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065644281Z" level=info msg="created NRI interface"
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065658401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065677107Z" level=info msg="Connect containerd service"
Sep 5 03:57:14.066669 containerd[1583]: time="2025-09-05T03:57:14.065712522Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 03:57:14.070068 containerd[1583]: time="2025-09-05T03:57:14.070034448Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 03:57:14.076208 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 03:57:14.092567 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 03:57:14.155221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 5 03:57:14.160209 kernel: ACPI: button: Power Button [PWRF]
Sep 5 03:57:14.281232 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 03:57:14.323692 polkitd[1635]: Started polkitd version 126
Sep 5 03:57:14.479053 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 03:57:14.411502 systemd[1]: Started polkit.service - Authorization Manager.
Sep 5 03:57:14.340891 polkitd[1635]: Loading rules from directory /etc/polkit-1/rules.d
Sep 5 03:57:14.342417 polkitd[1635]: Loading rules from directory /run/polkit-1/rules.d
Sep 5 03:57:14.342487 polkitd[1635]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 5 03:57:14.342833 polkitd[1635]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 5 03:57:14.342873 polkitd[1635]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 5 03:57:14.342932 polkitd[1635]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 5 03:57:14.408271 polkitd[1635]: Finished loading, compiling and executing 2 rules
Sep 5 03:57:14.451923 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 5 03:57:14.458576 polkitd[1635]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 5 03:57:14.518300 systemd-hostnamed[1624]: Hostname set to (static)
Sep 5 03:57:14.571488 containerd[1583]: time="2025-09-05T03:57:14.571404711Z" level=info msg="Start subscribing containerd event"
Sep 5 03:57:14.571693 containerd[1583]: time="2025-09-05T03:57:14.571519031Z" level=info msg="Start recovering state"
Sep 5 03:57:14.571930 containerd[1583]: time="2025-09-05T03:57:14.571816757Z" level=info msg="Start event monitor"
Sep 5 03:57:14.571930 containerd[1583]: time="2025-09-05T03:57:14.571855732Z" level=info msg="Start cni network conf syncer for default"
Sep 5 03:57:14.571930 containerd[1583]: time="2025-09-05T03:57:14.571874486Z" level=info msg="Start streaming server"
Sep 5 03:57:14.571930 containerd[1583]: time="2025-09-05T03:57:14.571915385Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 5 03:57:14.572099 containerd[1583]: time="2025-09-05T03:57:14.571941558Z" level=info msg="runtime interface starting up..."
Sep 5 03:57:14.572099 containerd[1583]: time="2025-09-05T03:57:14.571957206Z" level=info msg="starting plugins..."
Sep 5 03:57:14.572099 containerd[1583]: time="2025-09-05T03:57:14.572018811Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 5 03:57:14.573170 containerd[1583]: time="2025-09-05T03:57:14.573120300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 03:57:14.574447 containerd[1583]: time="2025-09-05T03:57:14.574415511Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 03:57:14.574587 containerd[1583]: time="2025-09-05T03:57:14.574543458Z" level=info msg="containerd successfully booted in 0.623034s"
Sep 5 03:57:14.575504 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 03:57:14.617314 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 5 03:57:14.627850 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 03:57:14.635217 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 5 03:57:14.647294 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 03:57:14.650030 systemd[1]: Started sshd@0-10.230.58.50:22-139.178.89.65:33416.service - OpenSSH per-connection server daemon (139.178.89.65:33416).
Sep 5 03:57:14.704476 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 03:57:14.704893 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 03:57:14.714114 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 03:57:14.767166 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 03:57:14.842834 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 03:57:14.860121 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 5 03:57:14.862557 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 03:57:14.996604 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 5 03:57:15.004262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 03:57:15.006695 systemd-logind[1560]: Watching system buttons on /dev/input/event3 (Power Button)
Sep 5 03:57:15.092209 tar[1566]: linux-amd64/README.md
Sep 5 03:57:15.166081 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 03:57:15.249326 systemd-networkd[1513]: eth0: Gained IPv6LL
Sep 5 03:57:15.252911 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 5 03:57:15.260225 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 03:57:15.304267 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 03:57:15.342614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 03:57:15.419729 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 5 03:57:15.421231 systemd-networkd[1513]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e8c:24:19ff:fee6:3a32/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e8c:24:19ff:fee6:3a32/64 assigned by NDisc.
Sep 5 03:57:15.421244 systemd-networkd[1513]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 5 03:57:15.421971 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 03:57:15.495925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 03:57:15.518541 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 03:57:15.780010 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 33416 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4
Sep 5 03:57:15.784520 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 03:57:15.806015 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 03:57:15.809477 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 03:57:15.832695 systemd-logind[1560]: New session 1 of user core.
Sep 5 03:57:15.857704 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 03:57:15.864546 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 03:57:15.886885 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 03:57:15.894688 systemd-logind[1560]: New session c1 of user core.
Sep 5 03:57:16.186717 systemd[1714]: Queued start job for default target default.target.
Sep 5 03:57:16.198619 systemd[1714]: Created slice app.slice - User Application Slice.
Sep 5 03:57:16.199304 systemd[1714]: Reached target paths.target - Paths.
Sep 5 03:57:16.199998 systemd[1714]: Reached target timers.target - Timers.
Sep 5 03:57:16.204346 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 03:57:16.229774 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 03:57:16.230002 systemd[1714]: Reached target sockets.target - Sockets.
Sep 5 03:57:16.230226 systemd[1714]: Reached target basic.target - Basic System.
Sep 5 03:57:16.230387 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 03:57:16.231279 systemd[1714]: Reached target default.target - Main User Target.
Sep 5 03:57:16.231370 systemd[1714]: Startup finished in 321ms.
Sep 5 03:57:16.242547 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 03:57:16.283412 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:16.288435 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:16.465519 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection.
Sep 5 03:57:17.060792 systemd[1]: Started sshd@1-10.230.58.50:22-139.178.89.65:33426.service - OpenSSH per-connection server daemon (139.178.89.65:33426).
Sep 5 03:57:17.750943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 03:57:17.764895 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 03:57:17.989303 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 33426 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4
Sep 5 03:57:17.991524 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 03:57:18.001642 systemd-logind[1560]: New session 2 of user core.
Sep 5 03:57:18.012430 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 03:57:18.314281 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:18.331507 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Sep 5 03:57:18.622387 sshd[1740]: Connection closed by 139.178.89.65 port 33426
Sep 5 03:57:18.624225 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
Sep 5 03:57:18.631891 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit.
Sep 5 03:57:18.633175 systemd[1]: sshd@1-10.230.58.50:22-139.178.89.65:33426.service: Deactivated successfully.
Sep 5 03:57:18.638032 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 03:57:18.642174 systemd-logind[1560]: Removed session 2.
Sep 5 03:57:18.703548 kubelet[1735]: E0905 03:57:18.703455 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 03:57:18.707144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 03:57:18.707469 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 03:57:18.708606 systemd[1]: kubelet.service: Consumed 2.381s CPU time, 264.8M memory peak.
Sep 5 03:57:18.801957 systemd[1]: Started sshd@2-10.230.58.50:22-139.178.89.65:33442.service - OpenSSH per-connection server daemon (139.178.89.65:33442).
Sep 5 03:57:19.823422 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 33442 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4
Sep 5 03:57:19.825719 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 03:57:19.835130 systemd-logind[1560]: New session 3 of user core.
Sep 5 03:57:19.851652 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 03:57:19.933144 login[1688]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 03:57:19.940286 login[1687]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 03:57:19.941855 systemd-logind[1560]: New session 4 of user core. Sep 5 03:57:19.951619 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 03:57:19.958333 systemd-logind[1560]: New session 5 of user core. Sep 5 03:57:19.963578 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 03:57:20.868306 sshd[1753]: Connection closed by 139.178.89.65 port 33442 Sep 5 03:57:20.869392 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Sep 5 03:57:20.875003 systemd[1]: sshd@2-10.230.58.50:22-139.178.89.65:33442.service: Deactivated successfully. Sep 5 03:57:20.878353 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 03:57:20.880536 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Sep 5 03:57:20.883780 systemd-logind[1560]: Removed session 3. 
Sep 5 03:57:22.358226 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 5 03:57:22.358439 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 5 03:57:22.369239 coreos-metadata[1621]: Sep 05 03:57:22.369 WARN failed to locate config-drive, using the metadata service API instead Sep 5 03:57:22.372717 coreos-metadata[1545]: Sep 05 03:57:22.372 WARN failed to locate config-drive, using the metadata service API instead Sep 5 03:57:22.396001 coreos-metadata[1545]: Sep 05 03:57:22.395 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 5 03:57:22.396209 coreos-metadata[1621]: Sep 05 03:57:22.395 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 5 03:57:22.403770 coreos-metadata[1545]: Sep 05 03:57:22.403 INFO Fetch failed with 404: resource not found Sep 5 03:57:22.403857 coreos-metadata[1545]: Sep 05 03:57:22.403 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 5 03:57:22.404506 coreos-metadata[1545]: Sep 05 03:57:22.404 INFO Fetch successful Sep 5 03:57:22.404671 coreos-metadata[1545]: Sep 05 03:57:22.404 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 5 03:57:22.418608 coreos-metadata[1621]: Sep 05 03:57:22.418 INFO Fetch successful Sep 5 03:57:22.418893 coreos-metadata[1545]: Sep 05 03:57:22.418 INFO Fetch successful Sep 5 03:57:22.419003 coreos-metadata[1621]: Sep 05 03:57:22.418 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 5 03:57:22.419358 coreos-metadata[1545]: Sep 05 03:57:22.419 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 5 03:57:22.430819 coreos-metadata[1545]: Sep 05 03:57:22.430 INFO Fetch successful Sep 5 03:57:22.431099 coreos-metadata[1545]: Sep 05 03:57:22.431 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 5 03:57:22.447427 coreos-metadata[1545]: Sep 05 03:57:22.447 INFO Fetch 
successful Sep 5 03:57:22.447745 coreos-metadata[1545]: Sep 05 03:57:22.447 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 5 03:57:22.448469 coreos-metadata[1621]: Sep 05 03:57:22.448 INFO Fetch successful Sep 5 03:57:22.451428 unknown[1621]: wrote ssh authorized keys file for user: core Sep 5 03:57:22.465814 coreos-metadata[1545]: Sep 05 03:57:22.465 INFO Fetch successful Sep 5 03:57:22.480314 update-ssh-keys[1791]: Updated "/home/core/.ssh/authorized_keys" Sep 5 03:57:22.483101 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 03:57:22.488439 systemd[1]: Finished sshkeys.service. Sep 5 03:57:22.507046 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 03:57:22.508304 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 03:57:22.508533 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 03:57:22.508819 systemd[1]: Startup finished in 3.987s (kernel) + 17.867s (initrd) + 14.032s (userspace) = 35.887s. Sep 5 03:57:28.958154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 03:57:28.960725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:57:29.264771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 03:57:29.277004 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 03:57:29.340062 kubelet[1808]: E0905 03:57:29.339992 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 03:57:29.345303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 03:57:29.345584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 03:57:29.346549 systemd[1]: kubelet.service: Consumed 320ms CPU time, 108.9M memory peak. Sep 5 03:57:31.036560 systemd[1]: Started sshd@3-10.230.58.50:22-139.178.89.65:39410.service - OpenSSH per-connection server daemon (139.178.89.65:39410). Sep 5 03:57:31.965850 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 39410 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:57:31.967848 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:57:31.976801 systemd-logind[1560]: New session 6 of user core. Sep 5 03:57:31.983466 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 03:57:32.618138 sshd[1818]: Connection closed by 139.178.89.65 port 39410 Sep 5 03:57:32.620108 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Sep 5 03:57:32.626334 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Sep 5 03:57:32.627060 systemd[1]: sshd@3-10.230.58.50:22-139.178.89.65:39410.service: Deactivated successfully. Sep 5 03:57:32.630729 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 03:57:32.633995 systemd-logind[1560]: Removed session 6. 
Sep 5 03:57:32.787283 systemd[1]: Started sshd@4-10.230.58.50:22-139.178.89.65:39424.service - OpenSSH per-connection server daemon (139.178.89.65:39424). Sep 5 03:57:33.763584 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 39424 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:57:33.765577 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:57:33.775242 systemd-logind[1560]: New session 7 of user core. Sep 5 03:57:33.785769 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 03:57:34.421346 sshd[1827]: Connection closed by 139.178.89.65 port 39424 Sep 5 03:57:34.422761 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Sep 5 03:57:34.429274 systemd[1]: sshd@4-10.230.58.50:22-139.178.89.65:39424.service: Deactivated successfully. Sep 5 03:57:34.432293 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 03:57:34.434264 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Sep 5 03:57:34.435928 systemd-logind[1560]: Removed session 7. Sep 5 03:57:34.587375 systemd[1]: Started sshd@5-10.230.58.50:22-139.178.89.65:39430.service - OpenSSH per-connection server daemon (139.178.89.65:39430). Sep 5 03:57:35.534593 sshd[1833]: Accepted publickey for core from 139.178.89.65 port 39430 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:57:35.536785 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:57:35.545258 systemd-logind[1560]: New session 8 of user core. Sep 5 03:57:35.555444 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 03:57:36.191437 sshd[1836]: Connection closed by 139.178.89.65 port 39430 Sep 5 03:57:36.191273 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Sep 5 03:57:36.197501 systemd[1]: sshd@5-10.230.58.50:22-139.178.89.65:39430.service: Deactivated successfully. 
Sep 5 03:57:36.197765 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Sep 5 03:57:36.200024 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 03:57:36.203138 systemd-logind[1560]: Removed session 8. Sep 5 03:57:36.356515 systemd[1]: Started sshd@6-10.230.58.50:22-139.178.89.65:39440.service - OpenSSH per-connection server daemon (139.178.89.65:39440). Sep 5 03:57:37.313985 sshd[1842]: Accepted publickey for core from 139.178.89.65 port 39440 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:57:37.315951 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:57:37.323293 systemd-logind[1560]: New session 9 of user core. Sep 5 03:57:37.331428 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 03:57:37.829090 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 03:57:37.829596 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 03:57:37.845411 sudo[1846]: pam_unix(sudo:session): session closed for user root Sep 5 03:57:37.991808 sshd[1845]: Connection closed by 139.178.89.65 port 39440 Sep 5 03:57:37.992996 sshd-session[1842]: pam_unix(sshd:session): session closed for user core Sep 5 03:57:38.000004 systemd[1]: sshd@6-10.230.58.50:22-139.178.89.65:39440.service: Deactivated successfully. Sep 5 03:57:38.002452 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 03:57:38.003776 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Sep 5 03:57:38.006652 systemd-logind[1560]: Removed session 9. Sep 5 03:57:38.149093 systemd[1]: Started sshd@7-10.230.58.50:22-139.178.89.65:39444.service - OpenSSH per-connection server daemon (139.178.89.65:39444). 
Sep 5 03:57:39.098695 sshd[1852]: Accepted publickey for core from 139.178.89.65 port 39444 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:57:39.100644 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:57:39.109327 systemd-logind[1560]: New session 10 of user core. Sep 5 03:57:39.117756 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 03:57:39.576160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 03:57:39.579528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:57:39.591368 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 03:57:39.591817 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 03:57:39.602089 sudo[1858]: pam_unix(sudo:session): session closed for user root Sep 5 03:57:39.612739 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 03:57:39.613162 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 03:57:39.632558 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 03:57:39.702090 augenrules[1882]: No rules Sep 5 03:57:39.704695 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 03:57:39.705400 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 03:57:39.706710 sudo[1857]: pam_unix(sudo:session): session closed for user root Sep 5 03:57:39.784338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 03:57:39.802843 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 03:57:39.855202 sshd[1855]: Connection closed by 139.178.89.65 port 39444 Sep 5 03:57:39.854094 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Sep 5 03:57:39.863496 systemd[1]: sshd@7-10.230.58.50:22-139.178.89.65:39444.service: Deactivated successfully. Sep 5 03:57:39.867763 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 03:57:39.872576 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Sep 5 03:57:39.874991 systemd-logind[1560]: Removed session 10. Sep 5 03:57:39.881632 kubelet[1892]: E0905 03:57:39.881541 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 03:57:39.884914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 03:57:39.885157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 03:57:39.886387 systemd[1]: kubelet.service: Consumed 225ms CPU time, 109.1M memory peak. Sep 5 03:57:40.016618 systemd[1]: Started sshd@8-10.230.58.50:22-139.178.89.65:39448.service - OpenSSH per-connection server daemon (139.178.89.65:39448). Sep 5 03:57:41.015172 sshd[1903]: Accepted publickey for core from 139.178.89.65 port 39448 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:57:41.017121 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:57:41.024659 systemd-logind[1560]: New session 11 of user core. Sep 5 03:57:41.031634 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 5 03:57:41.576360 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 03:57:41.576813 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 03:57:42.505834 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 03:57:42.538032 (dockerd)[1925]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 03:57:43.190103 dockerd[1925]: time="2025-09-05T03:57:43.189982604Z" level=info msg="Starting up" Sep 5 03:57:43.194573 dockerd[1925]: time="2025-09-05T03:57:43.194259386Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 5 03:57:43.235533 dockerd[1925]: time="2025-09-05T03:57:43.235333184Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 5 03:57:43.298949 dockerd[1925]: time="2025-09-05T03:57:43.298760259Z" level=info msg="Loading containers: start." Sep 5 03:57:43.325312 kernel: Initializing XFRM netlink socket Sep 5 03:57:43.621367 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Sep 5 03:57:43.684173 systemd-networkd[1513]: docker0: Link UP Sep 5 03:57:43.689207 dockerd[1925]: time="2025-09-05T03:57:43.689038552Z" level=info msg="Loading containers: done." Sep 5 03:57:43.727111 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck326761837-merged.mount: Deactivated successfully. 
Sep 5 03:57:43.730756 dockerd[1925]: time="2025-09-05T03:57:43.727945472Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 03:57:43.730756 dockerd[1925]: time="2025-09-05T03:57:43.728087765Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 5 03:57:43.730756 dockerd[1925]: time="2025-09-05T03:57:43.728273850Z" level=info msg="Initializing buildkit" Sep 5 03:57:43.729653 systemd-timesyncd[1487]: Contacted time server [2a02:e00:ffe9:11c::1]:123 (2.flatcar.pool.ntp.org). Sep 5 03:57:43.729773 systemd-timesyncd[1487]: Initial clock synchronization to Fri 2025-09-05 03:57:43.767802 UTC. Sep 5 03:57:43.759943 dockerd[1925]: time="2025-09-05T03:57:43.759838281Z" level=info msg="Completed buildkit initialization" Sep 5 03:57:43.771451 dockerd[1925]: time="2025-09-05T03:57:43.771337328Z" level=info msg="Daemon has completed initialization" Sep 5 03:57:43.772217 dockerd[1925]: time="2025-09-05T03:57:43.771700216Z" level=info msg="API listen on /run/docker.sock" Sep 5 03:57:43.772754 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 03:57:45.090883 containerd[1583]: time="2025-09-05T03:57:45.090584739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 5 03:57:45.534162 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 5 03:57:45.983954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094648829.mount: Deactivated successfully. 
Sep 5 03:57:48.438025 containerd[1583]: time="2025-09-05T03:57:48.437919726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:48.439448 containerd[1583]: time="2025-09-05T03:57:48.439403171Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800695" Sep 5 03:57:48.440820 containerd[1583]: time="2025-09-05T03:57:48.440137236Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:48.444500 containerd[1583]: time="2025-09-05T03:57:48.444458698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:48.446624 containerd[1583]: time="2025-09-05T03:57:48.446584598Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 3.354741521s" Sep 5 03:57:48.447172 containerd[1583]: time="2025-09-05T03:57:48.447141465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 5 03:57:48.448343 containerd[1583]: time="2025-09-05T03:57:48.448299703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 5 03:57:50.075937 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 5 03:57:50.080100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:57:51.011868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 03:57:51.029389 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 03:57:51.207504 kubelet[2208]: E0905 03:57:51.207420 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 03:57:51.213940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 03:57:51.214470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 03:57:51.215040 systemd[1]: kubelet.service: Consumed 978ms CPU time, 110.2M memory peak. 
Sep 5 03:57:51.505746 containerd[1583]: time="2025-09-05T03:57:51.505680215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:51.510602 containerd[1583]: time="2025-09-05T03:57:51.510364045Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784136" Sep 5 03:57:51.511416 containerd[1583]: time="2025-09-05T03:57:51.511375993Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:51.517770 containerd[1583]: time="2025-09-05T03:57:51.517679668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:51.519581 containerd[1583]: time="2025-09-05T03:57:51.519336513Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 3.070989504s" Sep 5 03:57:51.519581 containerd[1583]: time="2025-09-05T03:57:51.519394693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 5 03:57:51.520758 containerd[1583]: time="2025-09-05T03:57:51.520709333Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 5 03:57:53.710662 containerd[1583]: time="2025-09-05T03:57:53.710515376Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:53.729409 containerd[1583]: time="2025-09-05T03:57:53.729345166Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175044" Sep 5 03:57:53.730737 containerd[1583]: time="2025-09-05T03:57:53.730677103Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:53.736267 containerd[1583]: time="2025-09-05T03:57:53.735976204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:53.737503 containerd[1583]: time="2025-09-05T03:57:53.737468634Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.216543441s" Sep 5 03:57:53.737641 containerd[1583]: time="2025-09-05T03:57:53.737613881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 5 03:57:53.738835 containerd[1583]: time="2025-09-05T03:57:53.738789093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 5 03:57:55.583822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374508452.mount: Deactivated successfully. 
Sep 5 03:57:56.593217 containerd[1583]: time="2025-09-05T03:57:56.593104194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:56.595242 containerd[1583]: time="2025-09-05T03:57:56.594955490Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897178" Sep 5 03:57:56.597804 containerd[1583]: time="2025-09-05T03:57:56.597725680Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:56.602891 containerd[1583]: time="2025-09-05T03:57:56.602818050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:56.603956 containerd[1583]: time="2025-09-05T03:57:56.603670375Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.864830186s" Sep 5 03:57:56.603956 containerd[1583]: time="2025-09-05T03:57:56.603723318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 5 03:57:56.604547 containerd[1583]: time="2025-09-05T03:57:56.604501201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 03:57:57.300990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450604311.mount: Deactivated successfully. 
Sep 5 03:57:58.625098 update_engine[1561]: I20250905 03:57:58.624423 1561 update_attempter.cc:509] Updating boot flags... Sep 5 03:57:59.146241 containerd[1583]: time="2025-09-05T03:57:59.144967236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:59.148316 containerd[1583]: time="2025-09-05T03:57:59.146415066Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 5 03:57:59.148316 containerd[1583]: time="2025-09-05T03:57:59.146745254Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:59.151377 containerd[1583]: time="2025-09-05T03:57:59.151335578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:57:59.152999 containerd[1583]: time="2025-09-05T03:57:59.152946271Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.548294089s" Sep 5 03:57:59.153169 containerd[1583]: time="2025-09-05T03:57:59.153135620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 03:57:59.154118 containerd[1583]: time="2025-09-05T03:57:59.154065183Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 03:57:59.876473 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2045141830.mount: Deactivated successfully. Sep 5 03:57:59.884559 containerd[1583]: time="2025-09-05T03:57:59.884472745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 03:57:59.885738 containerd[1583]: time="2025-09-05T03:57:59.885692441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 5 03:57:59.886687 containerd[1583]: time="2025-09-05T03:57:59.886576931Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 03:57:59.889844 containerd[1583]: time="2025-09-05T03:57:59.889778954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 03:57:59.891227 containerd[1583]: time="2025-09-05T03:57:59.890807621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 736.5883ms" Sep 5 03:57:59.891227 containerd[1583]: time="2025-09-05T03:57:59.890864774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 03:57:59.891599 containerd[1583]: time="2025-09-05T03:57:59.891531977Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 03:58:00.736881 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390276055.mount: Deactivated successfully. Sep 5 03:58:01.446292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 03:58:01.450243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:58:01.667427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 03:58:01.682830 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 03:58:01.766766 kubelet[2355]: E0905 03:58:01.766556 2355 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 03:58:01.771621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 03:58:01.772083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 03:58:01.773374 systemd[1]: kubelet.service: Consumed 244ms CPU time, 106.9M memory peak. 
Sep 5 03:58:09.347810 containerd[1583]: time="2025-09-05T03:58:09.346324952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:09.347810 containerd[1583]: time="2025-09-05T03:58:09.347763310Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 5 03:58:09.348667 containerd[1583]: time="2025-09-05T03:58:09.348618241Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:09.351785 containerd[1583]: time="2025-09-05T03:58:09.351751103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:09.353363 containerd[1583]: time="2025-09-05T03:58:09.353317082Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.461747409s" Sep 5 03:58:09.353447 containerd[1583]: time="2025-09-05T03:58:09.353367918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 5 03:58:11.825558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 5 03:58:11.830393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:58:12.091390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 03:58:12.101744 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 03:58:12.172811 kubelet[2397]: E0905 03:58:12.172725 2397 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 03:58:12.176540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 03:58:12.176826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 03:58:12.177754 systemd[1]: kubelet.service: Consumed 231ms CPU time, 110M memory peak. Sep 5 03:58:13.240148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 03:58:13.240451 systemd[1]: kubelet.service: Consumed 231ms CPU time, 110M memory peak. Sep 5 03:58:13.248420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:58:13.285152 systemd[1]: Reload requested from client PID 2411 ('systemctl') (unit session-11.scope)... Sep 5 03:58:13.285241 systemd[1]: Reloading... Sep 5 03:58:13.472216 zram_generator::config[2456]: No configuration found. Sep 5 03:58:13.829912 systemd[1]: Reloading finished in 543 ms. Sep 5 03:58:13.921483 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 03:58:13.921679 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 03:58:13.922722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 03:58:13.922837 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.3M memory peak. Sep 5 03:58:13.926346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:58:14.192584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 03:58:14.207759 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 03:58:14.274208 kubelet[2524]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 03:58:14.274208 kubelet[2524]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 03:58:14.274208 kubelet[2524]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 03:58:14.274843 kubelet[2524]: I0905 03:58:14.274418 2524 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 03:58:14.862244 kubelet[2524]: I0905 03:58:14.861838 2524 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 03:58:14.862244 kubelet[2524]: I0905 03:58:14.861882 2524 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 03:58:14.862604 kubelet[2524]: I0905 03:58:14.862290 2524 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 03:58:14.899394 kubelet[2524]: E0905 03:58:14.899313 2524 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.58.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:14.900967 kubelet[2524]: I0905 03:58:14.900707 2524 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 03:58:14.928436 kubelet[2524]: I0905 03:58:14.928355 2524 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 03:58:14.940210 kubelet[2524]: I0905 03:58:14.940090 2524 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 03:58:14.943643 kubelet[2524]: I0905 03:58:14.943599 2524 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 03:58:14.944148 kubelet[2524]: I0905 03:58:14.943796 2524 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-86xia.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"conta
iner","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 03:58:14.944719 kubelet[2524]: I0905 03:58:14.944692 2524 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 03:58:14.944839 kubelet[2524]: I0905 03:58:14.944821 2524 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 03:58:14.946690 kubelet[2524]: I0905 03:58:14.946275 2524 state_mem.go:36] "Initialized new in-memory state store" Sep 5 03:58:14.949898 kubelet[2524]: I0905 03:58:14.949874 2524 kubelet.go:446] "Attempting to sync node with API server" Sep 5 03:58:14.950085 kubelet[2524]: I0905 03:58:14.950061 2524 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 03:58:14.951356 kubelet[2524]: I0905 03:58:14.951333 2524 kubelet.go:352] "Adding apiserver pod source" Sep 5 03:58:14.951513 kubelet[2524]: I0905 03:58:14.951491 2524 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 03:58:14.960406 kubelet[2524]: I0905 03:58:14.959906 2524 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 03:58:14.963857 kubelet[2524]: I0905 03:58:14.963825 2524 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 03:58:14.964927 kubelet[2524]: W0905 03:58:14.964901 2524 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 5 03:58:14.966343 kubelet[2524]: I0905 03:58:14.966306 2524 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 03:58:14.966456 kubelet[2524]: I0905 03:58:14.966370 2524 server.go:1287] "Started kubelet" Sep 5 03:58:14.967237 kubelet[2524]: W0905 03:58:14.966633 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-86xia.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:14.967237 kubelet[2524]: E0905 03:58:14.966754 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-86xia.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:14.968387 kubelet[2524]: W0905 03:58:14.967664 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.58.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:14.968387 kubelet[2524]: E0905 03:58:14.967737 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.58.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:14.968924 kubelet[2524]: I0905 03:58:14.968874 2524 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 03:58:14.970492 kubelet[2524]: I0905 03:58:14.970467 2524 server.go:479] "Adding debug handlers to kubelet server" Sep 5 03:58:14.974138 kubelet[2524]: I0905 
03:58:14.974067 2524 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 03:58:14.974707 kubelet[2524]: I0905 03:58:14.974682 2524 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 03:58:14.975222 kubelet[2524]: I0905 03:58:14.975150 2524 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 03:58:14.987439 kubelet[2524]: E0905 03:58:14.979916 2524 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.58.50:6443/api/v1/namespaces/default/events\": dial tcp 10.230.58.50:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-86xia.gb1.brightbox.com.186246d741f404e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-86xia.gb1.brightbox.com,UID:srv-86xia.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-86xia.gb1.brightbox.com,},FirstTimestamp:2025-09-05 03:58:14.966338788 +0000 UTC m=+0.752290214,LastTimestamp:2025-09-05 03:58:14.966338788 +0000 UTC m=+0.752290214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-86xia.gb1.brightbox.com,}" Sep 5 03:58:14.987439 kubelet[2524]: I0905 03:58:14.986504 2524 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 03:58:14.987439 kubelet[2524]: I0905 03:58:14.987271 2524 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 03:58:14.987700 kubelet[2524]: E0905 03:58:14.987591 2524 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-86xia.gb1.brightbox.com\" not found" Sep 5 03:58:14.991320 kubelet[2524]: E0905 03:58:14.991244 2524 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-86xia.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.50:6443: connect: connection refused" interval="200ms" Sep 5 03:58:14.991833 kubelet[2524]: I0905 03:58:14.991804 2524 factory.go:221] Registration of the systemd container factory successfully Sep 5 03:58:14.992015 kubelet[2524]: I0905 03:58:14.991985 2524 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 03:58:14.992118 kubelet[2524]: I0905 03:58:14.992082 2524 reconciler.go:26] "Reconciler: start to sync state" Sep 5 03:58:14.992277 kubelet[2524]: I0905 03:58:14.992248 2524 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 03:58:14.999490 kubelet[2524]: I0905 03:58:14.999462 2524 factory.go:221] Registration of the containerd container factory successfully Sep 5 03:58:15.018266 kubelet[2524]: I0905 03:58:15.017849 2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 03:58:15.025658 kubelet[2524]: I0905 03:58:15.025604 2524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 03:58:15.025658 kubelet[2524]: I0905 03:58:15.025659 2524 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 03:58:15.025821 kubelet[2524]: I0905 03:58:15.025728 2524 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 03:58:15.025821 kubelet[2524]: I0905 03:58:15.025744 2524 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 03:58:15.025997 kubelet[2524]: E0905 03:58:15.025841 2524 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 03:58:15.034532 kubelet[2524]: E0905 03:58:15.034253 2524 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 03:58:15.034681 kubelet[2524]: W0905 03:58:15.034594 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.58.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:15.034763 kubelet[2524]: E0905 03:58:15.034731 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.58.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:15.039305 kubelet[2524]: W0905 03:58:15.038742 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.58.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:15.039305 kubelet[2524]: E0905 03:58:15.038832 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.58.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:15.055290 kubelet[2524]: I0905 
03:58:15.055248 2524 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 03:58:15.055290 kubelet[2524]: I0905 03:58:15.055275 2524 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 03:58:15.055456 kubelet[2524]: I0905 03:58:15.055341 2524 state_mem.go:36] "Initialized new in-memory state store" Sep 5 03:58:15.057335 kubelet[2524]: I0905 03:58:15.057298 2524 policy_none.go:49] "None policy: Start" Sep 5 03:58:15.057420 kubelet[2524]: I0905 03:58:15.057347 2524 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 03:58:15.057420 kubelet[2524]: I0905 03:58:15.057383 2524 state_mem.go:35] "Initializing new in-memory state store" Sep 5 03:58:15.068912 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 03:58:15.084461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 03:58:15.088635 kubelet[2524]: E0905 03:58:15.088473 2524 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-86xia.gb1.brightbox.com\" not found" Sep 5 03:58:15.090725 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 03:58:15.103636 kubelet[2524]: I0905 03:58:15.103603 2524 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 03:58:15.103962 kubelet[2524]: I0905 03:58:15.103921 2524 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 03:58:15.104060 kubelet[2524]: I0905 03:58:15.103956 2524 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 03:58:15.105358 kubelet[2524]: I0905 03:58:15.105156 2524 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 03:58:15.109006 kubelet[2524]: E0905 03:58:15.108972 2524 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" Sep 5 03:58:15.109091 kubelet[2524]: E0905 03:58:15.109065 2524 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-86xia.gb1.brightbox.com\" not found" Sep 5 03:58:15.141529 systemd[1]: Created slice kubepods-burstable-pod477b5fc9c2b57f6ef95c3a67d3815c08.slice - libcontainer container kubepods-burstable-pod477b5fc9c2b57f6ef95c3a67d3815c08.slice. Sep 5 03:58:15.161692 kubelet[2524]: E0905 03:58:15.161631 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.168404 systemd[1]: Created slice kubepods-burstable-pod4e57636cc40ac4ed3da88c60405dd8a4.slice - libcontainer container kubepods-burstable-pod4e57636cc40ac4ed3da88c60405dd8a4.slice. Sep 5 03:58:15.172637 kubelet[2524]: E0905 03:58:15.172347 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.176059 systemd[1]: Created slice kubepods-burstable-pod1fe080fe2c20826fecb9895a23d9ad64.slice - libcontainer container kubepods-burstable-pod1fe080fe2c20826fecb9895a23d9ad64.slice. 
Sep 5 03:58:15.178612 kubelet[2524]: E0905 03:58:15.178587 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.192501 kubelet[2524]: E0905 03:58:15.192411 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-86xia.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.50:6443: connect: connection refused" interval="400ms" Sep 5 03:58:15.193074 kubelet[2524]: I0905 03:58:15.193043 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe080fe2c20826fecb9895a23d9ad64-kubeconfig\") pod \"kube-scheduler-srv-86xia.gb1.brightbox.com\" (UID: \"1fe080fe2c20826fecb9895a23d9ad64\") " pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.193235 kubelet[2524]: I0905 03:58:15.193209 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-ca-certs\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.193385 kubelet[2524]: I0905 03:58:15.193356 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-flexvolume-dir\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.193518 kubelet[2524]: I0905 03:58:15.193494 2524 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-k8s-certs\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.193654 kubelet[2524]: I0905 03:58:15.193627 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.193824 kubelet[2524]: I0905 03:58:15.193799 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/477b5fc9c2b57f6ef95c3a67d3815c08-ca-certs\") pod \"kube-apiserver-srv-86xia.gb1.brightbox.com\" (UID: \"477b5fc9c2b57f6ef95c3a67d3815c08\") " pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.194081 kubelet[2524]: I0905 03:58:15.193975 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/477b5fc9c2b57f6ef95c3a67d3815c08-k8s-certs\") pod \"kube-apiserver-srv-86xia.gb1.brightbox.com\" (UID: \"477b5fc9c2b57f6ef95c3a67d3815c08\") " pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.194081 kubelet[2524]: I0905 03:58:15.194013 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/477b5fc9c2b57f6ef95c3a67d3815c08-usr-share-ca-certificates\") pod \"kube-apiserver-srv-86xia.gb1.brightbox.com\" (UID: 
\"477b5fc9c2b57f6ef95c3a67d3815c08\") " pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.194081 kubelet[2524]: I0905 03:58:15.194041 2524 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-kubeconfig\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.208348 kubelet[2524]: I0905 03:58:15.208306 2524 kubelet_node_status.go:75] "Attempting to register node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.208786 kubelet[2524]: E0905 03:58:15.208731 2524 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.58.50:6443/api/v1/nodes\": dial tcp 10.230.58.50:6443: connect: connection refused" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.412301 kubelet[2524]: I0905 03:58:15.411963 2524 kubelet_node_status.go:75] "Attempting to register node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.414232 kubelet[2524]: E0905 03:58:15.414158 2524 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.58.50:6443/api/v1/nodes\": dial tcp 10.230.58.50:6443: connect: connection refused" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.465060 containerd[1583]: time="2025-09-05T03:58:15.464553816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-86xia.gb1.brightbox.com,Uid:477b5fc9c2b57f6ef95c3a67d3815c08,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:15.482257 containerd[1583]: time="2025-09-05T03:58:15.482209041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-86xia.gb1.brightbox.com,Uid:1fe080fe2c20826fecb9895a23d9ad64,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:15.483468 containerd[1583]: time="2025-09-05T03:58:15.483424704Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-86xia.gb1.brightbox.com,Uid:4e57636cc40ac4ed3da88c60405dd8a4,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:15.593723 kubelet[2524]: E0905 03:58:15.593617 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-86xia.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.50:6443: connect: connection refused" interval="800ms" Sep 5 03:58:15.653615 containerd[1583]: time="2025-09-05T03:58:15.650930477Z" level=info msg="connecting to shim d14bdcefeb4b42e4ea4f29faa70a136a8b41a07b43696249d095ffa375f42810" address="unix:///run/containerd/s/ca3a84a8c394451d9903db559eb1a6ba7f876dcb81573944d017fa3b863e6257" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:15.668397 containerd[1583]: time="2025-09-05T03:58:15.667715334Z" level=info msg="connecting to shim b61999f3a3d6f6ef8f21d564643cec0f30212231726171d17bdf7ecb32ae8785" address="unix:///run/containerd/s/29c5dd362bdb1d11dcf66b590a4a7627cd18e37e5291471d996b124c8d60c7b6" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:15.668982 containerd[1583]: time="2025-09-05T03:58:15.668874726Z" level=info msg="connecting to shim fc8bacf654ce261888910a0135b01a26a2901ddf5d5a53ee23cae827d9ba5435" address="unix:///run/containerd/s/a42320d9291027937df8a2a09710c89fb666b4fb43ea5787325ed30d6e5b110c" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:15.821950 kubelet[2524]: I0905 03:58:15.821329 2524 kubelet_node_status.go:75] "Attempting to register node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.823439 systemd[1]: Started cri-containerd-b61999f3a3d6f6ef8f21d564643cec0f30212231726171d17bdf7ecb32ae8785.scope - libcontainer container b61999f3a3d6f6ef8f21d564643cec0f30212231726171d17bdf7ecb32ae8785. 
Sep 5 03:58:15.825543 systemd[1]: Started cri-containerd-fc8bacf654ce261888910a0135b01a26a2901ddf5d5a53ee23cae827d9ba5435.scope - libcontainer container fc8bacf654ce261888910a0135b01a26a2901ddf5d5a53ee23cae827d9ba5435. Sep 5 03:58:15.828235 kubelet[2524]: E0905 03:58:15.825744 2524 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.58.50:6443/api/v1/nodes\": dial tcp 10.230.58.50:6443: connect: connection refused" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:15.835474 systemd[1]: Started cri-containerd-d14bdcefeb4b42e4ea4f29faa70a136a8b41a07b43696249d095ffa375f42810.scope - libcontainer container d14bdcefeb4b42e4ea4f29faa70a136a8b41a07b43696249d095ffa375f42810. Sep 5 03:58:15.902438 kubelet[2524]: W0905 03:58:15.902360 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.58.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:15.902848 kubelet[2524]: E0905 03:58:15.902781 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.58.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:15.984864 containerd[1583]: time="2025-09-05T03:58:15.983768123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-86xia.gb1.brightbox.com,Uid:477b5fc9c2b57f6ef95c3a67d3815c08,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc8bacf654ce261888910a0135b01a26a2901ddf5d5a53ee23cae827d9ba5435\"" Sep 5 03:58:15.993637 containerd[1583]: time="2025-09-05T03:58:15.993442312Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-86xia.gb1.brightbox.com,Uid:4e57636cc40ac4ed3da88c60405dd8a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d14bdcefeb4b42e4ea4f29faa70a136a8b41a07b43696249d095ffa375f42810\"" Sep 5 03:58:15.994502 containerd[1583]: time="2025-09-05T03:58:15.994470172Z" level=info msg="CreateContainer within sandbox \"fc8bacf654ce261888910a0135b01a26a2901ddf5d5a53ee23cae827d9ba5435\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 03:58:16.006743 containerd[1583]: time="2025-09-05T03:58:16.006653342Z" level=info msg="CreateContainer within sandbox \"d14bdcefeb4b42e4ea4f29faa70a136a8b41a07b43696249d095ffa375f42810\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 03:58:16.019847 containerd[1583]: time="2025-09-05T03:58:16.019413499Z" level=info msg="Container a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:16.021765 containerd[1583]: time="2025-09-05T03:58:16.021705493Z" level=info msg="Container a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:16.037512 containerd[1583]: time="2025-09-05T03:58:16.037467317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-86xia.gb1.brightbox.com,Uid:1fe080fe2c20826fecb9895a23d9ad64,Namespace:kube-system,Attempt:0,} returns sandbox id \"b61999f3a3d6f6ef8f21d564643cec0f30212231726171d17bdf7ecb32ae8785\"" Sep 5 03:58:16.045353 containerd[1583]: time="2025-09-05T03:58:16.045315151Z" level=info msg="CreateContainer within sandbox \"d14bdcefeb4b42e4ea4f29faa70a136a8b41a07b43696249d095ffa375f42810\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6\"" Sep 5 03:58:16.048984 containerd[1583]: time="2025-09-05T03:58:16.048244894Z" level=info msg="CreateContainer within 
sandbox \"b61999f3a3d6f6ef8f21d564643cec0f30212231726171d17bdf7ecb32ae8785\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 03:58:16.048984 containerd[1583]: time="2025-09-05T03:58:16.048855172Z" level=info msg="CreateContainer within sandbox \"fc8bacf654ce261888910a0135b01a26a2901ddf5d5a53ee23cae827d9ba5435\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9\"" Sep 5 03:58:16.048984 containerd[1583]: time="2025-09-05T03:58:16.048874053Z" level=info msg="StartContainer for \"a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6\"" Sep 5 03:58:16.049944 containerd[1583]: time="2025-09-05T03:58:16.049906539Z" level=info msg="StartContainer for \"a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9\"" Sep 5 03:58:16.051369 containerd[1583]: time="2025-09-05T03:58:16.051335427Z" level=info msg="connecting to shim a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6" address="unix:///run/containerd/s/ca3a84a8c394451d9903db559eb1a6ba7f876dcb81573944d017fa3b863e6257" protocol=ttrpc version=3 Sep 5 03:58:16.052825 containerd[1583]: time="2025-09-05T03:58:16.051401748Z" level=info msg="connecting to shim a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9" address="unix:///run/containerd/s/a42320d9291027937df8a2a09710c89fb666b4fb43ea5787325ed30d6e5b110c" protocol=ttrpc version=3 Sep 5 03:58:16.060054 containerd[1583]: time="2025-09-05T03:58:16.060020785Z" level=info msg="Container 498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:16.088383 systemd[1]: Started cri-containerd-a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9.scope - libcontainer container a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9. 
Sep 5 03:58:16.092014 containerd[1583]: time="2025-09-05T03:58:16.090510199Z" level=info msg="CreateContainer within sandbox \"b61999f3a3d6f6ef8f21d564643cec0f30212231726171d17bdf7ecb32ae8785\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3\"" Sep 5 03:58:16.093670 containerd[1583]: time="2025-09-05T03:58:16.093118382Z" level=info msg="StartContainer for \"498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3\"" Sep 5 03:58:16.101202 containerd[1583]: time="2025-09-05T03:58:16.099670628Z" level=info msg="connecting to shim 498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3" address="unix:///run/containerd/s/29c5dd362bdb1d11dcf66b590a4a7627cd18e37e5291471d996b124c8d60c7b6" protocol=ttrpc version=3 Sep 5 03:58:16.124431 systemd[1]: Started cri-containerd-a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6.scope - libcontainer container a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6. Sep 5 03:58:16.155532 systemd[1]: Started cri-containerd-498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3.scope - libcontainer container 498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3. 
Sep 5 03:58:16.176329 kubelet[2524]: W0905 03:58:16.175959 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-86xia.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:16.176846 kubelet[2524]: E0905 03:58:16.176693 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-86xia.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:16.218678 kubelet[2524]: W0905 03:58:16.218333 2524 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.58.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:16.219418 kubelet[2524]: E0905 03:58:16.218580 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.58.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:16.259004 containerd[1583]: time="2025-09-05T03:58:16.258780446Z" level=info msg="StartContainer for \"a2411a29b5dc4a7ce123cc53c7c14701702106ebeb52b24bc437dd5e305b42a9\" returns successfully" Sep 5 03:58:16.268107 containerd[1583]: time="2025-09-05T03:58:16.268053273Z" level=info msg="StartContainer for \"a2009d3fb358b460d0d523b6b1730fa0a19f22968c1b63ceff0f9f6b1df17aa6\" returns successfully" Sep 5 03:58:16.300212 kubelet[2524]: W0905 03:58:16.299745 2524 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.58.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.50:6443: connect: connection refused Sep 5 03:58:16.300511 kubelet[2524]: E0905 03:58:16.300452 2524 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.58.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.58.50:6443: connect: connection refused" logger="UnhandledError" Sep 5 03:58:16.374407 containerd[1583]: time="2025-09-05T03:58:16.374295201Z" level=info msg="StartContainer for \"498833eff8b1420ce9062ad7ca0ca868cbffb783baa87f5c2d832e5b55ba07b3\" returns successfully" Sep 5 03:58:16.395262 kubelet[2524]: E0905 03:58:16.395159 2524 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-86xia.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.50:6443: connect: connection refused" interval="1.6s" Sep 5 03:58:16.630622 kubelet[2524]: I0905 03:58:16.629773 2524 kubelet_node_status.go:75] "Attempting to register node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:16.631895 kubelet[2524]: E0905 03:58:16.631851 2524 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.58.50:6443/api/v1/nodes\": dial tcp 10.230.58.50:6443: connect: connection refused" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:17.107109 kubelet[2524]: E0905 03:58:17.106939 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:17.114201 kubelet[2524]: E0905 03:58:17.112912 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:17.116817 kubelet[2524]: E0905 03:58:17.116792 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:18.123358 kubelet[2524]: E0905 03:58:18.123266 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:18.124296 kubelet[2524]: E0905 03:58:18.123609 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:18.124296 kubelet[2524]: E0905 03:58:18.124145 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:18.238020 kubelet[2524]: I0905 03:58:18.237969 2524 kubelet_node_status.go:75] "Attempting to register node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.095216 kubelet[2524]: E0905 03:58:19.095073 2524 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.125574 kubelet[2524]: E0905 03:58:19.124875 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.126999 kubelet[2524]: E0905 03:58:19.126611 2524 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-86xia.gb1.brightbox.com\" not found" node="srv-86xia.gb1.brightbox.com" 
Sep 5 03:58:19.178226 kubelet[2524]: E0905 03:58:19.177778 2524 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-86xia.gb1.brightbox.com.186246d741f404e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-86xia.gb1.brightbox.com,UID:srv-86xia.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-86xia.gb1.brightbox.com,},FirstTimestamp:2025-09-05 03:58:14.966338788 +0000 UTC m=+0.752290214,LastTimestamp:2025-09-05 03:58:14.966338788 +0000 UTC m=+0.752290214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-86xia.gb1.brightbox.com,}" Sep 5 03:58:19.228113 kubelet[2524]: I0905 03:58:19.228059 2524 kubelet_node_status.go:78] "Successfully registered node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.288841 kubelet[2524]: I0905 03:58:19.288770 2524 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.300209 kubelet[2524]: E0905 03:58:19.300152 2524 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-86xia.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.300209 kubelet[2524]: I0905 03:58:19.300215 2524 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.303268 kubelet[2524]: E0905 03:58:19.303225 2524 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.303268 kubelet[2524]: I0905 03:58:19.303265 2524 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.306115 kubelet[2524]: E0905 03:58:19.306077 2524 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-86xia.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:19.969824 kubelet[2524]: I0905 03:58:19.969747 2524 apiserver.go:52] "Watching apiserver" Sep 5 03:58:19.992384 kubelet[2524]: I0905 03:58:19.992331 2524 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 03:58:20.123986 kubelet[2524]: I0905 03:58:20.123941 2524 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:20.131162 kubelet[2524]: W0905 03:58:20.131112 2524 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 03:58:20.203213 kubelet[2524]: I0905 03:58:20.201580 2524 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:20.210352 kubelet[2524]: W0905 03:58:20.210232 2524 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 03:58:20.341937 kubelet[2524]: I0905 03:58:20.341263 2524 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:20.354376 kubelet[2524]: W0905 03:58:20.354331 2524 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots] Sep 5 03:58:21.680344 systemd[1]: Reload requested from client PID 2794 ('systemctl') (unit session-11.scope)... Sep 5 03:58:21.680376 systemd[1]: Reloading... Sep 5 03:58:21.855368 zram_generator::config[2839]: No configuration found. Sep 5 03:58:22.252956 systemd[1]: Reloading finished in 571 ms. Sep 5 03:58:22.293770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:58:22.308302 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 03:58:22.308984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 03:58:22.309088 systemd[1]: kubelet.service: Consumed 1.406s CPU time, 126.4M memory peak. Sep 5 03:58:22.313511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 03:58:22.642955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 03:58:22.664140 (kubelet)[2903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 03:58:22.755211 kubelet[2903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 03:58:22.755211 kubelet[2903]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 03:58:22.755211 kubelet[2903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 03:58:22.755211 kubelet[2903]: I0905 03:58:22.755046 2903 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 03:58:22.770212 kubelet[2903]: I0905 03:58:22.769936 2903 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 03:58:22.770212 kubelet[2903]: I0905 03:58:22.769971 2903 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 03:58:22.771958 kubelet[2903]: I0905 03:58:22.771912 2903 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 03:58:22.775672 kubelet[2903]: I0905 03:58:22.775645 2903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 03:58:22.780217 kubelet[2903]: I0905 03:58:22.780133 2903 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 03:58:22.788221 kubelet[2903]: I0905 03:58:22.787607 2903 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 5 03:58:22.797212 kubelet[2903]: I0905 03:58:22.797158 2903 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 03:58:22.798378 kubelet[2903]: I0905 03:58:22.798287 2903 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 03:58:22.798971 kubelet[2903]: I0905 03:58:22.798503 2903 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-86xia.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 03:58:22.799258 kubelet[2903]: I0905 03:58:22.799236 2903 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 5 03:58:22.799665 kubelet[2903]: I0905 03:58:22.799391 2903 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 03:58:22.799665 kubelet[2903]: I0905 03:58:22.799555 2903 state_mem.go:36] "Initialized new in-memory state store" Sep 5 03:58:22.800197 kubelet[2903]: I0905 03:58:22.800154 2903 kubelet.go:446] "Attempting to sync node with API server" Sep 5 03:58:22.800492 kubelet[2903]: I0905 03:58:22.800470 2903 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 03:58:22.800703 kubelet[2903]: I0905 03:58:22.800683 2903 kubelet.go:352] "Adding apiserver pod source" Sep 5 03:58:22.800802 kubelet[2903]: I0905 03:58:22.800784 2903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 03:58:22.802442 sudo[2918]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 03:58:22.803102 sudo[2918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 03:58:22.808757 kubelet[2903]: I0905 03:58:22.808731 2903 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 03:58:22.812851 kubelet[2903]: I0905 03:58:22.812820 2903 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 03:58:22.814094 kubelet[2903]: I0905 03:58:22.813960 2903 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 03:58:22.814279 kubelet[2903]: I0905 03:58:22.814259 2903 server.go:1287] "Started kubelet" Sep 5 03:58:22.826926 kubelet[2903]: I0905 03:58:22.826764 2903 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 03:58:22.839941 kubelet[2903]: I0905 03:58:22.839536 2903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 03:58:22.840297 kubelet[2903]: I0905 03:58:22.840273 2903 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 03:58:22.848035 kubelet[2903]: I0905 03:58:22.847471 2903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 03:58:22.862796 kubelet[2903]: I0905 03:58:22.862750 2903 server.go:479] "Adding debug handlers to kubelet server" Sep 5 03:58:22.865527 kubelet[2903]: I0905 03:58:22.865225 2903 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 03:58:22.866354 kubelet[2903]: I0905 03:58:22.866330 2903 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 03:58:22.866752 kubelet[2903]: I0905 03:58:22.866731 2903 reconciler.go:26] "Reconciler: start to sync state" Sep 5 03:58:22.875340 kubelet[2903]: I0905 03:58:22.875036 2903 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 03:58:22.892961 kubelet[2903]: I0905 03:58:22.892920 2903 factory.go:221] Registration of the systemd container factory successfully Sep 5 03:58:22.893160 kubelet[2903]: I0905 03:58:22.893058 2903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 03:58:22.899003 kubelet[2903]: E0905 03:58:22.898494 2903 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 03:58:22.903278 kubelet[2903]: I0905 03:58:22.903243 2903 factory.go:221] Registration of the containerd container factory successfully Sep 5 03:58:22.907630 kubelet[2903]: I0905 03:58:22.907550 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 03:58:22.924083 kubelet[2903]: I0905 03:58:22.923351 2903 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 03:58:22.924083 kubelet[2903]: I0905 03:58:22.923402 2903 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 03:58:22.924083 kubelet[2903]: I0905 03:58:22.923515 2903 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 03:58:22.924083 kubelet[2903]: I0905 03:58:22.923530 2903 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 03:58:22.924083 kubelet[2903]: E0905 03:58:22.923598 2903 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 03:58:23.023948 kubelet[2903]: E0905 03:58:23.023725 2903 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 03:58:23.069392 kubelet[2903]: I0905 03:58:23.069351 2903 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.069742 2903 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.069781 2903 state_mem.go:36] "Initialized new in-memory state store" Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.070156 2903 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.070191 2903 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.070228 2903 policy_none.go:49] "None policy: Start" Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.070244 2903 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 03:58:23.070582 kubelet[2903]: I0905 03:58:23.070261 2903 state_mem.go:35] "Initializing new in-memory state store" Sep 5 03:58:23.071427 kubelet[2903]: I0905 03:58:23.071404 2903 state_mem.go:75] "Updated machine memory state" Sep 5 03:58:23.085829 kubelet[2903]: I0905 03:58:23.085795 2903 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 03:58:23.087657 kubelet[2903]: I0905 03:58:23.087442 2903 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 03:58:23.087970 kubelet[2903]: I0905 03:58:23.087898 2903 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 03:58:23.088588 kubelet[2903]: I0905 03:58:23.088567 2903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 03:58:23.097461 kubelet[2903]: E0905 03:58:23.096797 2903 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 03:58:23.228220 kubelet[2903]: I0905 03:58:23.227268 2903 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.230580 kubelet[2903]: I0905 03:58:23.230556 2903 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.232490 kubelet[2903]: I0905 03:58:23.232426 2903 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.234989 kubelet[2903]: I0905 03:58:23.233250 2903 kubelet_node_status.go:75] "Attempting to register node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.243197 kubelet[2903]: W0905 03:58:23.243138 2903 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 03:58:23.243848 kubelet[2903]: E0905 03:58:23.243754 2903 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-86xia.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.244468 kubelet[2903]: W0905 
03:58:23.244243 2903 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 03:58:23.244468 kubelet[2903]: E0905 03:58:23.244292 2903 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-86xia.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.250203 kubelet[2903]: W0905 03:58:23.249705 2903 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 5 03:58:23.250203 kubelet[2903]: E0905 03:58:23.249756 2903 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.254412 kubelet[2903]: I0905 03:58:23.254101 2903 kubelet_node_status.go:124] "Node was previously registered" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.254881 kubelet[2903]: I0905 03:58:23.254554 2903 kubelet_node_status.go:78] "Successfully registered node" node="srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373243 kubelet[2903]: I0905 03:58:23.371598 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-flexvolume-dir\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373243 kubelet[2903]: I0905 03:58:23.371695 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe080fe2c20826fecb9895a23d9ad64-kubeconfig\") pod \"kube-scheduler-srv-86xia.gb1.brightbox.com\" (UID: 
\"1fe080fe2c20826fecb9895a23d9ad64\") " pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373243 kubelet[2903]: I0905 03:58:23.371751 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/477b5fc9c2b57f6ef95c3a67d3815c08-k8s-certs\") pod \"kube-apiserver-srv-86xia.gb1.brightbox.com\" (UID: \"477b5fc9c2b57f6ef95c3a67d3815c08\") " pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373243 kubelet[2903]: I0905 03:58:23.371786 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/477b5fc9c2b57f6ef95c3a67d3815c08-usr-share-ca-certificates\") pod \"kube-apiserver-srv-86xia.gb1.brightbox.com\" (UID: \"477b5fc9c2b57f6ef95c3a67d3815c08\") " pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373243 kubelet[2903]: I0905 03:58:23.371863 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-k8s-certs\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373655 kubelet[2903]: I0905 03:58:23.371897 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-kubeconfig\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373655 kubelet[2903]: I0905 03:58:23.371946 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373655 kubelet[2903]: I0905 03:58:23.371995 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/477b5fc9c2b57f6ef95c3a67d3815c08-ca-certs\") pod \"kube-apiserver-srv-86xia.gb1.brightbox.com\" (UID: \"477b5fc9c2b57f6ef95c3a67d3815c08\") " pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.373655 kubelet[2903]: I0905 03:58:23.372030 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e57636cc40ac4ed3da88c60405dd8a4-ca-certs\") pod \"kube-controller-manager-srv-86xia.gb1.brightbox.com\" (UID: \"4e57636cc40ac4ed3da88c60405dd8a4\") " pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" Sep 5 03:58:23.524218 sudo[2918]: pam_unix(sudo:session): session closed for user root Sep 5 03:58:23.818772 kubelet[2903]: I0905 03:58:23.818247 2903 apiserver.go:52] "Watching apiserver" Sep 5 03:58:23.866963 kubelet[2903]: I0905 03:58:23.866872 2903 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 03:58:24.024340 kubelet[2903]: I0905 03:58:24.024002 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-86xia.gb1.brightbox.com" podStartSLOduration=4.023961762 podStartE2EDuration="4.023961762s" podCreationTimestamp="2025-09-05 03:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 03:58:24.022697917 +0000 UTC m=+1.349775035" 
watchObservedRunningTime="2025-09-05 03:58:24.023961762 +0000 UTC m=+1.351038878" Sep 5 03:58:24.054822 kubelet[2903]: I0905 03:58:24.054512 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-86xia.gb1.brightbox.com" podStartSLOduration=4.054479545 podStartE2EDuration="4.054479545s" podCreationTimestamp="2025-09-05 03:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 03:58:24.041154568 +0000 UTC m=+1.368231674" watchObservedRunningTime="2025-09-05 03:58:24.054479545 +0000 UTC m=+1.381556637" Sep 5 03:58:25.581268 sudo[1907]: pam_unix(sudo:session): session closed for user root Sep 5 03:58:25.728223 sshd[1906]: Connection closed by 139.178.89.65 port 39448 Sep 5 03:58:25.729373 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Sep 5 03:58:25.741077 systemd[1]: sshd@8-10.230.58.50:22-139.178.89.65:39448.service: Deactivated successfully. Sep 5 03:58:25.742446 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Sep 5 03:58:25.748442 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 03:58:25.750297 systemd[1]: session-11.scope: Consumed 6.851s CPU time, 217.8M memory peak. Sep 5 03:58:25.759917 systemd-logind[1560]: Removed session 11. Sep 5 03:58:25.845996 kubelet[2903]: I0905 03:58:25.845641 2903 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 03:58:25.847340 containerd[1583]: time="2025-09-05T03:58:25.847254592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 03:58:25.847958 kubelet[2903]: I0905 03:58:25.847706 2903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 03:58:26.521113 kubelet[2903]: I0905 03:58:26.521031 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-86xia.gb1.brightbox.com" podStartSLOduration=6.521009253 podStartE2EDuration="6.521009253s" podCreationTimestamp="2025-09-05 03:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 03:58:24.055718566 +0000 UTC m=+1.382795672" watchObservedRunningTime="2025-09-05 03:58:26.521009253 +0000 UTC m=+3.848086371" Sep 5 03:58:26.598215 kubelet[2903]: I0905 03:58:26.594297 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28582046-bd19-423f-ab90-d36b777c0103-lib-modules\") pod \"kube-proxy-gq6kn\" (UID: \"28582046-bd19-423f-ab90-d36b777c0103\") " pod="kube-system/kube-proxy-gq6kn" Sep 5 03:58:26.595696 systemd[1]: Created slice kubepods-besteffort-pod28582046_bd19_423f_ab90_d36b777c0103.slice - libcontainer container kubepods-besteffort-pod28582046_bd19_423f_ab90_d36b777c0103.slice. 
Sep 5 03:58:26.598659 kubelet[2903]: I0905 03:58:26.598624 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6x9d\" (UniqueName: \"kubernetes.io/projected/28582046-bd19-423f-ab90-d36b777c0103-kube-api-access-w6x9d\") pod \"kube-proxy-gq6kn\" (UID: \"28582046-bd19-423f-ab90-d36b777c0103\") " pod="kube-system/kube-proxy-gq6kn" Sep 5 03:58:26.598824 kubelet[2903]: I0905 03:58:26.598761 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28582046-bd19-423f-ab90-d36b777c0103-kube-proxy\") pod \"kube-proxy-gq6kn\" (UID: \"28582046-bd19-423f-ab90-d36b777c0103\") " pod="kube-system/kube-proxy-gq6kn" Sep 5 03:58:26.599283 kubelet[2903]: I0905 03:58:26.599255 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28582046-bd19-423f-ab90-d36b777c0103-xtables-lock\") pod \"kube-proxy-gq6kn\" (UID: \"28582046-bd19-423f-ab90-d36b777c0103\") " pod="kube-system/kube-proxy-gq6kn" Sep 5 03:58:26.620225 systemd[1]: Created slice kubepods-burstable-pod0b6b07a6_76c3_4e64_bf73_ed99f617b1d7.slice - libcontainer container kubepods-burstable-pod0b6b07a6_76c3_4e64_bf73_ed99f617b1d7.slice. 
Sep 5 03:58:26.700238 kubelet[2903]: I0905 03:58:26.700039 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-run\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.700238 kubelet[2903]: I0905 03:58:26.700098 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-kernel\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.700238 kubelet[2903]: I0905 03:58:26.700156 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-cgroup\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701313 kubelet[2903]: I0905 03:58:26.701277 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-config-path\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701412 kubelet[2903]: I0905 03:58:26.701325 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hubble-tls\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701412 kubelet[2903]: I0905 03:58:26.701387 2903 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hostproc\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701504 kubelet[2903]: I0905 03:58:26.701413 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-clustermesh-secrets\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701504 kubelet[2903]: I0905 03:58:26.701441 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cni-path\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701504 kubelet[2903]: I0905 03:58:26.701465 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-lib-modules\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701636 kubelet[2903]: I0905 03:58:26.701490 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-net\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701636 kubelet[2903]: I0905 03:58:26.701540 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xslq5\" (UniqueName: 
\"kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-kube-api-access-xslq5\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701636 kubelet[2903]: I0905 03:58:26.701573 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-bpf-maps\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701636 kubelet[2903]: I0905 03:58:26.701604 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-xtables-lock\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.701789 kubelet[2903]: I0905 03:58:26.701645 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-etc-cni-netd\") pod \"cilium-7dvxj\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") " pod="kube-system/cilium-7dvxj" Sep 5 03:58:26.900724 systemd[1]: Created slice kubepods-besteffort-podb626b31a_9266_4fde_97c1_f352392c78b7.slice - libcontainer container kubepods-besteffort-podb626b31a_9266_4fde_97c1_f352392c78b7.slice. 
Sep 5 03:58:26.903949 kubelet[2903]: I0905 03:58:26.902451 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfmf\" (UniqueName: \"kubernetes.io/projected/b626b31a-9266-4fde-97c1-f352392c78b7-kube-api-access-bzfmf\") pod \"cilium-operator-6c4d7847fc-w6rw7\" (UID: \"b626b31a-9266-4fde-97c1-f352392c78b7\") " pod="kube-system/cilium-operator-6c4d7847fc-w6rw7" Sep 5 03:58:26.903949 kubelet[2903]: I0905 03:58:26.902514 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b626b31a-9266-4fde-97c1-f352392c78b7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w6rw7\" (UID: \"b626b31a-9266-4fde-97c1-f352392c78b7\") " pod="kube-system/cilium-operator-6c4d7847fc-w6rw7" Sep 5 03:58:26.911450 containerd[1583]: time="2025-09-05T03:58:26.911379213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gq6kn,Uid:28582046-bd19-423f-ab90-d36b777c0103,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:26.931773 containerd[1583]: time="2025-09-05T03:58:26.931705348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dvxj,Uid:0b6b07a6-76c3-4e64-bf73-ed99f617b1d7,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:26.957502 containerd[1583]: time="2025-09-05T03:58:26.957430707Z" level=info msg="connecting to shim e22bc15859119c4292bed8d397509714edc3dac33bf478ef5adc11d5c0fe0cdf" address="unix:///run/containerd/s/3a37bd4f864e66723414a2ce846905a9dd6b2228ace1d9820668bbaeb951ca88" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:26.968710 containerd[1583]: time="2025-09-05T03:58:26.968605236Z" level=info msg="connecting to shim c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a" address="unix:///run/containerd/s/fec4c64c0b6877454e5eeb01946cf12a32e0c8ca4336f2f659ccfd0bd68c6d45" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:27.023427 systemd[1]: Started 
cri-containerd-c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a.scope - libcontainer container c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a. Sep 5 03:58:27.068856 systemd[1]: Started cri-containerd-e22bc15859119c4292bed8d397509714edc3dac33bf478ef5adc11d5c0fe0cdf.scope - libcontainer container e22bc15859119c4292bed8d397509714edc3dac33bf478ef5adc11d5c0fe0cdf. Sep 5 03:58:27.139372 containerd[1583]: time="2025-09-05T03:58:27.139300398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dvxj,Uid:0b6b07a6-76c3-4e64-bf73-ed99f617b1d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\"" Sep 5 03:58:27.144642 containerd[1583]: time="2025-09-05T03:58:27.143860448Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 03:58:27.153321 containerd[1583]: time="2025-09-05T03:58:27.152990877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gq6kn,Uid:28582046-bd19-423f-ab90-d36b777c0103,Namespace:kube-system,Attempt:0,} returns sandbox id \"e22bc15859119c4292bed8d397509714edc3dac33bf478ef5adc11d5c0fe0cdf\"" Sep 5 03:58:27.159494 containerd[1583]: time="2025-09-05T03:58:27.159283292Z" level=info msg="CreateContainer within sandbox \"e22bc15859119c4292bed8d397509714edc3dac33bf478ef5adc11d5c0fe0cdf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 03:58:27.179357 containerd[1583]: time="2025-09-05T03:58:27.179288745Z" level=info msg="Container 61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:27.187410 containerd[1583]: time="2025-09-05T03:58:27.187309414Z" level=info msg="CreateContainer within sandbox \"e22bc15859119c4292bed8d397509714edc3dac33bf478ef5adc11d5c0fe0cdf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57\"" Sep 5 03:58:27.188575 containerd[1583]: time="2025-09-05T03:58:27.188539029Z" level=info msg="StartContainer for \"61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57\"" Sep 5 03:58:27.190461 containerd[1583]: time="2025-09-05T03:58:27.190413992Z" level=info msg="connecting to shim 61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57" address="unix:///run/containerd/s/3a37bd4f864e66723414a2ce846905a9dd6b2228ace1d9820668bbaeb951ca88" protocol=ttrpc version=3 Sep 5 03:58:27.208495 containerd[1583]: time="2025-09-05T03:58:27.208251139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w6rw7,Uid:b626b31a-9266-4fde-97c1-f352392c78b7,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:27.225882 systemd[1]: Started cri-containerd-61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57.scope - libcontainer container 61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57. Sep 5 03:58:27.249493 containerd[1583]: time="2025-09-05T03:58:27.249427009Z" level=info msg="connecting to shim 8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd" address="unix:///run/containerd/s/91d43194d6b62b343e856f856b8a84370c3b3257fdd613db8dc678fc3ee7778f" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:27.297705 systemd[1]: Started cri-containerd-8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd.scope - libcontainer container 8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd. 
Sep 5 03:58:27.333230 containerd[1583]: time="2025-09-05T03:58:27.333159241Z" level=info msg="StartContainer for \"61ce09d9615692e5a8219e29681787218768c857e199b7ee02c3f97e6042dd57\" returns successfully" Sep 5 03:58:27.409534 containerd[1583]: time="2025-09-05T03:58:27.409318146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w6rw7,Uid:b626b31a-9266-4fde-97c1-f352392c78b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\"" Sep 5 03:58:33.042585 kubelet[2903]: I0905 03:58:33.041658 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gq6kn" podStartSLOduration=7.041605556 podStartE2EDuration="7.041605556s" podCreationTimestamp="2025-09-05 03:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 03:58:28.022519606 +0000 UTC m=+5.349596724" watchObservedRunningTime="2025-09-05 03:58:33.041605556 +0000 UTC m=+10.368682653" Sep 5 03:58:34.912602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053138395.mount: Deactivated successfully. 
Sep 5 03:58:38.486716 containerd[1583]: time="2025-09-05T03:58:38.486556408Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:38.489216 containerd[1583]: time="2025-09-05T03:58:38.488676126Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 5 03:58:38.490220 containerd[1583]: time="2025-09-05T03:58:38.489575973Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:38.493338 containerd[1583]: time="2025-09-05T03:58:38.493296689Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.349266882s" Sep 5 03:58:38.493525 containerd[1583]: time="2025-09-05T03:58:38.493352819Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 5 03:58:38.495260 containerd[1583]: time="2025-09-05T03:58:38.495227126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 03:58:38.500565 containerd[1583]: time="2025-09-05T03:58:38.500084690Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 03:58:38.526530 containerd[1583]: time="2025-09-05T03:58:38.526262722Z" level=info msg="Container 035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:38.535737 containerd[1583]: time="2025-09-05T03:58:38.535686035Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\"" Sep 5 03:58:38.538316 containerd[1583]: time="2025-09-05T03:58:38.536738630Z" level=info msg="StartContainer for \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\"" Sep 5 03:58:38.538316 containerd[1583]: time="2025-09-05T03:58:38.537981559Z" level=info msg="connecting to shim 035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae" address="unix:///run/containerd/s/fec4c64c0b6877454e5eeb01946cf12a32e0c8ca4336f2f659ccfd0bd68c6d45" protocol=ttrpc version=3 Sep 5 03:58:38.609414 systemd[1]: Started cri-containerd-035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae.scope - libcontainer container 035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae. Sep 5 03:58:38.663090 containerd[1583]: time="2025-09-05T03:58:38.663010538Z" level=info msg="StartContainer for \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" returns successfully" Sep 5 03:58:38.680898 systemd[1]: cri-containerd-035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae.scope: Deactivated successfully. 
Sep 5 03:58:38.725234 containerd[1583]: time="2025-09-05T03:58:38.724216731Z" level=info msg="received exit event container_id:\"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" id:\"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" pid:3317 exited_at:{seconds:1757044718 nanos:682510308}" Sep 5 03:58:38.736472 containerd[1583]: time="2025-09-05T03:58:38.736436985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" id:\"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" pid:3317 exited_at:{seconds:1757044718 nanos:682510308}" Sep 5 03:58:38.763686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae-rootfs.mount: Deactivated successfully. Sep 5 03:58:39.065850 containerd[1583]: time="2025-09-05T03:58:39.065318159Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 03:58:39.100635 containerd[1583]: time="2025-09-05T03:58:39.100231106Z" level=info msg="Container 820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:39.110088 containerd[1583]: time="2025-09-05T03:58:39.110018824Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\"" Sep 5 03:58:39.110912 containerd[1583]: time="2025-09-05T03:58:39.110880543Z" level=info msg="StartContainer for \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\"" Sep 5 03:58:39.112587 containerd[1583]: time="2025-09-05T03:58:39.112553292Z" level=info msg="connecting to shim 
820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626" address="unix:///run/containerd/s/fec4c64c0b6877454e5eeb01946cf12a32e0c8ca4336f2f659ccfd0bd68c6d45" protocol=ttrpc version=3 Sep 5 03:58:39.144445 systemd[1]: Started cri-containerd-820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626.scope - libcontainer container 820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626. Sep 5 03:58:39.218606 containerd[1583]: time="2025-09-05T03:58:39.218552790Z" level=info msg="StartContainer for \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" returns successfully" Sep 5 03:58:39.239649 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 03:58:39.240156 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 03:58:39.241922 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 03:58:39.246055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 03:58:39.247169 systemd[1]: cri-containerd-820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626.scope: Deactivated successfully. Sep 5 03:58:39.250907 containerd[1583]: time="2025-09-05T03:58:39.249375616Z" level=info msg="received exit event container_id:\"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" id:\"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" pid:3361 exited_at:{seconds:1757044719 nanos:246098292}" Sep 5 03:58:39.251962 containerd[1583]: time="2025-09-05T03:58:39.251165798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" id:\"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" pid:3361 exited_at:{seconds:1757044719 nanos:246098292}" Sep 5 03:58:39.286612 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 5 03:58:40.072341 containerd[1583]: time="2025-09-05T03:58:40.072113518Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 03:58:40.111085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711277000.mount: Deactivated successfully. Sep 5 03:58:40.119308 containerd[1583]: time="2025-09-05T03:58:40.119255873Z" level=info msg="Container 0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:40.123408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972583545.mount: Deactivated successfully. Sep 5 03:58:40.156606 containerd[1583]: time="2025-09-05T03:58:40.156542444Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\"" Sep 5 03:58:40.157776 containerd[1583]: time="2025-09-05T03:58:40.157747879Z" level=info msg="StartContainer for \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\"" Sep 5 03:58:40.160992 containerd[1583]: time="2025-09-05T03:58:40.160920164Z" level=info msg="connecting to shim 0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8" address="unix:///run/containerd/s/fec4c64c0b6877454e5eeb01946cf12a32e0c8ca4336f2f659ccfd0bd68c6d45" protocol=ttrpc version=3 Sep 5 03:58:40.204524 systemd[1]: Started cri-containerd-0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8.scope - libcontainer container 0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8. Sep 5 03:58:40.309656 systemd[1]: cri-containerd-0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8.scope: Deactivated successfully. 
Sep 5 03:58:40.319937 containerd[1583]: time="2025-09-05T03:58:40.319881324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" id:\"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" pid:3407 exited_at:{seconds:1757044720 nanos:313474607}" Sep 5 03:58:40.320459 containerd[1583]: time="2025-09-05T03:58:40.320418496Z" level=info msg="received exit event container_id:\"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" id:\"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" pid:3407 exited_at:{seconds:1757044720 nanos:313474607}" Sep 5 03:58:40.339806 containerd[1583]: time="2025-09-05T03:58:40.339762241Z" level=info msg="StartContainer for \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" returns successfully" Sep 5 03:58:40.524054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8-rootfs.mount: Deactivated successfully. Sep 5 03:58:41.085352 containerd[1583]: time="2025-09-05T03:58:41.085255871Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 03:58:41.124988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510741197.mount: Deactivated successfully. 
Sep 5 03:58:41.131574 containerd[1583]: time="2025-09-05T03:58:41.131510539Z" level=info msg="Container a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:41.163005 containerd[1583]: time="2025-09-05T03:58:41.162941715Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\"" Sep 5 03:58:41.164893 containerd[1583]: time="2025-09-05T03:58:41.164863014Z" level=info msg="StartContainer for \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\"" Sep 5 03:58:41.170438 containerd[1583]: time="2025-09-05T03:58:41.169607741Z" level=info msg="connecting to shim a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5" address="unix:///run/containerd/s/fec4c64c0b6877454e5eeb01946cf12a32e0c8ca4336f2f659ccfd0bd68c6d45" protocol=ttrpc version=3 Sep 5 03:58:41.241435 systemd[1]: Started cri-containerd-a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5.scope - libcontainer container a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5. Sep 5 03:58:41.367701 systemd[1]: cri-containerd-a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5.scope: Deactivated successfully. 
Sep 5 03:58:41.374238 containerd[1583]: time="2025-09-05T03:58:41.372900402Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b6b07a6_76c3_4e64_bf73_ed99f617b1d7.slice/cri-containerd-a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5.scope/memory.events\": no such file or directory" Sep 5 03:58:41.375230 containerd[1583]: time="2025-09-05T03:58:41.375125928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" id:\"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" pid:3462 exited_at:{seconds:1757044721 nanos:371591451}" Sep 5 03:58:41.380825 containerd[1583]: time="2025-09-05T03:58:41.380777584Z" level=info msg="received exit event container_id:\"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" id:\"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" pid:3462 exited_at:{seconds:1757044721 nanos:371591451}" Sep 5 03:58:41.387358 containerd[1583]: time="2025-09-05T03:58:41.387299702Z" level=info msg="StartContainer for \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" returns successfully" Sep 5 03:58:41.523739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5-rootfs.mount: Deactivated successfully. 
Sep 5 03:58:41.642104 containerd[1583]: time="2025-09-05T03:58:41.641945507Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:41.643273 containerd[1583]: time="2025-09-05T03:58:41.643241799Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 5 03:58:41.645042 containerd[1583]: time="2025-09-05T03:58:41.644689979Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 03:58:41.655087 containerd[1583]: time="2025-09-05T03:58:41.655039976Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.159616758s" Sep 5 03:58:41.655200 containerd[1583]: time="2025-09-05T03:58:41.655091363Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 5 03:58:41.672162 containerd[1583]: time="2025-09-05T03:58:41.671960206Z" level=info msg="CreateContainer within sandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 03:58:41.689848 containerd[1583]: time="2025-09-05T03:58:41.688963817Z" level=info msg="Container 
4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:41.692923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304131500.mount: Deactivated successfully. Sep 5 03:58:41.704049 containerd[1583]: time="2025-09-05T03:58:41.703999398Z" level=info msg="CreateContainer within sandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\"" Sep 5 03:58:41.706482 containerd[1583]: time="2025-09-05T03:58:41.705604773Z" level=info msg="StartContainer for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\"" Sep 5 03:58:41.707275 containerd[1583]: time="2025-09-05T03:58:41.707237154Z" level=info msg="connecting to shim 4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281" address="unix:///run/containerd/s/91d43194d6b62b343e856f856b8a84370c3b3257fdd613db8dc678fc3ee7778f" protocol=ttrpc version=3 Sep 5 03:58:41.751464 systemd[1]: Started cri-containerd-4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281.scope - libcontainer container 4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281. 
Sep 5 03:58:41.805714 containerd[1583]: time="2025-09-05T03:58:41.805646243Z" level=info msg="StartContainer for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" returns successfully" Sep 5 03:58:42.096239 containerd[1583]: time="2025-09-05T03:58:42.096015635Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 03:58:42.140485 containerd[1583]: time="2025-09-05T03:58:42.140347749Z" level=info msg="Container 84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:42.164650 containerd[1583]: time="2025-09-05T03:58:42.164581633Z" level=info msg="CreateContainer within sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\"" Sep 5 03:58:42.167494 containerd[1583]: time="2025-09-05T03:58:42.166480709Z" level=info msg="StartContainer for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\"" Sep 5 03:58:42.169156 containerd[1583]: time="2025-09-05T03:58:42.169121992Z" level=info msg="connecting to shim 84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1" address="unix:///run/containerd/s/fec4c64c0b6877454e5eeb01946cf12a32e0c8ca4336f2f659ccfd0bd68c6d45" protocol=ttrpc version=3 Sep 5 03:58:42.224434 systemd[1]: Started cri-containerd-84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1.scope - libcontainer container 84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1. 
Sep 5 03:58:42.353974 containerd[1583]: time="2025-09-05T03:58:42.353808861Z" level=info msg="StartContainer for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" returns successfully" Sep 5 03:58:42.529757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556527906.mount: Deactivated successfully. Sep 5 03:58:42.715757 containerd[1583]: time="2025-09-05T03:58:42.714126527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" id:\"3989d8fe4ee48b2813df26c1760c46817bd7a90e150e96620e76ce81a9f4c9e2\" pid:3564 exited_at:{seconds:1757044722 nanos:713599620}" Sep 5 03:58:42.795436 kubelet[2903]: I0905 03:58:42.795318 2903 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 03:58:42.980951 kubelet[2903]: I0905 03:58:42.979774 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w6rw7" podStartSLOduration=2.722421007 podStartE2EDuration="16.979734071s" podCreationTimestamp="2025-09-05 03:58:26 +0000 UTC" firstStartedPulling="2025-09-05 03:58:27.411620366 +0000 UTC m=+4.738697456" lastFinishedPulling="2025-09-05 03:58:41.668933434 +0000 UTC m=+18.996010520" observedRunningTime="2025-09-05 03:58:42.189174436 +0000 UTC m=+19.516251554" watchObservedRunningTime="2025-09-05 03:58:42.979734071 +0000 UTC m=+20.306811200" Sep 5 03:58:42.993439 systemd[1]: Created slice kubepods-burstable-pod0ea408c4_5cf7_44ad_9c01_580c4cdc10cb.slice - libcontainer container kubepods-burstable-pod0ea408c4_5cf7_44ad_9c01_580c4cdc10cb.slice. Sep 5 03:58:43.009333 systemd[1]: Created slice kubepods-burstable-pod838a1a75_13e4_4ea8_80bd_18b96deb79a4.slice - libcontainer container kubepods-burstable-pod838a1a75_13e4_4ea8_80bd_18b96deb79a4.slice. 
Sep 5 03:58:43.038714 kubelet[2903]: I0905 03:58:43.038627 2903 status_manager.go:890] "Failed to get status for pod" podUID="0ea408c4-5cf7-44ad-9c01-580c4cdc10cb" pod="kube-system/coredns-668d6bf9bc-p6v94" err="pods \"coredns-668d6bf9bc-p6v94\" is forbidden: User \"system:node:srv-86xia.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-86xia.gb1.brightbox.com' and this object" Sep 5 03:58:43.039353 kubelet[2903]: W0905 03:58:43.039222 2903 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-86xia.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-86xia.gb1.brightbox.com' and this object Sep 5 03:58:43.039353 kubelet[2903]: E0905 03:58:43.039310 2903 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-86xia.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-86xia.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 5 03:58:43.044944 kubelet[2903]: I0905 03:58:43.044829 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/838a1a75-13e4-4ea8-80bd-18b96deb79a4-config-volume\") pod \"coredns-668d6bf9bc-vf77m\" (UID: \"838a1a75-13e4-4ea8-80bd-18b96deb79a4\") " pod="kube-system/coredns-668d6bf9bc-vf77m" Sep 5 03:58:43.045194 kubelet[2903]: I0905 03:58:43.045077 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0ea408c4-5cf7-44ad-9c01-580c4cdc10cb-config-volume\") pod \"coredns-668d6bf9bc-p6v94\" (UID: \"0ea408c4-5cf7-44ad-9c01-580c4cdc10cb\") " pod="kube-system/coredns-668d6bf9bc-p6v94" Sep 5 03:58:43.045448 kubelet[2903]: I0905 03:58:43.045417 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2kf\" (UniqueName: \"kubernetes.io/projected/0ea408c4-5cf7-44ad-9c01-580c4cdc10cb-kube-api-access-gf2kf\") pod \"coredns-668d6bf9bc-p6v94\" (UID: \"0ea408c4-5cf7-44ad-9c01-580c4cdc10cb\") " pod="kube-system/coredns-668d6bf9bc-p6v94" Sep 5 03:58:43.045633 kubelet[2903]: I0905 03:58:43.045526 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdrs\" (UniqueName: \"kubernetes.io/projected/838a1a75-13e4-4ea8-80bd-18b96deb79a4-kube-api-access-nbdrs\") pod \"coredns-668d6bf9bc-vf77m\" (UID: \"838a1a75-13e4-4ea8-80bd-18b96deb79a4\") " pod="kube-system/coredns-668d6bf9bc-vf77m" Sep 5 03:58:43.431990 kubelet[2903]: I0905 03:58:43.431382 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7dvxj" podStartSLOduration=6.0789119 podStartE2EDuration="17.431358128s" podCreationTimestamp="2025-09-05 03:58:26 +0000 UTC" firstStartedPulling="2025-09-05 03:58:27.142620671 +0000 UTC m=+4.469697761" lastFinishedPulling="2025-09-05 03:58:38.495066892 +0000 UTC m=+15.822143989" observedRunningTime="2025-09-05 03:58:43.428006378 +0000 UTC m=+20.755083501" watchObservedRunningTime="2025-09-05 03:58:43.431358128 +0000 UTC m=+20.758435234" Sep 5 03:58:44.148781 kubelet[2903]: E0905 03:58:44.148671 2903 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 5 03:58:44.150427 kubelet[2903]: E0905 03:58:44.148824 2903 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/0ea408c4-5cf7-44ad-9c01-580c4cdc10cb-config-volume podName:0ea408c4-5cf7-44ad-9c01-580c4cdc10cb nodeName:}" failed. No retries permitted until 2025-09-05 03:58:44.648778126 +0000 UTC m=+21.975855212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0ea408c4-5cf7-44ad-9c01-580c4cdc10cb-config-volume") pod "coredns-668d6bf9bc-p6v94" (UID: "0ea408c4-5cf7-44ad-9c01-580c4cdc10cb") : failed to sync configmap cache: timed out waiting for the condition Sep 5 03:58:44.159140 kubelet[2903]: E0905 03:58:44.159105 2903 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 5 03:58:44.159297 kubelet[2903]: E0905 03:58:44.159256 2903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/838a1a75-13e4-4ea8-80bd-18b96deb79a4-config-volume podName:838a1a75-13e4-4ea8-80bd-18b96deb79a4 nodeName:}" failed. No retries permitted until 2025-09-05 03:58:44.659237679 +0000 UTC m=+21.986314769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/838a1a75-13e4-4ea8-80bd-18b96deb79a4-config-volume") pod "coredns-668d6bf9bc-vf77m" (UID: "838a1a75-13e4-4ea8-80bd-18b96deb79a4") : failed to sync configmap cache: timed out waiting for the condition Sep 5 03:58:44.805366 containerd[1583]: time="2025-09-05T03:58:44.805215241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p6v94,Uid:0ea408c4-5cf7-44ad-9c01-580c4cdc10cb,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:44.822950 containerd[1583]: time="2025-09-05T03:58:44.822550069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf77m,Uid:838a1a75-13e4-4ea8-80bd-18b96deb79a4,Namespace:kube-system,Attempt:0,}" Sep 5 03:58:45.880762 systemd-networkd[1513]: cilium_host: Link UP Sep 5 03:58:45.881136 systemd-networkd[1513]: cilium_net: Link UP Sep 5 03:58:45.881530 systemd-networkd[1513]: cilium_net: Gained carrier Sep 5 03:58:45.881949 systemd-networkd[1513]: cilium_host: Gained carrier Sep 5 03:58:46.069736 systemd-networkd[1513]: cilium_vxlan: Link UP Sep 5 03:58:46.069760 systemd-networkd[1513]: cilium_vxlan: Gained carrier Sep 5 03:58:46.637261 kernel: NET: Registered PF_ALG protocol family Sep 5 03:58:46.703620 systemd-networkd[1513]: cilium_net: Gained IPv6LL Sep 5 03:58:46.831440 systemd-networkd[1513]: cilium_host: Gained IPv6LL Sep 5 03:58:47.279455 systemd-networkd[1513]: cilium_vxlan: Gained IPv6LL Sep 5 03:58:47.753706 systemd-networkd[1513]: lxc_health: Link UP Sep 5 03:58:47.756595 systemd-networkd[1513]: lxc_health: Gained carrier Sep 5 03:58:48.463610 kernel: eth0: renamed from tmp4c2fc Sep 5 03:58:48.471277 kernel: eth0: renamed from tmp6a8fa Sep 5 03:58:48.476730 systemd-networkd[1513]: lxc3341f1793372: Link UP Sep 5 03:58:48.484926 systemd-networkd[1513]: lxcc640f04d854f: Link UP Sep 5 03:58:48.488348 systemd-networkd[1513]: lxcc640f04d854f: Gained carrier Sep 5 03:58:48.491801 
systemd-networkd[1513]: lxc3341f1793372: Gained carrier Sep 5 03:58:48.816420 systemd-networkd[1513]: lxc_health: Gained IPv6LL Sep 5 03:58:49.711491 systemd-networkd[1513]: lxc3341f1793372: Gained IPv6LL Sep 5 03:58:49.839415 systemd-networkd[1513]: lxcc640f04d854f: Gained IPv6LL Sep 5 03:58:54.276735 containerd[1583]: time="2025-09-05T03:58:54.276619236Z" level=info msg="connecting to shim 4c2fcd7bb1be48173f684af4e2890170f0996c0d08fdd2e30fd9d23844fe4114" address="unix:///run/containerd/s/a1b721a23c61e306e776156e514738b4670da9efeb1478e8c8344a4cf2dc7ee0" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:54.331383 systemd[1]: Started cri-containerd-4c2fcd7bb1be48173f684af4e2890170f0996c0d08fdd2e30fd9d23844fe4114.scope - libcontainer container 4c2fcd7bb1be48173f684af4e2890170f0996c0d08fdd2e30fd9d23844fe4114. Sep 5 03:58:54.411398 containerd[1583]: time="2025-09-05T03:58:54.411115536Z" level=info msg="connecting to shim 6a8fa449174fe6dce5bd0270b1621a58e98548a9ea444974505b66efd05e8013" address="unix:///run/containerd/s/e3537728c1d8ac71fa8de01566768cace830b9fbac072eaa1525d65c0c3a0f57" namespace=k8s.io protocol=ttrpc version=3 Sep 5 03:58:54.464240 containerd[1583]: time="2025-09-05T03:58:54.463389620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf77m,Uid:838a1a75-13e4-4ea8-80bd-18b96deb79a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c2fcd7bb1be48173f684af4e2890170f0996c0d08fdd2e30fd9d23844fe4114\"" Sep 5 03:58:54.477161 containerd[1583]: time="2025-09-05T03:58:54.477071777Z" level=info msg="CreateContainer within sandbox \"4c2fcd7bb1be48173f684af4e2890170f0996c0d08fdd2e30fd9d23844fe4114\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 03:58:54.489789 systemd[1]: Started cri-containerd-6a8fa449174fe6dce5bd0270b1621a58e98548a9ea444974505b66efd05e8013.scope - libcontainer container 6a8fa449174fe6dce5bd0270b1621a58e98548a9ea444974505b66efd05e8013. 
Sep 5 03:58:54.511265 containerd[1583]: time="2025-09-05T03:58:54.508403534Z" level=info msg="Container ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:54.524483 containerd[1583]: time="2025-09-05T03:58:54.524362834Z" level=info msg="CreateContainer within sandbox \"4c2fcd7bb1be48173f684af4e2890170f0996c0d08fdd2e30fd9d23844fe4114\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753\"" Sep 5 03:58:54.526036 containerd[1583]: time="2025-09-05T03:58:54.525992005Z" level=info msg="StartContainer for \"ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753\"" Sep 5 03:58:54.530357 containerd[1583]: time="2025-09-05T03:58:54.530256486Z" level=info msg="connecting to shim ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753" address="unix:///run/containerd/s/a1b721a23c61e306e776156e514738b4670da9efeb1478e8c8344a4cf2dc7ee0" protocol=ttrpc version=3 Sep 5 03:58:54.564395 systemd[1]: Started cri-containerd-ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753.scope - libcontainer container ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753. 
Sep 5 03:58:54.651866 containerd[1583]: time="2025-09-05T03:58:54.651813272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p6v94,Uid:0ea408c4-5cf7-44ad-9c01-580c4cdc10cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a8fa449174fe6dce5bd0270b1621a58e98548a9ea444974505b66efd05e8013\"" Sep 5 03:58:54.653535 containerd[1583]: time="2025-09-05T03:58:54.653503651Z" level=info msg="StartContainer for \"ce50cf7a24ce0833024ee6a29cba90ffda8c1f424c175d7ac7f99bb5df08f753\" returns successfully" Sep 5 03:58:54.659209 containerd[1583]: time="2025-09-05T03:58:54.659090783Z" level=info msg="CreateContainer within sandbox \"6a8fa449174fe6dce5bd0270b1621a58e98548a9ea444974505b66efd05e8013\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 03:58:54.672612 containerd[1583]: time="2025-09-05T03:58:54.672573961Z" level=info msg="Container 585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14: CDI devices from CRI Config.CDIDevices: []" Sep 5 03:58:54.682749 containerd[1583]: time="2025-09-05T03:58:54.682683458Z" level=info msg="CreateContainer within sandbox \"6a8fa449174fe6dce5bd0270b1621a58e98548a9ea444974505b66efd05e8013\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14\"" Sep 5 03:58:54.685207 containerd[1583]: time="2025-09-05T03:58:54.683863418Z" level=info msg="StartContainer for \"585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14\"" Sep 5 03:58:54.688743 containerd[1583]: time="2025-09-05T03:58:54.688666458Z" level=info msg="connecting to shim 585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14" address="unix:///run/containerd/s/e3537728c1d8ac71fa8de01566768cace830b9fbac072eaa1525d65c0c3a0f57" protocol=ttrpc version=3 Sep 5 03:58:54.728387 systemd[1]: Started cri-containerd-585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14.scope - libcontainer container 
585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14. Sep 5 03:58:54.802222 containerd[1583]: time="2025-09-05T03:58:54.802063690Z" level=info msg="StartContainer for \"585429e424bf857f02c2c53cc65cd62d87fc39f91572aef0e66b7a5b90c2ac14\" returns successfully" Sep 5 03:58:55.221918 kubelet[2903]: I0905 03:58:55.221475 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p6v94" podStartSLOduration=29.221331066 podStartE2EDuration="29.221331066s" podCreationTimestamp="2025-09-05 03:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 03:58:55.217980536 +0000 UTC m=+32.545057659" watchObservedRunningTime="2025-09-05 03:58:55.221331066 +0000 UTC m=+32.548408175" Sep 5 03:58:55.287650 kubelet[2903]: I0905 03:58:55.287567 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vf77m" podStartSLOduration=29.287545876 podStartE2EDuration="29.287545876s" podCreationTimestamp="2025-09-05 03:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 03:58:55.284877078 +0000 UTC m=+32.611954199" watchObservedRunningTime="2025-09-05 03:58:55.287545876 +0000 UTC m=+32.614622975" Sep 5 03:59:49.979921 systemd[1]: Started sshd@9-10.230.58.50:22-139.178.89.65:59040.service - OpenSSH per-connection server daemon (139.178.89.65:59040). Sep 5 03:59:50.973136 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 59040 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:59:50.979064 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:59:51.005436 systemd-logind[1560]: New session 12 of user core. Sep 5 03:59:51.009504 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 5 03:59:52.161226 sshd[4226]: Connection closed by 139.178.89.65 port 59040 Sep 5 03:59:52.161501 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Sep 5 03:59:52.169837 systemd[1]: sshd@9-10.230.58.50:22-139.178.89.65:59040.service: Deactivated successfully. Sep 5 03:59:52.170675 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Sep 5 03:59:52.175032 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 03:59:52.178329 systemd-logind[1560]: Removed session 12. Sep 5 03:59:57.352785 systemd[1]: Started sshd@10-10.230.58.50:22-139.178.89.65:50614.service - OpenSSH per-connection server daemon (139.178.89.65:50614). Sep 5 03:59:58.387980 sshd[4243]: Accepted publickey for core from 139.178.89.65 port 50614 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 03:59:58.389973 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 03:59:58.397851 systemd-logind[1560]: New session 13 of user core. Sep 5 03:59:58.410451 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 03:59:59.124886 sshd[4248]: Connection closed by 139.178.89.65 port 50614 Sep 5 03:59:59.125472 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Sep 5 03:59:59.133756 systemd[1]: sshd@10-10.230.58.50:22-139.178.89.65:50614.service: Deactivated successfully. Sep 5 03:59:59.141219 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 03:59:59.144607 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Sep 5 03:59:59.147390 systemd-logind[1560]: Removed session 13. Sep 5 04:00:04.299767 systemd[1]: Started sshd@11-10.230.58.50:22-139.178.89.65:43066.service - OpenSSH per-connection server daemon (139.178.89.65:43066). 
Sep 5 04:00:05.292166 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 43066 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:05.296002 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:05.304678 systemd-logind[1560]: New session 14 of user core. Sep 5 04:00:05.316709 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 04:00:06.074445 sshd[4264]: Connection closed by 139.178.89.65 port 43066 Sep 5 04:00:06.075360 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:06.081105 systemd[1]: sshd@11-10.230.58.50:22-139.178.89.65:43066.service: Deactivated successfully. Sep 5 04:00:06.084744 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 04:00:06.090552 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Sep 5 04:00:06.095027 systemd-logind[1560]: Removed session 14. Sep 5 04:00:11.236482 systemd[1]: Started sshd@12-10.230.58.50:22-139.178.89.65:45338.service - OpenSSH per-connection server daemon (139.178.89.65:45338). Sep 5 04:00:12.206232 sshd[4277]: Accepted publickey for core from 139.178.89.65 port 45338 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:12.207748 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:12.215315 systemd-logind[1560]: New session 15 of user core. Sep 5 04:00:12.230461 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 04:00:12.968392 sshd[4280]: Connection closed by 139.178.89.65 port 45338 Sep 5 04:00:12.969621 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:12.977301 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Sep 5 04:00:12.980060 systemd[1]: sshd@12-10.230.58.50:22-139.178.89.65:45338.service: Deactivated successfully. 
Sep 5 04:00:12.984532 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 04:00:12.990883 systemd-logind[1560]: Removed session 15. Sep 5 04:00:13.127646 systemd[1]: Started sshd@13-10.230.58.50:22-139.178.89.65:45342.service - OpenSSH per-connection server daemon (139.178.89.65:45342). Sep 5 04:00:14.072345 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 45342 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:14.075207 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:14.082927 systemd-logind[1560]: New session 16 of user core. Sep 5 04:00:14.091502 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 04:00:14.879773 sshd[4296]: Connection closed by 139.178.89.65 port 45342 Sep 5 04:00:14.881140 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:14.887960 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Sep 5 04:00:14.891219 systemd[1]: sshd@13-10.230.58.50:22-139.178.89.65:45342.service: Deactivated successfully. Sep 5 04:00:14.895212 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 04:00:14.897839 systemd-logind[1560]: Removed session 16. Sep 5 04:00:15.041938 systemd[1]: Started sshd@14-10.230.58.50:22-139.178.89.65:45354.service - OpenSSH per-connection server daemon (139.178.89.65:45354). Sep 5 04:00:15.999839 sshd[4306]: Accepted publickey for core from 139.178.89.65 port 45354 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:16.001830 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:16.009055 systemd-logind[1560]: New session 17 of user core. Sep 5 04:00:16.018480 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 5 04:00:16.751620 sshd[4309]: Connection closed by 139.178.89.65 port 45354 Sep 5 04:00:16.752650 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:16.758767 systemd[1]: sshd@14-10.230.58.50:22-139.178.89.65:45354.service: Deactivated successfully. Sep 5 04:00:16.762309 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 04:00:16.764015 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Sep 5 04:00:16.766564 systemd-logind[1560]: Removed session 17. Sep 5 04:00:21.934564 systemd[1]: Started sshd@15-10.230.58.50:22-139.178.89.65:43410.service - OpenSSH per-connection server daemon (139.178.89.65:43410). Sep 5 04:00:22.895888 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 43410 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:22.898463 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:22.907573 systemd-logind[1560]: New session 18 of user core. Sep 5 04:00:22.918440 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 04:00:23.634523 sshd[4323]: Connection closed by 139.178.89.65 port 43410 Sep 5 04:00:23.637476 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:23.643919 systemd[1]: sshd@15-10.230.58.50:22-139.178.89.65:43410.service: Deactivated successfully. Sep 5 04:00:23.646862 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 04:00:23.648275 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Sep 5 04:00:23.652140 systemd-logind[1560]: Removed session 18. Sep 5 04:00:28.807693 systemd[1]: Started sshd@16-10.230.58.50:22-139.178.89.65:43426.service - OpenSSH per-connection server daemon (139.178.89.65:43426). 
Sep 5 04:00:30.173703 sshd[4339]: Accepted publickey for core from 139.178.89.65 port 43426 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:30.175800 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:30.183719 systemd-logind[1560]: New session 19 of user core. Sep 5 04:00:30.195416 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 04:00:30.908527 sshd[4342]: Connection closed by 139.178.89.65 port 43426 Sep 5 04:00:30.909536 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:30.916094 systemd[1]: sshd@16-10.230.58.50:22-139.178.89.65:43426.service: Deactivated successfully. Sep 5 04:00:30.919731 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 04:00:30.922323 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Sep 5 04:00:30.925870 systemd-logind[1560]: Removed session 19. Sep 5 04:00:31.075814 systemd[1]: Started sshd@17-10.230.58.50:22-139.178.89.65:49164.service - OpenSSH per-connection server daemon (139.178.89.65:49164). Sep 5 04:00:32.006289 sshd[4354]: Accepted publickey for core from 139.178.89.65 port 49164 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:32.008274 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:32.015810 systemd-logind[1560]: New session 20 of user core. Sep 5 04:00:32.025757 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 04:00:33.173780 sshd[4357]: Connection closed by 139.178.89.65 port 49164 Sep 5 04:00:33.175837 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:33.190098 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Sep 5 04:00:33.190450 systemd[1]: sshd@17-10.230.58.50:22-139.178.89.65:49164.service: Deactivated successfully. 
Sep 5 04:00:33.194889 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 04:00:33.197330 systemd-logind[1560]: Removed session 20. Sep 5 04:00:33.360492 systemd[1]: Started sshd@18-10.230.58.50:22-139.178.89.65:49170.service - OpenSSH per-connection server daemon (139.178.89.65:49170). Sep 5 04:00:34.421801 sshd[4367]: Accepted publickey for core from 139.178.89.65 port 49170 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:34.424089 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:34.437093 systemd-logind[1560]: New session 21 of user core. Sep 5 04:00:34.445477 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 04:00:36.019694 sshd[4370]: Connection closed by 139.178.89.65 port 49170 Sep 5 04:00:36.020215 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:36.027497 systemd[1]: sshd@18-10.230.58.50:22-139.178.89.65:49170.service: Deactivated successfully. Sep 5 04:00:36.030840 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 04:00:36.033540 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. Sep 5 04:00:36.035470 systemd-logind[1560]: Removed session 21. Sep 5 04:00:36.175848 systemd[1]: Started sshd@19-10.230.58.50:22-139.178.89.65:49184.service - OpenSSH per-connection server daemon (139.178.89.65:49184). Sep 5 04:00:37.155866 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 49184 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:37.157907 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:37.164664 systemd-logind[1560]: New session 22 of user core. Sep 5 04:00:37.171428 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 5 04:00:38.110603 sshd[4390]: Connection closed by 139.178.89.65 port 49184 Sep 5 04:00:38.111848 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:38.118382 systemd[1]: sshd@19-10.230.58.50:22-139.178.89.65:49184.service: Deactivated successfully. Sep 5 04:00:38.121700 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 04:00:38.123568 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Sep 5 04:00:38.127407 systemd-logind[1560]: Removed session 22. Sep 5 04:00:38.271554 systemd[1]: Started sshd@20-10.230.58.50:22-139.178.89.65:49188.service - OpenSSH per-connection server daemon (139.178.89.65:49188). Sep 5 04:00:39.229986 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 49188 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:39.232039 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:39.239134 systemd-logind[1560]: New session 23 of user core. Sep 5 04:00:39.247455 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 04:00:39.980486 sshd[4403]: Connection closed by 139.178.89.65 port 49188 Sep 5 04:00:39.980223 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:39.986997 systemd[1]: sshd@20-10.230.58.50:22-139.178.89.65:49188.service: Deactivated successfully. Sep 5 04:00:39.990841 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 04:00:39.993031 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Sep 5 04:00:39.996403 systemd-logind[1560]: Removed session 23. Sep 5 04:00:45.145225 systemd[1]: Started sshd@21-10.230.58.50:22-139.178.89.65:41612.service - OpenSSH per-connection server daemon (139.178.89.65:41612). 
Sep 5 04:00:46.096522 sshd[4415]: Accepted publickey for core from 139.178.89.65 port 41612 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:46.098633 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:46.108331 systemd-logind[1560]: New session 24 of user core. Sep 5 04:00:46.110462 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 04:00:46.833154 sshd[4420]: Connection closed by 139.178.89.65 port 41612 Sep 5 04:00:46.833515 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:46.839683 systemd[1]: sshd@21-10.230.58.50:22-139.178.89.65:41612.service: Deactivated successfully. Sep 5 04:00:46.842431 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 04:00:46.845290 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Sep 5 04:00:46.847001 systemd-logind[1560]: Removed session 24. Sep 5 04:00:52.008535 systemd[1]: Started sshd@22-10.230.58.50:22-139.178.89.65:45830.service - OpenSSH per-connection server daemon (139.178.89.65:45830). Sep 5 04:00:53.002359 sshd[4432]: Accepted publickey for core from 139.178.89.65 port 45830 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:53.004371 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:53.012310 systemd-logind[1560]: New session 25 of user core. Sep 5 04:00:53.020600 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 04:00:53.774229 sshd[4435]: Connection closed by 139.178.89.65 port 45830 Sep 5 04:00:53.775123 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Sep 5 04:00:53.782997 systemd[1]: sshd@22-10.230.58.50:22-139.178.89.65:45830.service: Deactivated successfully. Sep 5 04:00:53.787582 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 04:00:53.791776 systemd-logind[1560]: Session 25 logged out. 
Waiting for processes to exit. Sep 5 04:00:53.793906 systemd-logind[1560]: Removed session 25. Sep 5 04:00:58.926510 systemd[1]: Started sshd@23-10.230.58.50:22-139.178.89.65:45834.service - OpenSSH per-connection server daemon (139.178.89.65:45834). Sep 5 04:00:59.865599 sshd[4448]: Accepted publickey for core from 139.178.89.65 port 45834 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:00:59.867597 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:00:59.878685 systemd-logind[1560]: New session 26 of user core. Sep 5 04:00:59.884444 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 04:01:00.575031 sshd[4451]: Connection closed by 139.178.89.65 port 45834 Sep 5 04:01:00.576133 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Sep 5 04:01:00.583302 systemd[1]: sshd@23-10.230.58.50:22-139.178.89.65:45834.service: Deactivated successfully. Sep 5 04:01:00.586115 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 04:01:00.587588 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit. Sep 5 04:01:00.590935 systemd-logind[1560]: Removed session 26. Sep 5 04:01:00.742943 systemd[1]: Started sshd@24-10.230.58.50:22-139.178.89.65:42280.service - OpenSSH per-connection server daemon (139.178.89.65:42280). Sep 5 04:01:01.709072 sshd[4462]: Accepted publickey for core from 139.178.89.65 port 42280 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:01:01.710542 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:01:01.719247 systemd-logind[1560]: New session 27 of user core. Sep 5 04:01:01.727450 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 5 04:01:03.791792 containerd[1583]: time="2025-09-05T04:01:03.790715260Z" level=info msg="StopContainer for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" with timeout 30 (s)"
Sep 5 04:01:03.793263 containerd[1583]: time="2025-09-05T04:01:03.793230839Z" level=info msg="Stop container \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" with signal terminated"
Sep 5 04:01:03.856120 containerd[1583]: time="2025-09-05T04:01:03.856060281Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 04:01:03.860728 systemd[1]: cri-containerd-4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281.scope: Deactivated successfully.
Sep 5 04:01:03.864745 containerd[1583]: time="2025-09-05T04:01:03.864537002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" id:\"36cbe67b296ad0100cbe6c190b5ffd176d34ab779f8beccfe39bef56ec5355e5\" pid:4489 exited_at:{seconds:1757044863 nanos:862547935}"
Sep 5 04:01:03.867829 containerd[1583]: time="2025-09-05T04:01:03.867689435Z" level=info msg="received exit event container_id:\"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" id:\"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" pid:3503 exited_at:{seconds:1757044863 nanos:867279397}"
Sep 5 04:01:03.868131 containerd[1583]: time="2025-09-05T04:01:03.868100033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" id:\"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" pid:3503 exited_at:{seconds:1757044863 nanos:867279397}"
Sep 5 04:01:03.870726 containerd[1583]: time="2025-09-05T04:01:03.870689680Z" level=info msg="StopContainer for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" with timeout 2 (s)"
Sep 5 04:01:03.871296 containerd[1583]: time="2025-09-05T04:01:03.871164247Z" level=info msg="Stop container \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" with signal terminated"
Sep 5 04:01:03.905149 systemd-networkd[1513]: lxc_health: Link DOWN
Sep 5 04:01:03.905165 systemd-networkd[1513]: lxc_health: Lost carrier
Sep 5 04:01:03.948881 systemd[1]: cri-containerd-84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1.scope: Deactivated successfully.
Sep 5 04:01:03.949925 systemd[1]: cri-containerd-84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1.scope: Consumed 10.590s CPU time, 191.9M memory peak, 72.3M read from disk, 13.3M written to disk.
Sep 5 04:01:03.952062 containerd[1583]: time="2025-09-05T04:01:03.951688045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" id:\"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" pid:3533 exited_at:{seconds:1757044863 nanos:950600012}"
Sep 5 04:01:03.952816 containerd[1583]: time="2025-09-05T04:01:03.952646089Z" level=info msg="received exit event container_id:\"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" id:\"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" pid:3533 exited_at:{seconds:1757044863 nanos:950600012}"
Sep 5 04:01:03.963698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281-rootfs.mount: Deactivated successfully.
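The `exited_at:{seconds:... nanos:...}` pairs in these TaskExit events are plain Unix epoch values, so they can be cross-checked against the wall-clock prefixes of the surrounding journal lines. A small sketch (the helper name `exited_at_to_iso` is ours):

```python
from datetime import datetime, timezone

def exited_at_to_iso(seconds: int, nanos: int = 0) -> str:
    """Render a containerd exited_at {seconds, nanos} pair as ISO-8601 UTC."""
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

# exited_at from the 84b227a1... TaskExit event above
print(exited_at_to_iso(1757044863, 950600012))  # → 2025-09-05T04:01:03.950600012Z
```

The result matches the journal prefix on the same entry (Sep 5 04:01:03.952062, logged a millisecond or so after the exit).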
Sep 5 04:01:03.974213 containerd[1583]: time="2025-09-05T04:01:03.973976416Z" level=info msg="StopContainer for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" returns successfully"
Sep 5 04:01:03.976252 containerd[1583]: time="2025-09-05T04:01:03.976174417Z" level=info msg="StopPodSandbox for \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\""
Sep 5 04:01:03.976358 containerd[1583]: time="2025-09-05T04:01:03.976319459Z" level=info msg="Container to stop \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 04:01:03.995110 systemd[1]: cri-containerd-8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd.scope: Deactivated successfully.
Sep 5 04:01:04.002029 containerd[1583]: time="2025-09-05T04:01:04.001896464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" id:\"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" pid:3124 exit_status:137 exited_at:{seconds:1757044863 nanos:997965307}"
Sep 5 04:01:04.009226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1-rootfs.mount: Deactivated successfully.
Sep 5 04:01:04.019751 containerd[1583]: time="2025-09-05T04:01:04.019567764Z" level=info msg="StopContainer for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" returns successfully"
Sep 5 04:01:04.021025 containerd[1583]: time="2025-09-05T04:01:04.020640766Z" level=info msg="StopPodSandbox for \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\""
Sep 5 04:01:04.021025 containerd[1583]: time="2025-09-05T04:01:04.020733239Z" level=info msg="Container to stop \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 04:01:04.021025 containerd[1583]: time="2025-09-05T04:01:04.020756202Z" level=info msg="Container to stop \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 04:01:04.021025 containerd[1583]: time="2025-09-05T04:01:04.020771513Z" level=info msg="Container to stop \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 04:01:04.021025 containerd[1583]: time="2025-09-05T04:01:04.020786076Z" level=info msg="Container to stop \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 04:01:04.021025 containerd[1583]: time="2025-09-05T04:01:04.020798797Z" level=info msg="Container to stop \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 04:01:04.035914 systemd[1]: cri-containerd-c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a.scope: Deactivated successfully.
Sep 5 04:01:04.100739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd-rootfs.mount: Deactivated successfully.
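In this capture, multiple journal entries frequently run together on one long line. Since every entry starts with the same `Sep 5 HH:MM:SS.micro` prefix, a blob can be split back into one record per line with a zero-width lookahead. A minimal sketch (the helper name `split_entries` and the regex are ours, tailored to this capture's prefix format):

```python
import re

# Timestamp prefix used by this capture, e.g. "Sep 5 04:01:04.019751 "
PREFIX = re.compile(r"(?=Sep 5 \d{2}:\d{2}:\d{2}\.\d{6} )")

def split_entries(blob: str) -> list[str]:
    """Split a run-together journal blob into individual entries."""
    return [part.strip() for part in PREFIX.split(blob) if part.strip()]

blob = ("Sep 5 04:01:04.019751 containerd[1583]: first entry "
        "Sep 5 04:01:04.021025 containerd[1583]: second entry")
print(len(split_entries(blob)))  # → 2
```

Zero-width splits require Python 3.7+; on older versions `re.split` refuses patterns that can match an empty string.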
Sep 5 04:01:04.106021 containerd[1583]: time="2025-09-05T04:01:04.105768881Z" level=info msg="shim disconnected" id=8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd namespace=k8s.io
Sep 5 04:01:04.106021 containerd[1583]: time="2025-09-05T04:01:04.105822276Z" level=warning msg="cleaning up after shim disconnected" id=8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd namespace=k8s.io
Sep 5 04:01:04.106021 containerd[1583]: time="2025-09-05T04:01:04.105836677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 04:01:04.113209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a-rootfs.mount: Deactivated successfully.
Sep 5 04:01:04.116861 containerd[1583]: time="2025-09-05T04:01:04.116815742Z" level=info msg="shim disconnected" id=c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a namespace=k8s.io
Sep 5 04:01:04.117058 containerd[1583]: time="2025-09-05T04:01:04.117028086Z" level=warning msg="cleaning up after shim disconnected" id=c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a namespace=k8s.io
Sep 5 04:01:04.117235 containerd[1583]: time="2025-09-05T04:01:04.117150937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 04:01:04.156213 containerd[1583]: time="2025-09-05T04:01:04.155749894Z" level=info msg="received exit event sandbox_id:\"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" exit_status:137 exited_at:{seconds:1757044864 nanos:36132200}"
Sep 5 04:01:04.156213 containerd[1583]: time="2025-09-05T04:01:04.156174969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" id:\"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" pid:3040 exit_status:137 exited_at:{seconds:1757044864 nanos:36132200}"
Sep 5 04:01:04.159395 containerd[1583]: time="2025-09-05T04:01:04.158408975Z" level=info msg="TearDown network for sandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" successfully"
Sep 5 04:01:04.159395 containerd[1583]: time="2025-09-05T04:01:04.158444130Z" level=info msg="StopPodSandbox for \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" returns successfully"
Sep 5 04:01:04.159395 containerd[1583]: time="2025-09-05T04:01:04.158599866Z" level=info msg="received exit event sandbox_id:\"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" exit_status:137 exited_at:{seconds:1757044863 nanos:997965307}"
Sep 5 04:01:04.159035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd-shm.mount: Deactivated successfully.
Sep 5 04:01:04.161204 containerd[1583]: time="2025-09-05T04:01:04.160261140Z" level=info msg="TearDown network for sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" successfully"
Sep 5 04:01:04.161204 containerd[1583]: time="2025-09-05T04:01:04.160295335Z" level=info msg="StopPodSandbox for \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" returns successfully"
Sep 5 04:01:04.308818 kubelet[2903]: I0905 04:01:04.308733 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-kernel\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309668 kubelet[2903]: I0905 04:01:04.308836 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-config-path\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309668 kubelet[2903]: I0905 04:01:04.308880 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hostproc\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309668 kubelet[2903]: I0905 04:01:04.308914 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b626b31a-9266-4fde-97c1-f352392c78b7-cilium-config-path\") pod \"b626b31a-9266-4fde-97c1-f352392c78b7\" (UID: \"b626b31a-9266-4fde-97c1-f352392c78b7\") "
Sep 5 04:01:04.309668 kubelet[2903]: I0905 04:01:04.308956 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hubble-tls\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309668 kubelet[2903]: I0905 04:01:04.308996 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-run\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309668 kubelet[2903]: I0905 04:01:04.309035 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-etc-cni-netd\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309936 kubelet[2903]: I0905 04:01:04.309089 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-cgroup\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309936 kubelet[2903]: I0905 04:01:04.309125 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzfmf\" (UniqueName: \"kubernetes.io/projected/b626b31a-9266-4fde-97c1-f352392c78b7-kube-api-access-bzfmf\") pod \"b626b31a-9266-4fde-97c1-f352392c78b7\" (UID: \"b626b31a-9266-4fde-97c1-f352392c78b7\") "
Sep 5 04:01:04.309936 kubelet[2903]: I0905 04:01:04.309153 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cni-path\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309936 kubelet[2903]: I0905 04:01:04.309215 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-net\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309936 kubelet[2903]: I0905 04:01:04.309249 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xslq5\" (UniqueName: \"kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-kube-api-access-xslq5\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.309936 kubelet[2903]: I0905 04:01:04.309277 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-bpf-maps\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.310198 kubelet[2903]: I0905 04:01:04.309348 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-clustermesh-secrets\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.310198 kubelet[2903]: I0905 04:01:04.309374 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-lib-modules\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.310198 kubelet[2903]: I0905 04:01:04.309400 2903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-xtables-lock\") pod \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\" (UID: \"0b6b07a6-76c3-4e64-bf73-ed99f617b1d7\") "
Sep 5 04:01:04.310198 kubelet[2903]: I0905 04:01:04.309565 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.310198 kubelet[2903]: I0905 04:01:04.310142 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.313799 kubelet[2903]: I0905 04:01:04.313514 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.313799 kubelet[2903]: I0905 04:01:04.313588 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.315501 kubelet[2903]: I0905 04:01:04.315462 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.315754 kubelet[2903]: I0905 04:01:04.315717 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.319212 kubelet[2903]: I0905 04:01:04.316680 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.321263 kubelet[2903]: I0905 04:01:04.321232 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.322203 kubelet[2903]: I0905 04:01:04.321389 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.322203 kubelet[2903]: I0905 04:01:04.321432 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 04:01:04.322872 kubelet[2903]: I0905 04:01:04.322836 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 5 04:01:04.325677 kubelet[2903]: I0905 04:01:04.325643 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b626b31a-9266-4fde-97c1-f352392c78b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b626b31a-9266-4fde-97c1-f352392c78b7" (UID: "b626b31a-9266-4fde-97c1-f352392c78b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 5 04:01:04.334031 kubelet[2903]: I0905 04:01:04.333977 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-kube-api-access-xslq5" (OuterVolumeSpecName: "kube-api-access-xslq5") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "kube-api-access-xslq5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 04:01:04.334427 kubelet[2903]: I0905 04:01:04.334327 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 5 04:01:04.335229 kubelet[2903]: I0905 04:01:04.335198 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" (UID: "0b6b07a6-76c3-4e64-bf73-ed99f617b1d7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 04:01:04.335787 kubelet[2903]: I0905 04:01:04.335659 2903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b626b31a-9266-4fde-97c1-f352392c78b7-kube-api-access-bzfmf" (OuterVolumeSpecName: "kube-api-access-bzfmf") pod "b626b31a-9266-4fde-97c1-f352392c78b7" (UID: "b626b31a-9266-4fde-97c1-f352392c78b7"). InnerVolumeSpecName "kube-api-access-bzfmf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 04:01:04.410791 kubelet[2903]: I0905 04:01:04.410346 2903 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-cgroup\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.410791 kubelet[2903]: I0905 04:01:04.410511 2903 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bzfmf\" (UniqueName: \"kubernetes.io/projected/b626b31a-9266-4fde-97c1-f352392c78b7-kube-api-access-bzfmf\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.410791 kubelet[2903]: I0905 04:01:04.410539 2903 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xslq5\" (UniqueName: \"kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-kube-api-access-xslq5\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.410791 kubelet[2903]: I0905 04:01:04.410678 2903 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-bpf-maps\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.410791 kubelet[2903]: I0905 04:01:04.410703 2903 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cni-path\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.410718 2903 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-net\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.410866 2903 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-clustermesh-secrets\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.410885 2903 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-lib-modules\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.411121 2903 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-xtables-lock\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.411192 2903 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-config-path\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.411220 2903 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hostproc\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.411242 2903 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b626b31a-9266-4fde-97c1-f352392c78b7-cilium-config-path\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.411887 kubelet[2903]: I0905 04:01:04.411257 2903 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-host-proc-sys-kernel\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.412332 kubelet[2903]: I0905 04:01:04.411274 2903 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-cilium-run\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.412332 kubelet[2903]: I0905 04:01:04.411331 2903 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-etc-cni-netd\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.412332 kubelet[2903]: I0905 04:01:04.411354 2903 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7-hubble-tls\") on node \"srv-86xia.gb1.brightbox.com\" DevicePath \"\""
Sep 5 04:01:04.574588 kubelet[2903]: I0905 04:01:04.574432 2903 scope.go:117] "RemoveContainer" containerID="4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281"
Sep 5 04:01:04.586441 containerd[1583]: time="2025-09-05T04:01:04.586038927Z" level=info msg="RemoveContainer for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\""
Sep 5 04:01:04.592367 systemd[1]: Removed slice kubepods-besteffort-podb626b31a_9266_4fde_97c1_f352392c78b7.slice - libcontainer container kubepods-besteffort-podb626b31a_9266_4fde_97c1_f352392c78b7.slice.
Sep 5 04:01:04.603365 containerd[1583]: time="2025-09-05T04:01:04.603292437Z" level=info msg="RemoveContainer for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" returns successfully"
Sep 5 04:01:04.608166 kubelet[2903]: I0905 04:01:04.608104 2903 scope.go:117] "RemoveContainer" containerID="4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281"
Sep 5 04:01:04.610043 containerd[1583]: time="2025-09-05T04:01:04.609952982Z" level=error msg="ContainerStatus for \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\": not found"
Sep 5 04:01:04.611697 kubelet[2903]: E0905 04:01:04.611562 2903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\": not found" containerID="4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281"
Sep 5 04:01:04.612142 kubelet[2903]: I0905 04:01:04.611822 2903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281"} err="failed to get container status \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\": rpc error: code = NotFound desc = an error occurred when try to find container \"4da68b4f686d6c2e4c101c23d30ee342543ec535735489be98fc3cf16030a281\": not found"
Sep 5 04:01:04.612142 kubelet[2903]: I0905 04:01:04.612075 2903 scope.go:117] "RemoveContainer" containerID="84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1"
Sep 5 04:01:04.616376 systemd[1]: Removed slice kubepods-burstable-pod0b6b07a6_76c3_4e64_bf73_ed99f617b1d7.slice - libcontainer container kubepods-burstable-pod0b6b07a6_76c3_4e64_bf73_ed99f617b1d7.slice.
Sep 5 04:01:04.616940 systemd[1]: kubepods-burstable-pod0b6b07a6_76c3_4e64_bf73_ed99f617b1d7.slice: Consumed 10.752s CPU time, 192.3M memory peak, 72.3M read from disk, 13.3M written to disk.
Sep 5 04:01:04.619170 containerd[1583]: time="2025-09-05T04:01:04.618720391Z" level=info msg="RemoveContainer for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\""
Sep 5 04:01:04.637812 containerd[1583]: time="2025-09-05T04:01:04.637758527Z" level=info msg="RemoveContainer for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" returns successfully"
Sep 5 04:01:04.638757 kubelet[2903]: I0905 04:01:04.638677 2903 scope.go:117] "RemoveContainer" containerID="a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5"
Sep 5 04:01:04.643430 containerd[1583]: time="2025-09-05T04:01:04.643135606Z" level=info msg="RemoveContainer for \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\""
Sep 5 04:01:04.649501 containerd[1583]: time="2025-09-05T04:01:04.649388757Z" level=info msg="RemoveContainer for \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" returns successfully"
Sep 5 04:01:04.649914 kubelet[2903]: I0905 04:01:04.649693 2903 scope.go:117] "RemoveContainer" containerID="0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8"
Sep 5 04:01:04.657209 containerd[1583]: time="2025-09-05T04:01:04.656019405Z" level=info msg="RemoveContainer for \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\""
Sep 5 04:01:04.664928 containerd[1583]: time="2025-09-05T04:01:04.664679645Z" level=info msg="RemoveContainer for \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" returns successfully"
Sep 5 04:01:04.666161 kubelet[2903]: I0905 04:01:04.666057 2903 scope.go:117] "RemoveContainer" containerID="820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626"
Sep 5 04:01:04.668667 containerd[1583]: time="2025-09-05T04:01:04.668579347Z" level=info msg="RemoveContainer for \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\""
Sep 5 04:01:04.673290 containerd[1583]: time="2025-09-05T04:01:04.673254790Z" level=info msg="RemoveContainer for \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" returns successfully"
Sep 5 04:01:04.673492 kubelet[2903]: I0905 04:01:04.673451 2903 scope.go:117] "RemoveContainer" containerID="035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae"
Sep 5 04:01:04.675547 containerd[1583]: time="2025-09-05T04:01:04.675502631Z" level=info msg="RemoveContainer for \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\""
Sep 5 04:01:04.680702 containerd[1583]: time="2025-09-05T04:01:04.680650708Z" level=info msg="RemoveContainer for \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" returns successfully"
Sep 5 04:01:04.683247 kubelet[2903]: I0905 04:01:04.683213 2903 scope.go:117] "RemoveContainer" containerID="84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1"
Sep 5 04:01:04.684809 containerd[1583]: time="2025-09-05T04:01:04.684763338Z" level=error msg="ContainerStatus for \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\": not found"
Sep 5 04:01:04.685080 kubelet[2903]: E0905 04:01:04.685046 2903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\": not found" containerID="84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1"
Sep 5 04:01:04.685377 kubelet[2903]: I0905 04:01:04.685261 2903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1"} err="failed to get container status \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"84b227a1d75bf00458413f3e9ebf93d066478ac3b1c9ec1fd7d8834b05a717d1\": not found"
Sep 5 04:01:04.685377 kubelet[2903]: I0905 04:01:04.685330 2903 scope.go:117] "RemoveContainer" containerID="a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5"
Sep 5 04:01:04.687015 containerd[1583]: time="2025-09-05T04:01:04.686969423Z" level=error msg="ContainerStatus for \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\": not found"
Sep 5 04:01:04.687700 kubelet[2903]: E0905 04:01:04.687544 2903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\": not found" containerID="a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5"
Sep 5 04:01:04.687700 kubelet[2903]: I0905 04:01:04.687597 2903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5"} err="failed to get container status \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5a6ed9deafc3d0973d029ad5de1b6f0c51d69a9eb59465d8ce035d1c75edbb5\": not found"
Sep 5 04:01:04.687700 kubelet[2903]: I0905 04:01:04.687637 2903 scope.go:117] "RemoveContainer" containerID="0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8"
Sep 5 04:01:04.688338 containerd[1583]: time="2025-09-05T04:01:04.688064297Z" level=error msg="ContainerStatus for \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\": not found"
Sep 5 04:01:04.688405 kubelet[2903]: E0905 04:01:04.688220 2903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\": not found" containerID="0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8"
Sep 5 04:01:04.688405 kubelet[2903]: I0905 04:01:04.688254 2903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8"} err="failed to get container status \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0564f005822f62d9ce1fe1e0501512a9da169d0808b8d765df23d8d50ef4e0e8\": not found"
Sep 5 04:01:04.688405 kubelet[2903]: I0905 04:01:04.688276 2903 scope.go:117] "RemoveContainer" containerID="820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626"
Sep 5 04:01:04.688774 containerd[1583]: time="2025-09-05T04:01:04.688736525Z" level=error msg="ContainerStatus for \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\": not found"
Sep 5 04:01:04.689211 kubelet[2903]: E0905 04:01:04.689078 2903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an
error occurred when try to find container \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\": not found" containerID="820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626" Sep 5 04:01:04.689211 kubelet[2903]: I0905 04:01:04.689138 2903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626"} err="failed to get container status \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\": rpc error: code = NotFound desc = an error occurred when try to find container \"820492e7460e4b6aa2141276b6f90959e7fbabef2b8b120b105d0732994bc626\": not found" Sep 5 04:01:04.689211 kubelet[2903]: I0905 04:01:04.689162 2903 scope.go:117] "RemoveContainer" containerID="035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae" Sep 5 04:01:04.689706 containerd[1583]: time="2025-09-05T04:01:04.689668286Z" level=error msg="ContainerStatus for \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\": not found" Sep 5 04:01:04.689990 kubelet[2903]: E0905 04:01:04.689963 2903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\": not found" containerID="035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae" Sep 5 04:01:04.690255 kubelet[2903]: I0905 04:01:04.690198 2903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae"} err="failed to get container status \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"035eba94eb66db9692e8821b3131005bff6f4fde9e2190663a0314a2f07476ae\": not found" Sep 5 04:01:04.928702 kubelet[2903]: I0905 04:01:04.928519 2903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" path="/var/lib/kubelet/pods/0b6b07a6-76c3-4e64-bf73-ed99f617b1d7/volumes" Sep 5 04:01:04.932246 kubelet[2903]: I0905 04:01:04.932209 2903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b626b31a-9266-4fde-97c1-f352392c78b7" path="/var/lib/kubelet/pods/b626b31a-9266-4fde-97c1-f352392c78b7/volumes" Sep 5 04:01:04.962090 systemd[1]: var-lib-kubelet-pods-b626b31a\x2d9266\x2d4fde\x2d97c1\x2df352392c78b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzfmf.mount: Deactivated successfully. Sep 5 04:01:04.962266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a-shm.mount: Deactivated successfully. Sep 5 04:01:04.962398 systemd[1]: var-lib-kubelet-pods-0b6b07a6\x2d76c3\x2d4e64\x2dbf73\x2ded99f617b1d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxslq5.mount: Deactivated successfully. Sep 5 04:01:04.962513 systemd[1]: var-lib-kubelet-pods-0b6b07a6\x2d76c3\x2d4e64\x2dbf73\x2ded99f617b1d7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 5 04:01:04.962659 systemd[1]: var-lib-kubelet-pods-0b6b07a6\x2d76c3\x2d4e64\x2dbf73\x2ded99f617b1d7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 5 04:01:05.844586 sshd[4465]: Connection closed by 139.178.89.65 port 42280 Sep 5 04:01:05.845505 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Sep 5 04:01:05.853997 systemd[1]: sshd@24-10.230.58.50:22-139.178.89.65:42280.service: Deactivated successfully. Sep 5 04:01:05.857588 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 04:01:05.859529 systemd-logind[1560]: Session 27 logged out. 
Waiting for processes to exit. Sep 5 04:01:05.862047 systemd-logind[1560]: Removed session 27. Sep 5 04:01:06.018243 systemd[1]: Started sshd@25-10.230.58.50:22-139.178.89.65:42292.service - OpenSSH per-connection server daemon (139.178.89.65:42292). Sep 5 04:01:07.019462 sshd[4617]: Accepted publickey for core from 139.178.89.65 port 42292 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:01:07.021520 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:01:07.029067 systemd-logind[1560]: New session 28 of user core. Sep 5 04:01:07.038455 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 04:01:08.178089 kubelet[2903]: E0905 04:01:08.178009 2903 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 04:01:08.580495 kubelet[2903]: I0905 04:01:08.580329 2903 memory_manager.go:355] "RemoveStaleState removing state" podUID="b626b31a-9266-4fde-97c1-f352392c78b7" containerName="cilium-operator" Sep 5 04:01:08.582373 kubelet[2903]: I0905 04:01:08.580759 2903 memory_manager.go:355] "RemoveStaleState removing state" podUID="0b6b07a6-76c3-4e64-bf73-ed99f617b1d7" containerName="cilium-agent" Sep 5 04:01:08.599997 systemd[1]: Created slice kubepods-burstable-pode50035ba_95a4_43c8_b7f3_cf6870160444.slice - libcontainer container kubepods-burstable-pode50035ba_95a4_43c8_b7f3_cf6870160444.slice. Sep 5 04:01:08.719777 sshd[4620]: Connection closed by 139.178.89.65 port 42292 Sep 5 04:01:08.720381 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Sep 5 04:01:08.728071 systemd[1]: sshd@25-10.230.58.50:22-139.178.89.65:42292.service: Deactivated successfully. Sep 5 04:01:08.731340 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 04:01:08.733060 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit. 
Sep 5 04:01:08.735905 systemd-logind[1560]: Removed session 28. Sep 5 04:01:08.743660 kubelet[2903]: I0905 04:01:08.742775 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-host-proc-sys-net\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.743660 kubelet[2903]: I0905 04:01:08.742896 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-hostproc\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.743660 kubelet[2903]: I0905 04:01:08.742948 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-xtables-lock\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.743660 kubelet[2903]: I0905 04:01:08.742979 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e50035ba-95a4-43c8-b7f3-cf6870160444-hubble-tls\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.743660 kubelet[2903]: I0905 04:01:08.743034 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e50035ba-95a4-43c8-b7f3-cf6870160444-cilium-config-path\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.743660 kubelet[2903]: I0905 04:01:08.743064 2903 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e50035ba-95a4-43c8-b7f3-cf6870160444-clustermesh-secrets\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744118 kubelet[2903]: I0905 04:01:08.743110 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrjw\" (UniqueName: \"kubernetes.io/projected/e50035ba-95a4-43c8-b7f3-cf6870160444-kube-api-access-2qrjw\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744118 kubelet[2903]: I0905 04:01:08.743144 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e50035ba-95a4-43c8-b7f3-cf6870160444-cilium-ipsec-secrets\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744118 kubelet[2903]: I0905 04:01:08.743227 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-cilium-run\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744118 kubelet[2903]: I0905 04:01:08.743259 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-bpf-maps\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744118 kubelet[2903]: I0905 04:01:08.743313 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-cni-path\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744118 kubelet[2903]: I0905 04:01:08.743349 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-etc-cni-netd\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744402 kubelet[2903]: I0905 04:01:08.743376 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-lib-modules\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744402 kubelet[2903]: I0905 04:01:08.743403 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-cilium-cgroup\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.744402 kubelet[2903]: I0905 04:01:08.743466 2903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e50035ba-95a4-43c8-b7f3-cf6870160444-host-proc-sys-kernel\") pod \"cilium-tpshf\" (UID: \"e50035ba-95a4-43c8-b7f3-cf6870160444\") " pod="kube-system/cilium-tpshf" Sep 5 04:01:08.909617 containerd[1583]: time="2025-09-05T04:01:08.908332492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpshf,Uid:e50035ba-95a4-43c8-b7f3-cf6870160444,Namespace:kube-system,Attempt:0,}" Sep 5 04:01:08.917714 systemd[1]: Started 
sshd@26-10.230.58.50:22-139.178.89.65:42304.service - OpenSSH per-connection server daemon (139.178.89.65:42304). Sep 5 04:01:08.963218 containerd[1583]: time="2025-09-05T04:01:08.962451484Z" level=info msg="connecting to shim a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db" address="unix:///run/containerd/s/7a15e521b1d7ebf414c4dcba7156ccdd54cf62246026ef23962af075b3d89ae8" namespace=k8s.io protocol=ttrpc version=3 Sep 5 04:01:09.002567 systemd[1]: Started cri-containerd-a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db.scope - libcontainer container a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db. Sep 5 04:01:09.072001 containerd[1583]: time="2025-09-05T04:01:09.071805625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpshf,Uid:e50035ba-95a4-43c8-b7f3-cf6870160444,Namespace:kube-system,Attempt:0,} returns sandbox id \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\"" Sep 5 04:01:09.086480 containerd[1583]: time="2025-09-05T04:01:09.085579724Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 04:01:09.110936 containerd[1583]: time="2025-09-05T04:01:09.110870963Z" level=info msg="Container fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe: CDI devices from CRI Config.CDIDevices: []" Sep 5 04:01:09.118902 containerd[1583]: time="2025-09-05T04:01:09.118860938Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\"" Sep 5 04:01:09.120379 containerd[1583]: time="2025-09-05T04:01:09.120348804Z" level=info msg="StartContainer for \"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\"" Sep 5 04:01:09.123018 
containerd[1583]: time="2025-09-05T04:01:09.122984757Z" level=info msg="connecting to shim fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe" address="unix:///run/containerd/s/7a15e521b1d7ebf414c4dcba7156ccdd54cf62246026ef23962af075b3d89ae8" protocol=ttrpc version=3 Sep 5 04:01:09.163453 systemd[1]: Started cri-containerd-fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe.scope - libcontainer container fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe. Sep 5 04:01:09.221211 containerd[1583]: time="2025-09-05T04:01:09.220653990Z" level=info msg="StartContainer for \"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\" returns successfully" Sep 5 04:01:09.240476 systemd[1]: cri-containerd-fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe.scope: Deactivated successfully. Sep 5 04:01:09.241167 systemd[1]: cri-containerd-fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe.scope: Consumed 37ms CPU time, 9.1M memory peak, 2.7M read from disk. 
Sep 5 04:01:09.248224 containerd[1583]: time="2025-09-05T04:01:09.248095712Z" level=info msg="received exit event container_id:\"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\" id:\"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\" pid:4692 exited_at:{seconds:1757044869 nanos:247520314}" Sep 5 04:01:09.248540 containerd[1583]: time="2025-09-05T04:01:09.248505250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\" id:\"fbd3cb1a6c3cae1a3a2545a2926787817571bd4563ff389dd473589718de41fe\" pid:4692 exited_at:{seconds:1757044869 nanos:247520314}" Sep 5 04:01:09.629208 containerd[1583]: time="2025-09-05T04:01:09.628984032Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 04:01:09.662103 containerd[1583]: time="2025-09-05T04:01:09.660325876Z" level=info msg="Container 45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063: CDI devices from CRI Config.CDIDevices: []" Sep 5 04:01:09.680953 containerd[1583]: time="2025-09-05T04:01:09.680863370Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\"" Sep 5 04:01:09.683216 containerd[1583]: time="2025-09-05T04:01:09.683041693Z" level=info msg="StartContainer for \"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\"" Sep 5 04:01:09.685120 containerd[1583]: time="2025-09-05T04:01:09.685087887Z" level=info msg="connecting to shim 45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063" address="unix:///run/containerd/s/7a15e521b1d7ebf414c4dcba7156ccdd54cf62246026ef23962af075b3d89ae8" protocol=ttrpc version=3 Sep 5 
04:01:09.746826 systemd[1]: Started cri-containerd-45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063.scope - libcontainer container 45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063. Sep 5 04:01:09.818738 containerd[1583]: time="2025-09-05T04:01:09.818571475Z" level=info msg="StartContainer for \"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\" returns successfully" Sep 5 04:01:09.832975 systemd[1]: cri-containerd-45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063.scope: Deactivated successfully. Sep 5 04:01:09.833477 systemd[1]: cri-containerd-45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063.scope: Consumed 31ms CPU time, 7.2M memory peak, 1.9M read from disk. Sep 5 04:01:09.835894 containerd[1583]: time="2025-09-05T04:01:09.835812550Z" level=info msg="received exit event container_id:\"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\" id:\"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\" pid:4735 exited_at:{seconds:1757044869 nanos:835460623}" Sep 5 04:01:09.836893 containerd[1583]: time="2025-09-05T04:01:09.836677215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\" id:\"45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063\" pid:4735 exited_at:{seconds:1757044869 nanos:835460623}" Sep 5 04:01:09.883826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45597b28c8744729312db44ececabaf17679f9959242d723d46512fd25bac063-rootfs.mount: Deactivated successfully. Sep 5 04:01:09.977368 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 42304 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:01:09.979476 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:01:09.993690 systemd-logind[1560]: New session 29 of user core. 
Sep 5 04:01:09.998653 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 5 04:01:10.637158 containerd[1583]: time="2025-09-05T04:01:10.636933418Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 04:01:10.676694 sshd[4766]: Connection closed by 139.178.89.65 port 42304 Sep 5 04:01:10.679617 containerd[1583]: time="2025-09-05T04:01:10.678555369Z" level=info msg="Container 7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4: CDI devices from CRI Config.CDIDevices: []" Sep 5 04:01:10.680300 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Sep 5 04:01:10.694046 systemd[1]: sshd@26-10.230.58.50:22-139.178.89.65:42304.service: Deactivated successfully. Sep 5 04:01:10.700061 systemd[1]: session-29.scope: Deactivated successfully. Sep 5 04:01:10.702895 systemd-logind[1560]: Session 29 logged out. Waiting for processes to exit. Sep 5 04:01:10.707484 systemd-logind[1560]: Removed session 29. 
Sep 5 04:01:10.712260 containerd[1583]: time="2025-09-05T04:01:10.711750048Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\"" Sep 5 04:01:10.715084 containerd[1583]: time="2025-09-05T04:01:10.713172750Z" level=info msg="StartContainer for \"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\"" Sep 5 04:01:10.715327 containerd[1583]: time="2025-09-05T04:01:10.715293351Z" level=info msg="connecting to shim 7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4" address="unix:///run/containerd/s/7a15e521b1d7ebf414c4dcba7156ccdd54cf62246026ef23962af075b3d89ae8" protocol=ttrpc version=3 Sep 5 04:01:10.760518 systemd[1]: Started cri-containerd-7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4.scope - libcontainer container 7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4. Sep 5 04:01:10.843684 systemd[1]: Started sshd@27-10.230.58.50:22-139.178.89.65:36186.service - OpenSSH per-connection server daemon (139.178.89.65:36186). Sep 5 04:01:10.865612 containerd[1583]: time="2025-09-05T04:01:10.865564528Z" level=info msg="StartContainer for \"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\" returns successfully" Sep 5 04:01:10.873925 systemd[1]: cri-containerd-7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4.scope: Deactivated successfully. 
Sep 5 04:01:10.878557 containerd[1583]: time="2025-09-05T04:01:10.878505083Z" level=info msg="received exit event container_id:\"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\" id:\"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\" pid:4786 exited_at:{seconds:1757044870 nanos:875375869}" Sep 5 04:01:10.879055 containerd[1583]: time="2025-09-05T04:01:10.879016869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\" id:\"7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4\" pid:4786 exited_at:{seconds:1757044870 nanos:875375869}" Sep 5 04:01:10.920744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e778b6669a42de180812ae0c0d40ec1e712f39df2ead4238d28bb5283e473b4-rootfs.mount: Deactivated successfully. Sep 5 04:01:11.646488 containerd[1583]: time="2025-09-05T04:01:11.646423255Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 04:01:11.666523 containerd[1583]: time="2025-09-05T04:01:11.664728487Z" level=info msg="Container 02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d: CDI devices from CRI Config.CDIDevices: []" Sep 5 04:01:11.685458 containerd[1583]: time="2025-09-05T04:01:11.685341449Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\"" Sep 5 04:01:11.686848 containerd[1583]: time="2025-09-05T04:01:11.686805552Z" level=info msg="StartContainer for \"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\"" Sep 5 04:01:11.689720 containerd[1583]: time="2025-09-05T04:01:11.689664469Z" level=info msg="connecting to shim 
02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d" address="unix:///run/containerd/s/7a15e521b1d7ebf414c4dcba7156ccdd54cf62246026ef23962af075b3d89ae8" protocol=ttrpc version=3 Sep 5 04:01:11.726463 systemd[1]: Started cri-containerd-02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d.scope - libcontainer container 02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d. Sep 5 04:01:11.778071 systemd[1]: cri-containerd-02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d.scope: Deactivated successfully. Sep 5 04:01:11.784639 containerd[1583]: time="2025-09-05T04:01:11.784438629Z" level=info msg="received exit event container_id:\"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\" id:\"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\" pid:4828 exited_at:{seconds:1757044871 nanos:783707090}" Sep 5 04:01:11.784639 containerd[1583]: time="2025-09-05T04:01:11.784457793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\" id:\"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\" pid:4828 exited_at:{seconds:1757044871 nanos:783707090}" Sep 5 04:01:11.785669 containerd[1583]: time="2025-09-05T04:01:11.785329187Z" level=info msg="StartContainer for \"02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d\" returns successfully" Sep 5 04:01:11.822800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02d0a0a180eaca43eb64901f2610c02b06b4d13276f1dcec57dd8409a6547c0d-rootfs.mount: Deactivated successfully. Sep 5 04:01:11.854461 sshd[4799]: Accepted publickey for core from 139.178.89.65 port 36186 ssh2: RSA SHA256:pJgY0R2bR+NGmrE2ksZ5E1RXqWQKl4/Ei9ytMezjLL4 Sep 5 04:01:11.855306 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 04:01:11.862748 systemd-logind[1560]: New session 30 of user core. 
Sep 5 04:01:11.877502 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 5 04:01:12.660026 containerd[1583]: time="2025-09-05T04:01:12.659957247Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 04:01:12.705470 containerd[1583]: time="2025-09-05T04:01:12.699444531Z" level=info msg="Container 77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d: CDI devices from CRI Config.CDIDevices: []"
Sep 5 04:01:12.704725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724665836.mount: Deactivated successfully.
Sep 5 04:01:12.730897 containerd[1583]: time="2025-09-05T04:01:12.730839889Z" level=info msg="CreateContainer within sandbox \"a92064a71a3c3dc88dfebb8248a0f12fbf81bf3e7f3763d8a216ce62394bd3db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\""
Sep 5 04:01:12.732500 containerd[1583]: time="2025-09-05T04:01:12.732469316Z" level=info msg="StartContainer for \"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\""
Sep 5 04:01:12.734984 containerd[1583]: time="2025-09-05T04:01:12.734943337Z" level=info msg="connecting to shim 77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d" address="unix:///run/containerd/s/7a15e521b1d7ebf414c4dcba7156ccdd54cf62246026ef23962af075b3d89ae8" protocol=ttrpc version=3
Sep 5 04:01:12.786709 systemd[1]: Started cri-containerd-77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d.scope - libcontainer container 77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d.
Sep 5 04:01:12.879293 containerd[1583]: time="2025-09-05T04:01:12.878581477Z" level=info msg="StartContainer for \"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" returns successfully"
Sep 5 04:01:13.056507 containerd[1583]: time="2025-09-05T04:01:13.055932919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" id:\"6a2e83a7da9f7c0861d80ee3e664fe20c32a93befea32a9886f7b1728ea384ef\" pid:4905 exited_at:{seconds:1757044873 nanos:55422697}"
Sep 5 04:01:13.691104 kubelet[2903]: I0905 04:01:13.690992 2903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tpshf" podStartSLOduration=5.6909588509999995 podStartE2EDuration="5.690958851s" podCreationTimestamp="2025-09-05 04:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 04:01:13.689646821 +0000 UTC m=+171.016723950" watchObservedRunningTime="2025-09-05 04:01:13.690958851 +0000 UTC m=+171.018035957"
Sep 5 04:01:13.792259 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 5 04:01:15.198971 containerd[1583]: time="2025-09-05T04:01:15.198847278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" id:\"7ef059a52d94d23a4a2373145c8222ab2b4b68f3ee239c5da6a4361a3e2c8ed0\" pid:4985 exit_status:1 exited_at:{seconds:1757044875 nanos:197710182}"
Sep 5 04:01:17.423360 containerd[1583]: time="2025-09-05T04:01:17.423109427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" id:\"84e6f408ccde5be057930e056358a0c29f317efdeedafcd7d85d55360e8eff52\" pid:5360 exit_status:1 exited_at:{seconds:1757044877 nanos:422308266}"
Sep 5 04:01:17.619142 systemd-networkd[1513]: lxc_health: Link UP
Sep 5 04:01:17.623817 systemd-networkd[1513]: lxc_health: Gained carrier
Sep 5 04:01:19.472390 systemd-networkd[1513]: lxc_health: Gained IPv6LL
Sep 5 04:01:19.616728 containerd[1583]: time="2025-09-05T04:01:19.616629338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" id:\"382f85b3392b4b93b1e294e72e586cd952151ee4ab7395390838b07e786b10cf\" pid:5472 exited_at:{seconds:1757044879 nanos:615630963}"
Sep 5 04:01:21.941245 containerd[1583]: time="2025-09-05T04:01:21.941153993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" id:\"492fc3867ca379d064d3da26024704af843fba293e47c6c6b8757f074a9ce9db\" pid:5499 exited_at:{seconds:1757044881 nanos:940272800}"
Sep 5 04:01:21.947460 kubelet[2903]: E0905 04:01:21.947370 2903 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41446->127.0.0.1:46471: write tcp 127.0.0.1:41446->127.0.0.1:46471: write: broken pipe
Sep 5 04:01:22.917109 containerd[1583]: time="2025-09-05T04:01:22.917036899Z" level=info msg="StopPodSandbox for \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\""
Sep 5 04:01:22.917485 containerd[1583]: time="2025-09-05T04:01:22.917397722Z" level=info msg="TearDown network for sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" successfully"
Sep 5 04:01:22.917485 containerd[1583]: time="2025-09-05T04:01:22.917444745Z" level=info msg="StopPodSandbox for \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" returns successfully"
Sep 5 04:01:22.919203 containerd[1583]: time="2025-09-05T04:01:22.918376629Z" level=info msg="RemovePodSandbox for \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\""
Sep 5 04:01:22.919203 containerd[1583]: time="2025-09-05T04:01:22.918450255Z" level=info msg="Forcibly stopping sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\""
Sep 5 04:01:22.919203 containerd[1583]: time="2025-09-05T04:01:22.918542578Z" level=info msg="TearDown network for sandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" successfully"
Sep 5 04:01:22.920442 containerd[1583]: time="2025-09-05T04:01:22.920379095Z" level=info msg="Ensure that sandbox c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a in task-service has been cleanup successfully"
Sep 5 04:01:22.927724 containerd[1583]: time="2025-09-05T04:01:22.927653368Z" level=info msg="RemovePodSandbox \"c478b13528c523e0b4acbc2c56133e03f9deb7334ca3e15c5c1ba3ac67ca574a\" returns successfully"
Sep 5 04:01:22.928555 containerd[1583]: time="2025-09-05T04:01:22.928515209Z" level=info msg="StopPodSandbox for \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\""
Sep 5 04:01:22.928712 containerd[1583]: time="2025-09-05T04:01:22.928677557Z" level=info msg="TearDown network for sandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" successfully"
Sep 5 04:01:22.928712 containerd[1583]: time="2025-09-05T04:01:22.928708995Z" level=info msg="StopPodSandbox for \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" returns successfully"
Sep 5 04:01:22.929299 containerd[1583]: time="2025-09-05T04:01:22.929263352Z" level=info msg="RemovePodSandbox for \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\""
Sep 5 04:01:22.929394 containerd[1583]: time="2025-09-05T04:01:22.929301957Z" level=info msg="Forcibly stopping sandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\""
Sep 5 04:01:22.929451 containerd[1583]: time="2025-09-05T04:01:22.929393846Z" level=info msg="TearDown network for sandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" successfully"
Sep 5 04:01:22.931203 containerd[1583]: time="2025-09-05T04:01:22.930940261Z" level=info msg="Ensure that sandbox 8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd in task-service has been cleanup successfully"
Sep 5 04:01:22.934524 containerd[1583]: time="2025-09-05T04:01:22.934479602Z" level=info msg="RemovePodSandbox \"8c1f0de3aac5f0c13bb59ec151a199e5ec954c1ca93ff502b6560c6f47bfaecd\" returns successfully"
Sep 5 04:01:24.163654 containerd[1583]: time="2025-09-05T04:01:24.163544054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77dfd276ea65f527b231b55cd4d88aedb7e3ae415cfa8bc14a2831b9b5f3a53d\" id:\"55a31086220c240f5d4996a1b255f7e8b6355db249b2a720019d47b7e6941868\" pid:5527 exited_at:{seconds:1757044884 nanos:163040220}"
Sep 5 04:01:24.331416 sshd[4853]: Connection closed by 139.178.89.65 port 36186
Sep 5 04:01:24.334317 sshd-session[4799]: pam_unix(sshd:session): session closed for user core
Sep 5 04:01:24.343124 systemd[1]: sshd@27-10.230.58.50:22-139.178.89.65:36186.service: Deactivated successfully.
Sep 5 04:01:24.347231 systemd[1]: session-30.scope: Deactivated successfully.
Sep 5 04:01:24.352706 systemd-logind[1560]: Session 30 logged out. Waiting for processes to exit.
Sep 5 04:01:24.355162 systemd-logind[1560]: Removed session 30.