Sep 6 01:16:51.909798 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 01:16:51.909838 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:16:51.909857 kernel: BIOS-provided physical RAM map: Sep 6 01:16:51.909867 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 6 01:16:51.909876 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 6 01:16:51.909885 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 6 01:16:51.909896 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Sep 6 01:16:51.909906 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Sep 6 01:16:51.909915 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 6 01:16:51.909925 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 6 01:16:51.909938 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 6 01:16:51.909948 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 6 01:16:51.909957 kernel: NX (Execute Disable) protection: active Sep 6 01:16:51.909967 kernel: SMBIOS 2.8 present. Sep 6 01:16:51.909979 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Sep 6 01:16:51.909989 kernel: Hypervisor detected: KVM Sep 6 01:16:51.910003 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 01:16:51.910014 kernel: kvm-clock: cpu 0, msr 2519f001, primary cpu clock Sep 6 01:16:51.910024 kernel: kvm-clock: using sched offset of 4801572657 cycles Sep 6 01:16:51.910035 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 01:16:51.910046 kernel: tsc: Detected 2799.998 MHz processor Sep 6 01:16:51.910057 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 01:16:51.910067 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 01:16:51.910078 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Sep 6 01:16:51.910088 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 01:16:51.910102 kernel: Using GB pages for direct mapping Sep 6 01:16:51.910112 kernel: ACPI: Early table checksum verification disabled Sep 6 01:16:51.910123 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Sep 6 01:16:51.910133 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910144 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910154 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910165 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Sep 6 01:16:51.910175 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910186 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910200 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910210 kernel: 
ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 01:16:51.910220 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Sep 6 01:16:51.910231 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Sep 6 01:16:51.910241 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Sep 6 01:16:51.910252 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Sep 6 01:16:51.910267 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Sep 6 01:16:51.910281 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Sep 6 01:16:51.910302 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Sep 6 01:16:51.910315 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 6 01:16:51.910326 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 6 01:16:51.910337 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Sep 6 01:16:51.910348 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Sep 6 01:16:51.910359 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Sep 6 01:16:51.910374 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Sep 6 01:16:51.910385 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Sep 6 01:16:51.910396 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Sep 6 01:16:51.910407 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Sep 6 01:16:51.910418 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Sep 6 01:16:51.910429 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Sep 6 01:16:51.910440 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Sep 6 01:16:51.910451 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Sep 6 01:16:51.910462 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Sep 6 01:16:51.910483 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Sep 6 01:16:51.911528 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Sep 6 01:16:51.911541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 6 01:16:51.911552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 6 01:16:51.911563 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Sep 6 01:16:51.911575 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Sep 6 01:16:51.911586 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Sep 6 01:16:51.911598 kernel: Zone ranges: Sep 6 01:16:51.911609 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 01:16:51.911633 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Sep 6 01:16:51.911648 kernel: Normal empty Sep 6 01:16:51.911659 kernel: Movable zone start for each node Sep 6 01:16:51.911670 kernel: Early memory node ranges Sep 6 01:16:51.911681 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 6 01:16:51.911705 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Sep 6 01:16:51.911716 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Sep 6 01:16:51.911728 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 01:16:51.911739 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 6 01:16:51.911750 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Sep 6 01:16:51.911766 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 6 01:16:51.911777 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 01:16:51.911789 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 6 01:16:51.911800 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 6 01:16:51.911811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 01:16:51.911822 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 01:16:51.911834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 01:16:51.911845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 01:16:51.911856 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 01:16:51.911870 kernel: TSC deadline timer available Sep 6 01:16:51.911882 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Sep 6 01:16:51.911893 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 6 01:16:51.911904 kernel: Booting paravirtualized kernel on KVM Sep 6 01:16:51.911916 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 01:16:51.911927 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Sep 6 01:16:51.911939 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Sep 6 01:16:51.911950 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Sep 6 01:16:51.911961 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 6 01:16:51.911975 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Sep 6 01:16:51.911986 kernel: kvm-guest: PV spinlocks enabled Sep 6 01:16:51.911998 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 01:16:51.912009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Sep 6 01:16:51.912020 kernel: Policy zone: DMA32 Sep 6 01:16:51.912032 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:16:51.912045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 01:16:51.912056 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 01:16:51.912070 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 6 01:16:51.912082 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 01:16:51.912093 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 192524K reserved, 0K cma-reserved) Sep 6 01:16:51.912105 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 6 01:16:51.912116 kernel: Kernel/User page tables isolation: enabled Sep 6 01:16:51.912127 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 01:16:51.912138 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 01:16:51.912150 kernel: rcu: Hierarchical RCU implementation. Sep 6 01:16:51.912162 kernel: rcu: RCU event tracing is enabled. Sep 6 01:16:51.912176 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 6 01:16:51.912188 kernel: Rude variant of Tasks RCU enabled. Sep 6 01:16:51.912199 kernel: Tracing variant of Tasks RCU enabled. Sep 6 01:16:51.912211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 01:16:51.912222 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 6 01:16:51.912233 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Sep 6 01:16:51.912244 kernel: random: crng init done Sep 6 01:16:51.912266 kernel: Console: colour VGA+ 80x25 Sep 6 01:16:51.912278 kernel: printk: console [tty0] enabled Sep 6 01:16:51.912300 kernel: printk: console [ttyS0] enabled Sep 6 01:16:51.912313 kernel: ACPI: Core revision 20210730 Sep 6 01:16:51.912325 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 01:16:51.912341 kernel: x2apic enabled Sep 6 01:16:51.912353 kernel: Switched APIC routing to physical x2apic. Sep 6 01:16:51.912365 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 6 01:16:51.912377 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Sep 6 01:16:51.912389 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 6 01:16:51.912404 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 6 01:16:51.912416 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 6 01:16:51.912427 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 01:16:51.912439 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 01:16:51.912450 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 01:16:51.912462 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 6 01:16:51.912487 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 01:16:51.912500 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 01:16:51.912512 kernel: MDS: Mitigation: Clear CPU buffers Sep 6 01:16:51.912523 kernel: MMIO Stale Data: Unknown: No mitigations Sep 6 01:16:51.912535 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 6 01:16:51.912551 kernel: active return thunk: its_return_thunk Sep 6 01:16:51.912563 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 6 01:16:51.912575 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 01:16:51.912587 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 01:16:51.912599 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 01:16:51.912610 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 01:16:51.912622 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 6 01:16:51.912634 kernel: Freeing SMP alternatives memory: 32K Sep 6 01:16:51.912646 kernel: pid_max: default: 32768 minimum: 301 Sep 6 01:16:51.912657 kernel: LSM: Security Framework initializing Sep 6 01:16:51.912669 kernel: SELinux: Initializing. Sep 6 01:16:51.912684 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 01:16:51.912696 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 01:16:51.912708 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Sep 6 01:16:51.912720 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Sep 6 01:16:51.912732 kernel: signal: max sigframe size: 1776 Sep 6 01:16:51.912743 kernel: rcu: Hierarchical SRCU implementation. 
Sep 6 01:16:51.912755 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 6 01:16:51.912767 kernel: smp: Bringing up secondary CPUs ... Sep 6 01:16:51.912779 kernel: x86: Booting SMP configuration: Sep 6 01:16:51.912791 kernel: .... node #0, CPUs: #1 Sep 6 01:16:51.912806 kernel: kvm-clock: cpu 1, msr 2519f041, secondary cpu clock Sep 6 01:16:51.912818 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 6 01:16:51.912829 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Sep 6 01:16:51.912841 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 01:16:51.912853 kernel: smpboot: Max logical packages: 16 Sep 6 01:16:51.912864 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Sep 6 01:16:51.912876 kernel: devtmpfs: initialized Sep 6 01:16:51.912888 kernel: x86/mm: Memory block size: 128MB Sep 6 01:16:51.912900 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 01:16:51.912915 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 6 01:16:51.912927 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 01:16:51.912939 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 01:16:51.912951 kernel: audit: initializing netlink subsys (disabled) Sep 6 01:16:51.912962 kernel: audit: type=2000 audit(1757121410.841:1): state=initialized audit_enabled=0 res=1 Sep 6 01:16:51.912974 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 01:16:51.912986 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 01:16:51.912998 kernel: cpuidle: using governor menu Sep 6 01:16:51.913009 kernel: ACPI: bus type PCI registered Sep 6 01:16:51.913025 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 01:16:51.913036 kernel: dca service started, version 1.12.1 Sep 6 01:16:51.913048 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 6 01:16:51.913060 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 6 01:16:51.913072 kernel: PCI: Using configuration type 1 for base access Sep 6 01:16:51.913084 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 6 01:16:51.913096 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 01:16:51.913108 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 01:16:51.913120 kernel: ACPI: Added _OSI(Module Device) Sep 6 01:16:51.913135 kernel: ACPI: Added _OSI(Processor Device) Sep 6 01:16:51.913147 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 01:16:51.913158 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 01:16:51.913170 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 01:16:51.913182 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 01:16:51.913194 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 01:16:51.913206 kernel: ACPI: Interpreter enabled Sep 6 01:16:51.913217 kernel: ACPI: PM: (supports S0 S5) Sep 6 01:16:51.913229 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 01:16:51.913244 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 01:16:51.913256 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 6 01:16:51.913268 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 01:16:51.916361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 01:16:51.916542 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 6 01:16:51.916701 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 6 01:16:51.916730 kernel: PCI host bridge to bus 0000:00 Sep 6 01:16:51.916918 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 01:16:51.917052 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 01:16:51.917183 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 01:16:51.917326 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Sep 6 01:16:51.917458 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 6 01:16:51.917605 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Sep 6 01:16:51.917734 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 01:16:51.917910 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 6 01:16:51.918077 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Sep 6 01:16:51.918225 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Sep 6 01:16:51.918391 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Sep 6 01:16:51.923596 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Sep 6 01:16:51.923755 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 01:16:51.923930 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.924084 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Sep 6 01:16:51.924249 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.924412 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Sep 6 01:16:51.924595 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.924744 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Sep 6 01:16:51.924920 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.925098 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Sep 6 01:16:51.925271 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.925444 kernel: pci 0000:00:02.4: reg 0x10: [mem 
0xfea55000-0xfea55fff] Sep 6 01:16:51.925648 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.925796 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Sep 6 01:16:51.925969 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.926122 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Sep 6 01:16:51.926296 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 6 01:16:51.926447 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Sep 6 01:16:51.926624 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Sep 6 01:16:51.926770 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 6 01:16:51.926912 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Sep 6 01:16:51.927065 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Sep 6 01:16:51.927208 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Sep 6 01:16:51.927376 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 6 01:16:51.927539 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 6 01:16:51.927684 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Sep 6 01:16:51.927830 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Sep 6 01:16:51.927992 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 6 01:16:51.928152 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 6 01:16:51.928332 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 6 01:16:51.929605 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Sep 6 01:16:51.929803 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Sep 6 01:16:51.929982 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 6 01:16:51.930138 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 6 01:16:51.930331 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Sep 6 01:16:51.930522 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Sep 6 01:16:51.930691 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 6 01:16:51.930844 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 6 01:16:51.930985 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 6 01:16:51.931141 kernel: pci_bus 0000:02: extended config space not accessible Sep 6 01:16:51.931328 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Sep 6 01:16:51.931502 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Sep 6 01:16:51.931653 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 6 01:16:51.931832 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 6 01:16:51.931990 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 6 01:16:51.932141 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Sep 6 01:16:51.932285 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 6 01:16:51.932446 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 6 01:16:51.932604 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 6 01:16:51.932783 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 6 01:16:51.932953 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Sep 6 01:16:51.933096 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 6 01:16:51.933250 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 6 01:16:51.933416 
kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 6 01:16:51.933582 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 6 01:16:51.933752 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 6 01:16:51.933917 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 6 01:16:51.934089 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 6 01:16:51.934260 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 6 01:16:51.934435 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 6 01:16:51.934618 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 6 01:16:51.934788 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 6 01:16:51.934973 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 6 01:16:51.935124 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 6 01:16:51.935264 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 6 01:16:51.935421 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 6 01:16:51.935603 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 6 01:16:51.935758 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 6 01:16:51.935900 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 6 01:16:51.935918 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 01:16:51.935931 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 01:16:51.935949 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 01:16:51.935961 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 01:16:51.935973 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 6 01:16:51.935986 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 6 01:16:51.935998 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 6 01:16:51.936010 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 6 01:16:51.936022 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 6 01:16:51.936034 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 6 01:16:51.936046 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 6 01:16:51.936061 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 6 01:16:51.936074 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 6 01:16:51.936086 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 6 01:16:51.936097 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 6 01:16:51.936110 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 6 01:16:51.936122 kernel: iommu: Default domain type: Translated Sep 6 01:16:51.936134 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 01:16:51.936273 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 6 01:16:51.936428 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 01:16:51.943648 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 6 01:16:51.943673 kernel: vgaarb: loaded Sep 6 01:16:51.943687 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 01:16:51.943707 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 01:16:51.943719 kernel: PTP clock support registered Sep 6 01:16:51.943732 kernel: PCI: Using ACPI for IRQ routing Sep 6 01:16:51.943743 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 01:16:51.943755 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 6 01:16:51.943774 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Sep 6 01:16:51.943786 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 01:16:51.943798 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 01:16:51.943810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 01:16:51.943822 kernel: pnp: PnP ACPI init Sep 6 01:16:51.943998 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 6 01:16:51.944019 kernel: pnp: PnP ACPI: found 5 devices Sep 6 01:16:51.944031 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 01:16:51.944043 kernel: NET: Registered PF_INET protocol family Sep 6 01:16:51.944061 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 01:16:51.944074 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 6 01:16:51.944086 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 01:16:51.944098 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 6 01:16:51.944110 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 6 01:16:51.944122 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 6 01:16:51.944134 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 01:16:51.944146 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 01:16:51.944162 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 01:16:51.944174 kernel: NET: Registered PF_XDP protocol family Sep 6 01:16:51.944334 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Sep 6 01:16:51.944503 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 6 01:16:51.944653 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 6 01:16:51.944797 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 6 01:16:51.944941 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 6 01:16:51.945091 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 6 01:16:51.945233 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 6 01:16:51.945389 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 6 01:16:51.945548 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 6 01:16:51.945691 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 6 01:16:51.945833 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 6 01:16:51.945975 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 6 01:16:51.946124 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 6 01:16:51.946266 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 6 01:16:51.946421 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 6 01:16:51.946578 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 6 01:16:51.946728 kernel: pci 0000:01:00.0: PCI 
bridge to [bus 02] Sep 6 01:16:51.946874 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 6 01:16:51.947016 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 6 01:16:51.947159 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 6 01:16:51.947324 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 6 01:16:51.947488 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 6 01:16:51.947657 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 6 01:16:51.947803 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 6 01:16:51.947946 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 6 01:16:51.948087 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 6 01:16:51.948231 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 6 01:16:51.948388 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 6 01:16:51.948547 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 6 01:16:51.948700 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 6 01:16:51.948855 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 6 01:16:51.949003 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 6 01:16:51.949150 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 6 01:16:51.949302 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 6 01:16:51.949450 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 6 01:16:51.949615 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 6 01:16:51.949762 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 6 01:16:51.949907 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 6 01:16:51.950052 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 6 01:16:51.950197 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 6 01:16:51.950355 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 6 01:16:51.961537 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 6 01:16:51.961723 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 6 01:16:51.961880 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 6 01:16:51.962026 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 6 01:16:51.962170 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 6 01:16:51.962328 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 6 01:16:51.962486 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 6 01:16:51.962641 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 6 01:16:51.962785 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 6 01:16:51.962951 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 01:16:51.963081 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 01:16:51.963210 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 01:16:51.963352 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Sep 6 01:16:51.963497 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 6 01:16:51.963630 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Sep 6 01:16:51.963781 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 6 01:16:51.963932 kernel: pci_bus 0000:01: resource 1 [mem 
0xfd800000-0xfdbfffff] Sep 6 01:16:51.964076 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 6 01:16:51.964229 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 6 01:16:51.964394 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Sep 6 01:16:51.964552 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 6 01:16:51.964694 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 6 01:16:51.964852 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Sep 6 01:16:51.964993 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 6 01:16:51.965159 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 6 01:16:51.965358 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 6 01:16:51.965510 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 6 01:16:51.965653 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 6 01:16:51.965811 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Sep 6 01:16:51.965959 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 6 01:16:51.966098 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 6 01:16:51.966262 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Sep 6 01:16:51.966416 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 6 01:16:51.966578 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 6 01:16:51.966725 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Sep 6 01:16:51.966874 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Sep 6 01:16:51.967014 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 6 01:16:51.967163 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 6 01:16:51.967315 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 6 01:16:51.967456 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 6 01:16:51.967494 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 6 01:16:51.967510 kernel: PCI: CLS 0 bytes, default 64 Sep 6 01:16:51.967522 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 6 01:16:51.967541 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 6 01:16:51.967554 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 6 01:16:51.967567 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 6 01:16:51.967580 kernel: Initialise system trusted keyrings Sep 6 01:16:51.967593 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 6 01:16:51.967605 kernel: Key type asymmetric registered Sep 6 01:16:51.967617 kernel: Asymmetric key parser 'x509' registered Sep 6 01:16:51.967630 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 01:16:51.967643 kernel: io scheduler mq-deadline registered Sep 6 01:16:51.967659 kernel: io scheduler kyber registered Sep 6 01:16:51.967672 kernel: io scheduler bfq registered Sep 6 01:16:51.967818 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 6 01:16:51.967962 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 6 01:16:51.968106 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.968249 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 
25 Sep 6 01:16:51.968405 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 6 01:16:51.974406 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.974586 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 6 01:16:51.974734 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 6 01:16:51.974878 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.975023 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 6 01:16:51.975178 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 6 01:16:51.975353 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.975524 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 6 01:16:51.975702 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 6 01:16:51.975856 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.976011 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 6 01:16:51.976164 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 6 01:16:51.976327 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.976499 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 6 01:16:51.976666 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 6 01:16:51.976819 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.976965 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 6 01:16:51.977108 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 6 01:16:51.977258 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:16:51.977278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 01:16:51.977301 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 6 01:16:51.977315 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 6 01:16:51.977328 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 01:16:51.977341 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 01:16:51.977354 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 01:16:51.977373 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 01:16:51.977389 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 01:16:51.978620 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 6 01:16:51.978644 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 01:16:51.978784 kernel: rtc_cmos 00:03: registered as rtc0 Sep 6 01:16:51.978921 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T01:16:51 UTC (1757121411) Sep 6 01:16:51.979056 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 6 01:16:51.979075 kernel: intel_pstate: CPU model not supported Sep 6 01:16:51.979088 kernel: NET: Registered PF_INET6 protocol family Sep 6 01:16:51.979108 kernel: Segment Routing with IPv6 Sep 6 01:16:51.979121 kernel: In-situ OAM (IOAM) with IPv6 
Sep 6 01:16:51.979134 kernel: NET: Registered PF_PACKET protocol family Sep 6 01:16:51.979147 kernel: Key type dns_resolver registered Sep 6 01:16:51.979159 kernel: IPI shorthand broadcast: enabled Sep 6 01:16:51.979172 kernel: sched_clock: Marking stable (966712992, 221736706)->(1462956321, -274506623) Sep 6 01:16:51.979185 kernel: registered taskstats version 1 Sep 6 01:16:51.979197 kernel: Loading compiled-in X.509 certificates Sep 6 01:16:51.979210 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 01:16:51.979227 kernel: Key type .fscrypt registered Sep 6 01:16:51.979240 kernel: Key type fscrypt-provisioning registered Sep 6 01:16:51.979252 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 01:16:51.979265 kernel: ima: Allocated hash algorithm: sha1 Sep 6 01:16:51.979278 kernel: ima: No architecture policies found Sep 6 01:16:51.979304 kernel: clk: Disabling unused clocks Sep 6 01:16:51.979318 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 01:16:51.979331 kernel: Write protecting the kernel read-only data: 28672k Sep 6 01:16:51.979348 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 01:16:51.979361 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 01:16:51.979374 kernel: Run /init as init process Sep 6 01:16:51.979386 kernel: with arguments: Sep 6 01:16:51.979399 kernel: /init Sep 6 01:16:51.979411 kernel: with environment: Sep 6 01:16:51.979423 kernel: HOME=/ Sep 6 01:16:51.979435 kernel: TERM=linux Sep 6 01:16:51.979447 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 01:16:51.979469 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:16:51.979504 systemd[1]: Detected virtualization kvm. Sep 6 01:16:51.979518 systemd[1]: Detected architecture x86-64. Sep 6 01:16:51.979530 systemd[1]: Running in initrd. Sep 6 01:16:51.979544 systemd[1]: No hostname configured, using default hostname. Sep 6 01:16:51.979557 systemd[1]: Hostname set to . Sep 6 01:16:51.979570 systemd[1]: Initializing machine ID from VM UUID. Sep 6 01:16:51.979583 systemd[1]: Queued start job for default target initrd.target. Sep 6 01:16:51.979600 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:16:51.979613 systemd[1]: Reached target cryptsetup.target. Sep 6 01:16:51.979626 systemd[1]: Reached target paths.target. Sep 6 01:16:51.979640 systemd[1]: Reached target slices.target. Sep 6 01:16:51.979653 systemd[1]: Reached target swap.target. Sep 6 01:16:51.979666 systemd[1]: Reached target timers.target. Sep 6 01:16:51.979679 systemd[1]: Listening on iscsid.socket. Sep 6 01:16:51.979696 systemd[1]: Listening on iscsiuio.socket. Sep 6 01:16:51.979710 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:16:51.979723 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:16:51.979736 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:16:51.979749 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:16:51.979762 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:16:51.979775 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:16:51.979788 systemd[1]: Reached target sockets.target. 
Sep 6 01:16:51.979802 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:16:51.979819 systemd[1]: Finished network-cleanup.service. Sep 6 01:16:51.979832 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 01:16:51.979846 systemd[1]: Starting systemd-journald.service... Sep 6 01:16:51.979863 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:16:51.979876 systemd[1]: Starting systemd-resolved.service... Sep 6 01:16:51.979890 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 01:16:51.979903 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:16:51.979916 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 01:16:51.979943 systemd-journald[202]: Journal started Sep 6 01:16:51.980015 systemd-journald[202]: Runtime Journal (/run/log/journal/761d5963981e4d698349c70384ecdeb4) is 4.7M, max 38.1M, 33.3M free. Sep 6 01:16:51.906522 systemd-modules-load[203]: Inserted module 'overlay' Sep 6 01:16:52.009010 kernel: Bridge firewalling registered Sep 6 01:16:52.009039 systemd[1]: Started systemd-resolved.service. Sep 6 01:16:52.009060 kernel: audit: type=1130 audit(1757121412.001:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:51.961564 systemd-resolved[204]: Positive Trust Anchors: Sep 6 01:16:52.017377 systemd[1]: Started systemd-journald.service. Sep 6 01:16:52.017409 kernel: audit: type=1130 audit(1757121412.009:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.017429 kernel: SCSI subsystem initialized Sep 6 01:16:52.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:51.961593 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:16:52.023276 kernel: audit: type=1130 audit(1757121412.017:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:16:51.961637 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:16:52.030825 kernel: audit: type=1130 audit(1757121412.023:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:51.965228 systemd-resolved[204]: Defaulting to hostname 'linux'. Sep 6 01:16:52.042781 kernel: audit: type=1130 audit(1757121412.030:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.042809 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 01:16:52.042827 kernel: device-mapper: uevent: version 1.0.3 Sep 6 01:16:52.042844 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 01:16:52.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:51.984230 systemd-modules-load[203]: Inserted module 'br_netfilter' Sep 6 01:16:52.018338 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 01:16:52.024131 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 01:16:52.031676 systemd[1]: Reached target nss-lookup.target. Sep 6 01:16:52.044297 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 01:16:52.046547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:16:52.047906 systemd-modules-load[203]: Inserted module 'dm_multipath' Sep 6 01:16:52.049461 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:16:52.064365 kernel: audit: type=1130 audit(1757121412.053:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.064408 kernel: audit: type=1130 audit(1757121412.063:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.059844 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:16:52.063084 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Sep 6 01:16:52.078457 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:16:52.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.084831 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 01:16:52.086175 kernel: audit: type=1130 audit(1757121412.078:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.104491 kernel: audit: type=1130 audit(1757121412.085:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.086611 systemd[1]: Starting dracut-cmdline.service... Sep 6 01:16:52.105696 dracut-cmdline[224]: dracut-dracut-053 Sep 6 01:16:52.105696 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 6 01:16:52.105696 dracut-cmdline[224]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:16:52.182519 kernel: Loading iSCSI transport class v2.0-870. Sep 6 01:16:52.203504 kernel: iscsi: registered transport (tcp) Sep 6 01:16:52.230604 kernel: iscsi: registered transport (qla4xxx) Sep 6 01:16:52.230686 kernel: QLogic iSCSI HBA Driver Sep 6 01:16:52.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.279897 systemd[1]: Finished dracut-cmdline.service. Sep 6 01:16:52.281805 systemd[1]: Starting dracut-pre-udev.service... Sep 6 01:16:52.340521 kernel: raid6: sse2x4 gen() 13395 MB/s Sep 6 01:16:52.358512 kernel: raid6: sse2x4 xor() 8121 MB/s Sep 6 01:16:52.376511 kernel: raid6: sse2x2 gen() 9282 MB/s Sep 6 01:16:52.394520 kernel: raid6: sse2x2 xor() 8343 MB/s Sep 6 01:16:52.412517 kernel: raid6: sse2x1 gen() 9752 MB/s Sep 6 01:16:52.431184 kernel: raid6: sse2x1 xor() 7613 MB/s Sep 6 01:16:52.431254 kernel: raid6: using algorithm sse2x4 gen() 13395 MB/s Sep 6 01:16:52.431303 kernel: raid6: .... xor() 8121 MB/s, rmw enabled Sep 6 01:16:52.432508 kernel: raid6: using ssse3x2 recovery algorithm Sep 6 01:16:52.449517 kernel: xor: automatically using best checksumming function avx Sep 6 01:16:52.559527 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 01:16:52.571915 systemd[1]: Finished dracut-pre-udev.service. Sep 6 01:16:52.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.572000 audit: BPF prog-id=7 op=LOAD Sep 6 01:16:52.572000 audit: BPF prog-id=8 op=LOAD Sep 6 01:16:52.573770 systemd[1]: Starting systemd-udevd.service... 
Sep 6 01:16:52.590684 systemd-udevd[401]: Using default interface naming scheme 'v252'. Sep 6 01:16:52.599396 systemd[1]: Started systemd-udevd.service. Sep 6 01:16:52.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.601248 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 01:16:52.616742 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Sep 6 01:16:52.655268 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 01:16:52.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.656984 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:16:52.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:52.751818 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:16:52.848497 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 01:16:52.853495 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 6 01:16:52.879976 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 01:16:52.880008 kernel: GPT:17805311 != 125829119 Sep 6 01:16:52.880025 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 01:16:52.880041 kernel: GPT:17805311 != 125829119 Sep 6 01:16:52.880056 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 01:16:52.880071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:16:52.902822 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 01:16:52.954847 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (456) Sep 6 01:16:52.954897 kernel: AVX version of gcm_enc/dec engaged. Sep 6 01:16:52.954914 kernel: AES CTR mode by8 optimization enabled Sep 6 01:16:52.962494 kernel: ACPI: bus type USB registered Sep 6 01:16:52.964494 kernel: usbcore: registered new interface driver usbfs Sep 6 01:16:52.967559 kernel: usbcore: registered new interface driver hub Sep 6 01:16:52.967590 kernel: usbcore: registered new device driver usb Sep 6 01:16:52.968173 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 01:16:52.971257 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 01:16:52.974496 kernel: libata version 3.00 loaded. Sep 6 01:16:52.979187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 01:16:52.988499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Sep 6 01:16:52.989965 kernel: ahci 0000:00:1f.2: version 3.0 Sep 6 01:16:53.039732 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 6 01:16:53.039771 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 6 01:16:53.039976 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 6 01:16:53.040136 kernel: scsi host0: ahci Sep 6 01:16:53.040336 kernel: scsi host1: ahci Sep 6 01:16:53.040565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:16:53.040586 kernel: scsi host2: ahci Sep 6 01:16:53.040775 kernel: scsi host3: ahci Sep 6 01:16:53.040975 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 6 01:16:53.060657 kernel: scsi host4: ahci Sep 6 01:16:53.060876 kernel: scsi host5: ahci Sep 6 01:16:53.061060 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 6 01:16:53.061235 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Sep 6 01:16:53.061256 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 6 01:16:53.061432 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Sep 6 01:16:53.061451 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 6 01:16:53.061635 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Sep 6 01:16:53.061654 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 6 01:16:53.061815 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Sep 6 01:16:53.061833 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 6 01:16:53.062003 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Sep 6 01:16:53.062026 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Sep 6 01:16:53.062042 kernel: hub 1-0:1.0: USB hub found Sep 6 01:16:53.062243 kernel: hub 1-0:1.0: 4 ports detected Sep 6 01:16:53.062431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:16:53.062450 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 6 01:16:53.062656 kernel: hub 2-0:1.0: USB hub found Sep 6 01:16:53.062858 kernel: hub 2-0:1.0: 4 ports detected Sep 6 01:16:52.993702 systemd[1]: Starting disk-uuid.service... Sep 6 01:16:53.064160 disk-uuid[480]: Primary Header is updated. Sep 6 01:16:53.064160 disk-uuid[480]: Secondary Entries is updated. Sep 6 01:16:53.064160 disk-uuid[480]: Secondary Header is updated. 
Sep 6 01:16:53.279549 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 6 01:16:53.348512 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 6 01:16:53.355513 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 01:16:53.358862 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 6 01:16:53.358892 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 6 01:16:53.360592 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 01:16:53.362282 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 01:16:53.422502 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 01:16:53.429303 kernel: usbcore: registered new interface driver usbhid Sep 6 01:16:53.429347 kernel: usbhid: USB HID core driver Sep 6 01:16:53.438244 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 6 01:16:53.438308 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 6 01:16:54.048078 disk-uuid[488]: The operation has completed successfully. Sep 6 01:16:54.049081 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:16:54.101145 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 01:16:54.101301 systemd[1]: Finished disk-uuid.service. Sep 6 01:16:54.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.107359 systemd[1]: Starting verity-setup.service... Sep 6 01:16:54.125499 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Sep 6 01:16:54.180914 systemd[1]: Found device dev-mapper-usr.device. Sep 6 01:16:54.182754 systemd[1]: Mounting sysusr-usr.mount... Sep 6 01:16:54.184672 systemd[1]: Finished verity-setup.service. Sep 6 01:16:54.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.274511 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 01:16:54.275356 systemd[1]: Mounted sysusr-usr.mount. Sep 6 01:16:54.276141 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 01:16:54.277613 systemd[1]: Starting ignition-setup.service... Sep 6 01:16:54.279486 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 01:16:54.298520 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:16:54.298578 kernel: BTRFS info (device vda6): using free space tree Sep 6 01:16:54.298605 kernel: BTRFS info (device vda6): has skinny extents Sep 6 01:16:54.314468 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 01:16:54.321947 systemd[1]: Finished ignition-setup.service. Sep 6 01:16:54.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.323751 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 01:16:54.431514 systemd[1]: Finished parse-ip-for-networkd.service. 
Sep 6 01:16:54.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.433000 audit: BPF prog-id=9 op=LOAD Sep 6 01:16:54.434305 systemd[1]: Starting systemd-networkd.service... Sep 6 01:16:54.470185 systemd-networkd[712]: lo: Link UP Sep 6 01:16:54.470198 systemd-networkd[712]: lo: Gained carrier Sep 6 01:16:54.471665 systemd-networkd[712]: Enumeration completed Sep 6 01:16:54.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.471827 systemd[1]: Started systemd-networkd.service. Sep 6 01:16:54.472904 systemd[1]: Reached target network.target. Sep 6 01:16:54.475222 systemd[1]: Starting iscsiuio.service... Sep 6 01:16:54.482097 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:16:54.492049 systemd-networkd[712]: eth0: Link UP Sep 6 01:16:54.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.492056 systemd-networkd[712]: eth0: Gained carrier Sep 6 01:16:54.493607 systemd[1]: Started iscsiuio.service. Sep 6 01:16:54.497256 systemd[1]: Starting iscsid.service... Sep 6 01:16:54.506287 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:16:54.506287 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 6 01:16:54.506287 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 01:16:54.506287 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 01:16:54.506287 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:16:54.506287 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 01:16:54.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.500895 ignition[628]: Ignition 2.14.0 Sep 6 01:16:54.505285 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 01:16:54.500910 ignition[628]: Stage: fetch-offline Sep 6 01:16:54.509826 systemd[1]: Starting ignition-fetch.service... Sep 6 01:16:54.501035 ignition[628]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:54.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:16:54.518612 systemd-networkd[712]: eth0: DHCPv4 address 10.230.51.142/30, gateway 10.230.51.141 acquired from 10.230.51.141 Sep 6 01:16:54.501092 ignition[628]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:54.523931 systemd[1]: Started iscsid.service. Sep 6 01:16:54.503126 ignition[628]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:54.527202 systemd[1]: Starting dracut-initqueue.service... Sep 6 01:16:54.503299 ignition[628]: parsed url from cmdline: "" Sep 6 01:16:54.503307 ignition[628]: no config URL provided Sep 6 01:16:54.503317 ignition[628]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:16:54.503344 ignition[628]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:16:54.503354 ignition[628]: failed to fetch config: resource requires networking Sep 6 01:16:54.503544 ignition[628]: Ignition finished successfully Sep 6 01:16:54.525010 ignition[719]: Ignition 2.14.0 Sep 6 01:16:54.525019 ignition[719]: Stage: fetch Sep 6 01:16:54.525737 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:54.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.525779 ignition[719]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:54.547194 systemd[1]: Finished dracut-initqueue.service. Sep 6 01:16:54.526980 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:54.547990 systemd[1]: Reached target remote-fs-pre.target. Sep 6 01:16:54.527112 ignition[719]: parsed url from cmdline: "" Sep 6 01:16:54.548602 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:16:54.527119 ignition[719]: no config URL provided Sep 6 01:16:54.549189 systemd[1]: Reached target remote-fs.target. Sep 6 01:16:54.527129 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:16:54.550879 systemd[1]: Starting dracut-pre-mount.service... Sep 6 01:16:54.527144 ignition[719]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:16:54.531032 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 6 01:16:54.533628 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 6 01:16:54.533675 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Sep 6 01:16:54.553820 ignition[719]: GET result: OK Sep 6 01:16:54.554524 ignition[719]: parsing config with SHA512: cc557988f4c4c462ee7fc338f637785ec9d463adc58c7187b7a7f336bfb56760d67e1257e843662a903207288641bd39aa1a3c4ab0cdef334a478603f697cb46 Sep 6 01:16:54.567941 systemd[1]: Finished dracut-pre-mount.service. Sep 6 01:16:54.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:16:54.573190 unknown[719]: fetched base config from "system" Sep 6 01:16:54.573209 unknown[719]: fetched base config from "system" Sep 6 01:16:54.573902 ignition[719]: fetch: fetch complete Sep 6 01:16:54.573245 unknown[719]: fetched user config from "openstack" Sep 6 01:16:54.573912 ignition[719]: fetch: fetch passed Sep 6 01:16:54.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.576009 systemd[1]: Finished ignition-fetch.service. Sep 6 01:16:54.573999 ignition[719]: Ignition finished successfully Sep 6 01:16:54.577917 systemd[1]: Starting ignition-kargs.service... Sep 6 01:16:54.590933 ignition[738]: Ignition 2.14.0 Sep 6 01:16:54.590953 ignition[738]: Stage: kargs Sep 6 01:16:54.591124 ignition[738]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:54.591158 ignition[738]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:54.592403 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:54.594022 ignition[738]: kargs: kargs passed Sep 6 01:16:54.595295 systemd[1]: Finished ignition-kargs.service. Sep 6 01:16:54.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.594098 ignition[738]: Ignition finished successfully Sep 6 01:16:54.597713 systemd[1]: Starting ignition-disks.service... Sep 6 01:16:54.608757 ignition[744]: Ignition 2.14.0 Sep 6 01:16:54.608773 ignition[744]: Stage: disks Sep 6 01:16:54.608917 ignition[744]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:54.608949 ignition[744]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:54.610239 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:54.611912 ignition[744]: disks: disks passed Sep 6 01:16:54.611980 ignition[744]: Ignition finished successfully Sep 6 01:16:54.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.612818 systemd[1]: Finished ignition-disks.service. Sep 6 01:16:54.613676 systemd[1]: Reached target initrd-root-device.target. Sep 6 01:16:54.614840 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:16:54.616071 systemd[1]: Reached target local-fs.target. Sep 6 01:16:54.617411 systemd[1]: Reached target sysinit.target. Sep 6 01:16:54.618656 systemd[1]: Reached target basic.target. Sep 6 01:16:54.620869 systemd[1]: Starting systemd-fsck-root.service... Sep 6 01:16:54.640051 systemd-fsck[752]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks Sep 6 01:16:54.644075 systemd[1]: Finished systemd-fsck-root.service. Sep 6 01:16:54.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.645773 systemd[1]: Mounting sysroot.mount... 
Sep 6 01:16:54.658509 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 01:16:54.658792 systemd[1]: Mounted sysroot.mount. Sep 6 01:16:54.659586 systemd[1]: Reached target initrd-root-fs.target. Sep 6 01:16:54.662022 systemd[1]: Mounting sysroot-usr.mount... Sep 6 01:16:54.663207 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 01:16:54.664036 systemd[1]: Starting flatcar-openstack-hostname.service... Sep 6 01:16:54.667043 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 01:16:54.667102 systemd[1]: Reached target ignition-diskful.target. Sep 6 01:16:54.671346 systemd[1]: Mounted sysroot-usr.mount. Sep 6 01:16:54.674584 systemd[1]: Starting initrd-setup-root.service... Sep 6 01:16:54.682514 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 01:16:54.695387 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory Sep 6 01:16:54.702560 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 01:16:54.709974 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 01:16:54.770877 systemd[1]: Finished initrd-setup-root.service. Sep 6 01:16:54.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.772809 systemd[1]: Starting ignition-mount.service... Sep 6 01:16:54.774350 systemd[1]: Starting sysroot-boot.service... Sep 6 01:16:54.788437 bash[806]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 01:16:54.808719 ignition[808]: INFO : Ignition 2.14.0 Sep 6 01:16:54.809807 ignition[808]: INFO : Stage: mount Sep 6 01:16:54.810782 ignition[808]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:54.811817 ignition[808]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:54.814853 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:54.815921 coreos-metadata[758]: Sep 06 01:16:54.815 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 6 01:16:54.816566 systemd[1]: Finished sysroot-boot.service. Sep 6 01:16:54.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.820212 ignition[808]: INFO : mount: mount passed Sep 6 01:16:54.821110 ignition[808]: INFO : Ignition finished successfully Sep 6 01:16:54.822615 systemd[1]: Finished ignition-mount.service. Sep 6 01:16:54.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.837814 coreos-metadata[758]: Sep 06 01:16:54.837 INFO Fetch successful Sep 6 01:16:54.838644 coreos-metadata[758]: Sep 06 01:16:54.838 INFO wrote hostname srv-rd74e.gb1.brightbox.com to /sysroot/etc/hostname Sep 6 01:16:54.842997 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 6 01:16:54.843174 systemd[1]: Finished flatcar-openstack-hostname.service. 
Sep 6 01:16:54.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:54.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:55.203524 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:16:55.215101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (815) Sep 6 01:16:55.218921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:16:55.218958 kernel: BTRFS info (device vda6): using free space tree Sep 6 01:16:55.218992 kernel: BTRFS info (device vda6): has skinny extents Sep 6 01:16:55.225633 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 01:16:55.227200 systemd[1]: Starting ignition-files.service... Sep 6 01:16:55.247946 ignition[835]: INFO : Ignition 2.14.0 Sep 6 01:16:55.247946 ignition[835]: INFO : Stage: files Sep 6 01:16:55.249686 ignition[835]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:55.249686 ignition[835]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:55.249686 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:55.253229 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Sep 6 01:16:55.254230 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 01:16:55.254230 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 01:16:55.257611 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 01:16:55.258799 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 01:16:55.260276 unknown[835]: wrote ssh authorized keys file for user: core Sep 6 01:16:55.261310 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 01:16:55.261310 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 01:16:55.261310 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 01:16:55.261310 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 01:16:55.261310 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 01:16:55.401966 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 01:16:55.616874 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 01:16:55.618343 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 01:16:55.618343 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 01:16:55.839933 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 6 01:16:56.147676 systemd-networkd[712]: eth0: Gained IPv6LL Sep 6 01:16:56.378733 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 01:16:56.380553 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 6 01:16:56.382019 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 01:16:56.383174 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:16:56.384917 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:16:56.384917 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:16:56.384917 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:16:56.384917 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:16:56.389277 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 01:16:56.631844 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 6 01:16:57.655561 systemd-networkd[712]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ce3:24:19ff:fee6:338e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ce3:24:19ff:fee6:338e/64 assigned by NDisc. Sep 6 01:16:57.655584 systemd-networkd[712]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Sep 6 01:16:58.981801 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:16:58.981801 ignition[835]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 01:16:58.981801 ignition[835]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(e): [started] processing unit "containerd.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(e): [finished] processing unit "containerd.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 6 01:16:58.989824 ignition[835]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 01:16:59.017586 kernel: kauditd_printk_skb: 28 callbacks suppressed Sep 6 01:16:59.017636 kernel: audit: type=1130 audit(1757121419.003:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.001374 systemd[1]: Finished ignition-files.service. Sep 6 01:16:59.018636 ignition[835]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:16:59.018636 ignition[835]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:16:59.018636 ignition[835]: INFO : files: files passed Sep 6 01:16:59.018636 ignition[835]: INFO : Ignition finished successfully Sep 6 01:16:59.041038 kernel: audit: type=1130 audit(1757121419.022:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:16:59.041062 kernel: audit: type=1131 audit(1757121419.022:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.041080 kernel: audit: type=1130 audit(1757121419.033:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.006840 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 01:16:59.014745 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 01:16:59.044341 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 01:16:59.016056 systemd[1]: Starting ignition-quench.service... Sep 6 01:16:59.020879 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 01:16:59.021011 systemd[1]: Finished ignition-quench.service. Sep 6 01:16:59.031245 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 01:16:59.034345 systemd[1]: Reached target ignition-complete.target. Sep 6 01:16:59.041360 systemd[1]: Starting initrd-parse-etc.service... Sep 6 01:16:59.063144 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 01:16:59.064081 systemd[1]: Finished initrd-parse-etc.service. Sep 6 01:16:59.088657 kernel: audit: type=1130 audit(1757121419.064:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.088694 kernel: audit: type=1131 audit(1757121419.064:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.064985 systemd[1]: Reached target initrd-fs.target. Sep 6 01:16:59.089295 systemd[1]: Reached target initrd.target. Sep 6 01:16:59.090526 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 01:16:59.091793 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 01:16:59.109692 systemd[1]: Finished dracut-pre-pivot.service. 
Sep 6 01:16:59.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.111573 systemd[1]: Starting initrd-cleanup.service... Sep 6 01:16:59.117586 kernel: audit: type=1130 audit(1757121419.109:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.125299 systemd[1]: Stopped target nss-lookup.target. Sep 6 01:16:59.126103 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 01:16:59.127559 systemd[1]: Stopped target timers.target. Sep 6 01:16:59.128806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 01:16:59.135594 kernel: audit: type=1131 audit(1757121419.129:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.128948 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 01:16:59.130227 systemd[1]: Stopped target initrd.target. Sep 6 01:16:59.136318 systemd[1]: Stopped target basic.target. Sep 6 01:16:59.137622 systemd[1]: Stopped target ignition-complete.target. Sep 6 01:16:59.138955 systemd[1]: Stopped target ignition-diskful.target. Sep 6 01:16:59.140320 systemd[1]: Stopped target initrd-root-device.target. Sep 6 01:16:59.141721 systemd[1]: Stopped target remote-fs.target. Sep 6 01:16:59.143062 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 01:16:59.144379 systemd[1]: Stopped target sysinit.target. Sep 6 01:16:59.145674 systemd[1]: Stopped target local-fs.target. Sep 6 01:16:59.146937 systemd[1]: Stopped target local-fs-pre.target. Sep 6 01:16:59.148345 systemd[1]: Stopped target swap.target. Sep 6 01:16:59.156012 kernel: audit: type=1131 audit(1757121419.150:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.149560 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 01:16:59.149711 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 01:16:59.163247 kernel: audit: type=1131 audit(1757121419.157:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.150982 systemd[1]: Stopped target cryptsetup.target. Sep 6 01:16:59.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.156697 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Sep 6 01:16:59.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.156851 systemd[1]: Stopped dracut-initqueue.service. Sep 6 01:16:59.158191 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 01:16:59.158404 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 01:16:59.164185 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 01:16:59.164388 systemd[1]: Stopped ignition-files.service. Sep 6 01:16:59.166657 systemd[1]: Stopping ignition-mount.service... Sep 6 01:16:59.172880 systemd[1]: Stopping iscsid.service... Sep 6 01:16:59.173551 iscsid[718]: iscsid shutting down. Sep 6 01:16:59.176217 systemd[1]: Stopping sysroot-boot.service... Sep 6 01:16:59.177647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 01:16:59.179821 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 01:16:59.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.181648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 01:16:59.185125 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 01:16:59.188640 ignition[873]: INFO : Ignition 2.14.0 Sep 6 01:16:59.188640 ignition[873]: INFO : Stage: umount Sep 6 01:16:59.190701 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:16:59.190701 ignition[873]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:16:59.190701 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:16:59.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.197788 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 01:16:59.198860 ignition[873]: INFO : umount: umount passed Sep 6 01:16:59.198860 ignition[873]: INFO : Ignition finished successfully Sep 6 01:16:59.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.198678 systemd[1]: Stopped iscsid.service. Sep 6 01:16:59.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.200828 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 01:16:59.201702 systemd[1]: Stopped ignition-mount.service. Sep 6 01:16:59.205590 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 01:16:59.209254 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 01:16:59.210198 systemd[1]: Finished initrd-cleanup.service. Sep 6 01:16:59.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:16:59.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.212283 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 01:16:59.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.212397 systemd[1]: Stopped sysroot-boot.service. Sep 6 01:16:59.214159 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 01:16:59.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.214222 systemd[1]: Stopped ignition-disks.service. Sep 6 01:16:59.219886 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 01:16:59.219971 systemd[1]: Stopped ignition-kargs.service. Sep 6 01:16:59.220674 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 01:16:59.220744 systemd[1]: Stopped ignition-fetch.service. Sep 6 01:16:59.221391 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 01:16:59.221467 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 01:16:59.222153 systemd[1]: Stopped target paths.target. Sep 6 01:16:59.222800 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 01:16:59.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.226551 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 01:16:59.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.227521 systemd[1]: Stopped target slices.target. Sep 6 01:16:59.228762 systemd[1]: Stopped target sockets.target. Sep 6 01:16:59.230225 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 01:16:59.230279 systemd[1]: Closed iscsid.socket. Sep 6 01:16:59.231496 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 01:16:59.231568 systemd[1]: Stopped ignition-setup.service. Sep 6 01:16:59.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.232740 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 01:16:59.232799 systemd[1]: Stopped initrd-setup-root.service. 
Sep 6 01:16:59.237150 systemd[1]: Stopping iscsiuio.service... Sep 6 01:16:59.238359 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 01:16:59.238545 systemd[1]: Stopped iscsiuio.service. Sep 6 01:16:59.239557 systemd[1]: Stopped target network.target. Sep 6 01:16:59.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.240642 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 01:16:59.240695 systemd[1]: Closed iscsiuio.socket. Sep 6 01:16:59.242016 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:16:59.244628 systemd-networkd[712]: eth0: DHCPv6 lease lost Sep 6 01:16:59.256000 audit: BPF prog-id=9 op=UNLOAD Sep 6 01:16:59.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.244668 systemd[1]: Stopping systemd-resolved.service... Sep 6 01:16:59.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.247787 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:16:59.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.247972 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:16:59.248990 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 01:16:59.249051 systemd[1]: Closed systemd-networkd.socket. Sep 6 01:16:59.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.252208 systemd[1]: Stopping network-cleanup.service... Sep 6 01:16:59.256166 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 01:16:59.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.256237 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 01:16:59.268000 audit: BPF prog-id=6 op=UNLOAD Sep 6 01:16:59.257755 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:16:59.257856 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:16:59.259159 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 01:16:59.259217 systemd[1]: Stopped systemd-modules-load.service. Sep 6 01:16:59.260179 systemd[1]: Stopping systemd-udevd.service... Sep 6 01:16:59.263079 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 01:16:59.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.263879 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 01:16:59.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:16:59.264034 systemd[1]: Stopped systemd-resolved.service. Sep 6 01:16:59.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.265732 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 01:16:59.265957 systemd[1]: Stopped systemd-udevd.service. Sep 6 01:16:59.268137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 01:16:59.268239 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 01:16:59.286959 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 01:16:59.287044 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 01:16:59.288116 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 01:16:59.288191 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 01:16:59.289546 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 01:16:59.289604 systemd[1]: Stopped dracut-cmdline.service. Sep 6 01:16:59.290925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 01:16:59.290982 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 01:16:59.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.293577 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 01:16:59.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.294443 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 01:16:59.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.294628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 01:16:59.305619 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 01:16:59.305718 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 01:16:59.306453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 01:16:59.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.306640 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 01:16:59.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:16:59.309120 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 01:16:59.309981 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 01:16:59.310144 systemd[1]: Stopped network-cleanup.service. Sep 6 01:16:59.311754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 6 01:16:59.311893 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 01:16:59.313079 systemd[1]: Reached target initrd-switch-root.target. Sep 6 01:16:59.315186 systemd[1]: Starting initrd-switch-root.service... Sep 6 01:16:59.327863 systemd[1]: Switching root. Sep 6 01:16:59.329000 audit: BPF prog-id=8 op=UNLOAD Sep 6 01:16:59.329000 audit: BPF prog-id=7 op=UNLOAD Sep 6 01:16:59.333000 audit: BPF prog-id=5 op=UNLOAD Sep 6 01:16:59.333000 audit: BPF prog-id=4 op=UNLOAD Sep 6 01:16:59.333000 audit: BPF prog-id=3 op=UNLOAD Sep 6 01:16:59.352420 systemd-journald[202]: Journal stopped Sep 6 01:17:03.306468 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Sep 6 01:17:03.306671 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 01:17:03.306720 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 01:17:03.306753 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 01:17:03.306795 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 01:17:03.306815 kernel: SELinux: policy capability open_perms=1 Sep 6 01:17:03.306840 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 01:17:03.306866 kernel: SELinux: policy capability always_check_network=0 Sep 6 01:17:03.306901 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 01:17:03.306930 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 01:17:03.306949 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 01:17:03.306984 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 01:17:03.312666 systemd[1]: Successfully loaded SELinux policy in 65.972ms. Sep 6 01:17:03.314204 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.726ms. Sep 6 01:17:03.314246 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:17:03.314270 systemd[1]: Detected virtualization kvm. Sep 6 01:17:03.314291 systemd[1]: Detected architecture x86-64. Sep 6 01:17:03.314325 systemd[1]: Detected first boot. Sep 6 01:17:03.314348 systemd[1]: Hostname set to . Sep 6 01:17:03.314379 systemd[1]: Initializing machine ID from VM UUID. Sep 6 01:17:03.314404 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 01:17:03.314431 systemd[1]: Populated /etc with preset unit settings. Sep 6 01:17:03.314452 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:17:03.314497 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:17:03.316585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:17:03.316641 systemd[1]: Queued start job for default target multi-user.target. Sep 6 01:17:03.316664 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 01:17:03.316686 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 01:17:03.316714 systemd[1]: Created slice system-addon\x2drun.slice. 
Sep 6 01:17:03.316748 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 01:17:03.316769 systemd[1]: Created slice system-getty.slice. Sep 6 01:17:03.316811 systemd[1]: Created slice system-modprobe.slice. Sep 6 01:17:03.316893 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 01:17:03.316916 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 01:17:03.316936 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 01:17:03.316956 systemd[1]: Created slice user.slice. Sep 6 01:17:03.317027 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:17:03.317056 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 01:17:03.317077 systemd[1]: Set up automount boot.automount. Sep 6 01:17:03.317102 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 01:17:03.317138 systemd[1]: Reached target integritysetup.target. Sep 6 01:17:03.317171 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:17:03.317198 systemd[1]: Reached target remote-fs.target. Sep 6 01:17:03.317219 systemd[1]: Reached target slices.target. Sep 6 01:17:03.317240 systemd[1]: Reached target swap.target. Sep 6 01:17:03.317268 systemd[1]: Reached target torcx.target. Sep 6 01:17:03.317289 systemd[1]: Reached target veritysetup.target. Sep 6 01:17:03.317319 systemd[1]: Listening on systemd-coredump.socket. Sep 6 01:17:03.317340 systemd[1]: Listening on systemd-initctl.socket. Sep 6 01:17:03.317360 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:17:03.317386 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:17:03.317412 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:17:03.317433 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:17:03.317453 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:17:03.317494 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:17:03.317517 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 01:17:03.321416 systemd[1]: Mounting dev-hugepages.mount... Sep 6 01:17:03.321465 systemd[1]: Mounting dev-mqueue.mount... Sep 6 01:17:03.322526 systemd[1]: Mounting media.mount... Sep 6 01:17:03.322554 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:17:03.322587 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 01:17:03.322623 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 01:17:03.322644 systemd[1]: Mounting tmp.mount... Sep 6 01:17:03.322671 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 01:17:03.322692 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:17:03.322727 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:17:03.322756 systemd[1]: Starting modprobe@configfs.service... Sep 6 01:17:03.322787 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:17:03.322807 systemd[1]: Starting modprobe@drm.service... Sep 6 01:17:03.322828 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:17:03.322847 systemd[1]: Starting modprobe@fuse.service... Sep 6 01:17:03.322875 systemd[1]: Starting modprobe@loop.service... Sep 6 01:17:03.322895 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 01:17:03.322922 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Sep 6 01:17:03.322958 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 01:17:03.322986 systemd[1]: Starting systemd-journald.service... Sep 6 01:17:03.323020 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:17:03.323049 systemd[1]: Starting systemd-network-generator.service... Sep 6 01:17:03.323070 kernel: fuse: init (API version 7.34) Sep 6 01:17:03.323097 systemd[1]: Starting systemd-remount-fs.service... Sep 6 01:17:03.323118 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:17:03.323144 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:17:03.323173 systemd[1]: Mounted dev-hugepages.mount. Sep 6 01:17:03.323215 systemd[1]: Mounted dev-mqueue.mount. Sep 6 01:17:03.323236 systemd[1]: Mounted media.mount. Sep 6 01:17:03.323262 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 01:17:03.323284 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 01:17:03.323304 systemd[1]: Mounted tmp.mount. Sep 6 01:17:03.323326 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:17:03.323352 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 01:17:03.323377 systemd-journald[1022]: Journal started Sep 6 01:17:03.323493 systemd-journald[1022]: Runtime Journal (/run/log/journal/761d5963981e4d698349c70384ecdeb4) is 4.7M, max 38.1M, 33.3M free. Sep 6 01:17:03.100000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:17:03.100000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 01:17:03.289000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:17:03.289000 audit[1022]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc3e738a80 a2=4000 a3=7ffc3e738b1c items=0 ppid=1 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:17:03.289000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:17:03.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.326513 systemd[1]: Finished modprobe@configfs.service. Sep 6 01:17:03.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.329519 systemd[1]: Started systemd-journald.service. Sep 6 01:17:03.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:17:03.335633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:17:03.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.335985 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:17:03.337222 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:17:03.337537 systemd[1]: Finished modprobe@drm.service. Sep 6 01:17:03.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.338549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:17:03.338767 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:17:03.339817 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 01:17:03.340686 systemd[1]: Finished modprobe@fuse.service. Sep 6 01:17:03.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.344190 kernel: loop: module loaded Sep 6 01:17:03.343034 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:17:03.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.345677 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 01:17:03.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.348804 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:17:03.349071 systemd[1]: Finished modprobe@loop.service. Sep 6 01:17:03.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 01:17:03.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.350171 systemd[1]: Finished systemd-network-generator.service. Sep 6 01:17:03.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.351271 systemd[1]: Finished systemd-remount-fs.service. Sep 6 01:17:03.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.352650 systemd[1]: Reached target network-pre.target. Sep 6 01:17:03.355386 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 01:17:03.359767 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 01:17:03.364202 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 01:17:03.368606 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 01:17:03.371270 systemd[1]: Starting systemd-journal-flush.service... Sep 6 01:17:03.372636 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:17:03.399172 systemd-journald[1022]: Time spent on flushing to /var/log/journal/761d5963981e4d698349c70384ecdeb4 is 46.131ms for 1233 entries. Sep 6 01:17:03.399172 systemd-journald[1022]: System Journal (/var/log/journal/761d5963981e4d698349c70384ecdeb4) is 8.0M, max 584.8M, 576.8M free. Sep 6 01:17:03.479924 systemd-journald[1022]: Received client request to flush runtime journal. Sep 6 01:17:03.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.375110 systemd[1]: Starting systemd-random-seed.service... Sep 6 01:17:03.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.376540 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:17:03.378427 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:17:03.382336 systemd[1]: Starting systemd-sysusers.service... Sep 6 01:17:03.390098 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 01:17:03.390923 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 01:17:03.399372 systemd[1]: Finished systemd-random-seed.service. Sep 6 01:17:03.402042 systemd[1]: Reached target first-boot-complete.target. 
Sep 6 01:17:03.422902 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:17:03.448946 systemd[1]: Finished systemd-sysusers.service. Sep 6 01:17:03.454144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:17:03.480976 systemd[1]: Finished systemd-journal-flush.service. Sep 6 01:17:03.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.509595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:17:03.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:03.561035 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:17:03.563872 systemd[1]: Starting systemd-udev-settle.service... Sep 6 01:17:03.575721 udevadm[1068]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 01:17:04.042236 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 01:17:04.053807 kernel: kauditd_printk_skb: 78 callbacks suppressed Sep 6 01:17:04.053917 kernel: audit: type=1130 audit(1757121424.045:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.048677 systemd[1]: Starting systemd-udevd.service... Sep 6 01:17:04.077998 systemd-udevd[1070]: Using default interface naming scheme 'v252'. Sep 6 01:17:04.119320 kernel: audit: type=1130 audit(1757121424.109:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.108789 systemd[1]: Started systemd-udevd.service. Sep 6 01:17:04.118095 systemd[1]: Starting systemd-networkd.service... Sep 6 01:17:04.130802 systemd[1]: Starting systemd-userdbd.service... Sep 6 01:17:04.192346 systemd[1]: Found device dev-ttyS0.device. Sep 6 01:17:04.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.218730 systemd[1]: Started systemd-userdbd.service. Sep 6 01:17:04.225495 kernel: audit: type=1130 audit(1757121424.219:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.311173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Sep 6 01:17:04.333616 systemd-networkd[1084]: lo: Link UP Sep 6 01:17:04.334107 systemd-networkd[1084]: lo: Gained carrier Sep 6 01:17:04.335185 systemd-networkd[1084]: Enumeration completed Sep 6 01:17:04.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.335491 systemd[1]: Started systemd-networkd.service. Sep 6 01:17:04.342299 systemd-networkd[1084]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:17:04.342504 kernel: audit: type=1130 audit(1757121424.335:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.345143 systemd-networkd[1084]: eth0: Link UP Sep 6 01:17:04.345250 systemd-networkd[1084]: eth0: Gained carrier Sep 6 01:17:04.359739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 6 01:17:04.359618 systemd-networkd[1084]: eth0: DHCPv4 address 10.230.51.142/30, gateway 10.230.51.141 acquired from 10.230.51.141 Sep 6 01:17:04.370498 kernel: ACPI: button: Power Button [PWRF] Sep 6 01:17:04.388523 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 01:17:04.409000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:17:04.427188 kernel: audit: type=1400 audit(1757121424.409:122): avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:17:04.427261 kernel: audit: type=1300 audit(1757121424.409:122): arch=c000003e syscall=175 success=yes exit=0 a0=55abf519a900 a1=338ec a2=7f4b53620bc5 a3=5 items=110 ppid=1070 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:17:04.409000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55abf519a900 a1=338ec a2=7f4b53620bc5 a3=5 items=110 ppid=1070 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:17:04.409000 audit: CWD cwd="/" Sep 6 01:17:04.434244 kernel: audit: type=1307 audit(1757121424.409:122): cwd="/" Sep 6 01:17:04.434292 kernel: audit: type=1302 audit(1757121424.409:122): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.439486 kernel: audit: type=1302 audit(1757121424.409:122): item=1 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=1 name=(null) inode=16480 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.444572 kernel: audit: type=1302 audit(1757121424.409:122): item=2 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=2 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=3 name=(null) inode=16481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=4 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=5 name=(null) inode=16482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=6 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=7 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=8 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=9 name=(null) inode=16484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=10 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=11 name=(null) inode=16485 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=12 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=13 name=(null) inode=16486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=14 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=15 name=(null) inode=16487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=16 name=(null) inode=16483 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=17 name=(null) inode=16488 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=18 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=19 name=(null) inode=16489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=20 name=(null) inode=16489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=21 name=(null) inode=16490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=22 name=(null) inode=16489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=23 name=(null) inode=16491 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=24 name=(null) inode=16489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=25 name=(null) inode=16492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=26 name=(null) inode=16489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=27 name=(null) inode=16493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=28 name=(null) inode=16489 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=29 name=(null) inode=16494 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=30 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=31 name=(null) inode=16495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=32 name=(null) inode=16495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=33 name=(null) inode=16496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=34 name=(null) inode=16495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=35 name=(null) inode=16497 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=36 name=(null) inode=16495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=37 name=(null) inode=16498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=38 name=(null) inode=16495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=39 name=(null) inode=16499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=40 name=(null) inode=16495 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=41 name=(null) inode=16500 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=42 name=(null) inode=16480 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=43 name=(null) inode=16501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=44 name=(null) inode=16501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=45 name=(null) inode=16502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=46 name=(null) inode=16501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=47 name=(null) inode=16503 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=48 name=(null) inode=16501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=49 name=(null) inode=16504 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=50 name=(null) inode=16501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=51 name=(null) inode=16505 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=52 name=(null) inode=16501 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=53 name=(null) inode=16506 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=55 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=56 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=57 name=(null) inode=16508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=58 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=59 name=(null) inode=16509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=60 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=61 name=(null) inode=16510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=62 name=(null) inode=16510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=63 name=(null) inode=16511 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=64 name=(null) inode=16510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=65 name=(null) inode=16512 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=66 name=(null) inode=16510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=67 name=(null) inode=16513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=68 name=(null) inode=16510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=69 name=(null) inode=16514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=70 name=(null) inode=16510 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=71 name=(null) inode=16515 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=72 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=73 name=(null) inode=16516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=74 name=(null) inode=16516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=75 name=(null) inode=16517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=76 name=(null) inode=16516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=77 name=(null) inode=16518 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=78 name=(null) inode=16516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=79 name=(null) inode=16519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=80 name=(null) inode=16516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=81 name=(null) inode=16520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH 
item=82 name=(null) inode=16516 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=83 name=(null) inode=16521 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=84 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=85 name=(null) inode=16522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=86 name=(null) inode=16522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=87 name=(null) inode=16523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=88 name=(null) inode=16522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=89 name=(null) inode=16524 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=90 name=(null) inode=16522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=91 name=(null) inode=16525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=92 name=(null) inode=16522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=93 name=(null) inode=16526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=94 name=(null) inode=16522 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=95 name=(null) inode=16527 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=96 name=(null) inode=16507 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=97 name=(null) inode=16528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=98 name=(null) inode=16528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=99 name=(null) inode=16529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=100 name=(null) inode=16528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=101 name=(null) inode=16530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=102 name=(null) inode=16528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=103 name=(null) inode=16531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=104 name=(null) inode=16528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=105 name=(null) inode=16532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=106 name=(null) inode=16528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=107 name=(null) inode=16533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PATH item=109 name=(null) inode=16534 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:17:04.409000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 01:17:04.477503 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Sep 6 01:17:04.489500 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 6 01:17:04.522826 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 6 01:17:04.523164 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 6 01:17:04.653318 systemd[1]: Finished systemd-udev-settle.service. Sep 6 01:17:04.656364 systemd[1]: Starting lvm2-activation-early.service... Sep 6 01:17:04.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.681221 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:17:04.712912 systemd[1]: Finished lvm2-activation-early.service. 
Sep 6 01:17:04.713778 systemd[1]: Reached target cryptsetup.target. Sep 6 01:17:04.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.716355 systemd[1]: Starting lvm2-activation.service... Sep 6 01:17:04.723355 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:17:04.752033 systemd[1]: Finished lvm2-activation.service. Sep 6 01:17:04.752860 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:17:04.753531 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 01:17:04.753577 systemd[1]: Reached target local-fs.target. Sep 6 01:17:04.754171 systemd[1]: Reached target machines.target. Sep 6 01:17:04.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.757015 systemd[1]: Starting ldconfig.service... Sep 6 01:17:04.758494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:17:04.758584 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:17:04.761685 systemd[1]: Starting systemd-boot-update.service... Sep 6 01:17:04.765825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 01:17:04.772348 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 01:17:04.775183 systemd[1]: Starting systemd-sysext.service... Sep 6 01:17:04.776704 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1105 (bootctl) Sep 6 01:17:04.778340 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 01:17:04.793455 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 01:17:04.798748 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 01:17:04.799065 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 01:17:04.846713 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 01:17:04.850944 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 01:17:04.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.852994 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 01:17:04.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.873285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 01:17:04.881533 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 01:17:04.908604 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 01:17:04.924918 (sd-sysext)[1122]: Using extensions 'kubernetes'. Sep 6 01:17:04.927596 (sd-sysext)[1122]: Merged extensions into '/usr'. 
Sep 6 01:17:04.949656 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Sep 6 01:17:04.949656 systemd-fsck[1120]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 01:17:04.958127 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 01:17:04.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:04.966593 systemd[1]: Mounting boot.mount... Sep 6 01:17:04.967187 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:17:04.969531 systemd[1]: Mounting usr-share-oem.mount... Sep 6 01:17:04.970554 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:17:04.972620 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:17:04.974875 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:17:04.982125 systemd[1]: Starting modprobe@loop.service... Sep 6 01:17:04.983638 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:17:04.983816 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:17:04.984019 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:17:04.990994 systemd[1]: Mounted usr-share-oem.mount. Sep 6 01:17:04.997108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:17:04.997351 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:17:05.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.003862 systemd[1]: Finished systemd-sysext.service. Sep 6 01:17:05.004893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:17:05.005135 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:17:05.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.009302 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:17:05.009638 systemd[1]: Finished modprobe@loop.service. 
Sep 6 01:17:05.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.016448 systemd[1]: Starting ensure-sysext.service... Sep 6 01:17:05.017150 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:17:05.017259 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.018861 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 01:17:05.032367 systemd[1]: Mounted boot.mount. Sep 6 01:17:05.046302 systemd[1]: Reloading. Sep 6 01:17:05.079039 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 01:17:05.086907 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 01:17:05.095567 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 01:17:05.184719 /usr/lib/systemd/system-generators/torcx-generator[1160]: time="2025-09-06T01:17:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:17:05.184817 /usr/lib/systemd/system-generators/torcx-generator[1160]: time="2025-09-06T01:17:05Z" level=info msg="torcx already run" Sep 6 01:17:05.300008 ldconfig[1104]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 01:17:05.362233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:17:05.362275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:17:05.393429 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:17:05.471529 systemd[1]: Finished ldconfig.service. Sep 6 01:17:05.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.476250 systemd[1]: Finished systemd-boot-update.service. Sep 6 01:17:05.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.478686 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 01:17:05.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:17:05.484022 systemd[1]: Starting audit-rules.service... Sep 6 01:17:05.486622 systemd[1]: Starting clean-ca-certificates.service... Sep 6 01:17:05.489659 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 01:17:05.507199 systemd[1]: Starting systemd-resolved.service... Sep 6 01:17:05.513581 systemd[1]: Starting systemd-timesyncd.service... Sep 6 01:17:05.516972 systemd[1]: Starting systemd-update-utmp.service... Sep 6 01:17:05.522569 systemd[1]: Finished clean-ca-certificates.service. Sep 6 01:17:05.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.530000 audit[1223]: SYSTEM_BOOT pid=1223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.536716 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:17:05.542827 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.545718 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:17:05.550737 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:17:05.554329 systemd[1]: Starting modprobe@loop.service... Sep 6 01:17:05.559549 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.559896 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:17:05.560133 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:17:05.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.564688 systemd[1]: Finished systemd-update-utmp.service. Sep 6 01:17:05.566291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:17:05.566612 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:17:05.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.572241 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.573931 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:17:05.575964 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.576165 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 6 01:17:05.576390 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:17:05.578120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:17:05.578379 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:17:05.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.581159 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:17:05.581387 systemd[1]: Finished modprobe@loop.service. Sep 6 01:17:05.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.582353 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:17:05.588841 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.593016 systemd[1]: Starting modprobe@drm.service... Sep 6 01:17:05.597589 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:17:05.607695 systemd[1]: Starting modprobe@loop.service... Sep 6 01:17:05.611157 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.611324 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:17:05.613591 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:17:05.614657 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:17:05.618732 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 01:17:05.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.624304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:17:05.624567 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 6 01:17:05.625765 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:17:05.626017 systemd[1]: Finished modprobe@drm.service. Sep 6 01:17:05.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.627252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:17:05.627607 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:17:05.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.629412 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:17:05.630634 systemd[1]: Finished modprobe@loop.service. Sep 6 01:17:05.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.632240 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:17:05.632406 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.637597 systemd[1]: Starting systemd-update-done.service... Sep 6 01:17:05.638965 systemd[1]: Finished ensure-sysext.service. Sep 6 01:17:05.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:17:05.657688 systemd[1]: Finished systemd-update-done.service. Sep 6 01:17:05.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:17:05.659167 augenrules[1257]: No rules Sep 6 01:17:05.658000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 01:17:05.658000 audit[1257]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffece11de70 a2=420 a3=0 items=0 ppid=1216 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:17:05.658000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 01:17:05.660109 systemd[1]: Finished audit-rules.service. Sep 6 01:17:05.684343 systemd-networkd[1084]: eth0: Gained IPv6LL Sep 6 01:17:05.688621 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:17:05.718743 systemd[1]: Started systemd-timesyncd.service. Sep 6 01:17:05.719627 systemd[1]: Reached target time-set.target. Sep 6 01:17:05.729751 systemd-resolved[1220]: Positive Trust Anchors: Sep 6 01:17:05.730318 systemd-resolved[1220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:17:05.730461 systemd-resolved[1220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:17:05.737982 systemd-resolved[1220]: Using system hostname 'srv-rd74e.gb1.brightbox.com'. Sep 6 01:17:05.740727 systemd[1]: Started systemd-resolved.service. Sep 6 01:17:05.741579 systemd[1]: Reached target network.target. Sep 6 01:17:05.742194 systemd[1]: Reached target network-online.target. Sep 6 01:17:05.742799 systemd[1]: Reached target nss-lookup.target. Sep 6 01:17:05.743426 systemd[1]: Reached target sysinit.target. Sep 6 01:17:05.744157 systemd[1]: Started motdgen.path. Sep 6 01:17:05.744777 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 01:17:05.745721 systemd[1]: Started logrotate.timer. Sep 6 01:17:05.746439 systemd[1]: Started mdadm.timer. Sep 6 01:17:05.747044 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 01:17:05.747696 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:17:05.747750 systemd[1]: Reached target paths.target. Sep 6 01:17:05.748391 systemd[1]: Reached target timers.target. Sep 6 01:17:05.749585 systemd[1]: Listening on dbus.socket. Sep 6 01:17:05.752280 systemd[1]: Starting docker.socket... Sep 6 01:17:05.755055 systemd[1]: Listening on sshd.socket. Sep 6 01:17:05.755911 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:17:05.756457 systemd[1]: Listening on docker.socket. Sep 6 01:17:05.757262 systemd[1]: Reached target sockets.target. Sep 6 01:17:05.757992 systemd[1]: Reached target basic.target. Sep 6 01:17:05.758893 systemd[1]: System is tainted: cgroupsv1 Sep 6 01:17:05.759153 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
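The PROCTITLE record above is the audit subsystem's hex encoding of the command line that loaded the rules, with NUL bytes separating the arguments. A minimal decoding sketch (plain Python, not part of anything running on this host) recovers that command from the logged value:

    # Decode an audit PROCTITLE value: hex string, arguments separated by NUL bytes.
    proctitle_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    args = bytes.fromhex(proctitle_hex).split(b"\x00")
    print([a.decode() for a in args])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

So the SYSCALL/CONFIG_CHANGE pair corresponds to auditctl loading /etc/audit/audit.rules, consistent with the "No rules" message from augenrules just before audit-rules.service finishes.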
Sep 6 01:17:05.759305 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:17:05.761156 systemd[1]: Starting containerd.service... Sep 6 01:17:05.763555 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 01:17:05.766198 systemd[1]: Starting dbus.service... Sep 6 01:17:05.772004 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 01:17:05.777365 systemd[1]: Starting extend-filesystems.service... Sep 6 01:17:05.779008 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 01:17:05.783096 jq[1272]: false Sep 6 01:17:05.783634 systemd[1]: Starting kubelet.service... Sep 6 01:17:05.786898 systemd[1]: Starting motdgen.service... Sep 6 01:17:05.795089 systemd[1]: Starting prepare-helm.service... Sep 6 01:17:05.807030 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 01:17:05.809850 systemd[1]: Starting sshd-keygen.service... Sep 6 01:17:05.818889 systemd[1]: Starting systemd-logind.service... Sep 6 01:17:05.824642 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:17:05.824797 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 01:17:05.828188 systemd[1]: Starting update-engine.service... Sep 6 01:17:05.839255 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 01:17:05.843656 jq[1294]: true Sep 6 01:17:05.848918 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 01:17:05.849311 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 01:17:05.852360 extend-filesystems[1275]: Found loop1 Sep 6 01:17:05.861724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 01:17:05.862121 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 01:17:05.871239 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:17:05.871355 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:17:05.882818 tar[1298]: linux-amd64/helm Sep 6 01:17:05.892924 dbus-daemon[1271]: [system] SELinux support is enabled Sep 6 01:17:05.893736 systemd[1]: Started dbus.service. Sep 6 01:17:06.600250 systemd-resolved[1220]: Clock change detected. Flushing caches. Sep 6 01:17:06.600410 systemd-timesyncd[1222]: Contacted time server 178.62.68.79:123 (0.flatcar.pool.ntp.org). Sep 6 01:17:06.600501 systemd-timesyncd[1222]: Initial clock synchronization to Sat 2025-09-06 01:17:06.600184 UTC. Sep 6 01:17:06.601120 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 01:17:06.601171 systemd[1]: Reached target system-config.target. Sep 6 01:17:06.601897 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 01:17:06.601940 systemd[1]: Reached target user-config.target. 
Sep 6 01:17:06.604604 jq[1300]: true Sep 6 01:17:06.606265 extend-filesystems[1275]: Found vda Sep 6 01:17:06.607917 extend-filesystems[1275]: Found vda1 Sep 6 01:17:06.613040 extend-filesystems[1275]: Found vda2 Sep 6 01:17:06.613040 extend-filesystems[1275]: Found vda3 Sep 6 01:17:06.613040 extend-filesystems[1275]: Found usr Sep 6 01:17:06.613040 extend-filesystems[1275]: Found vda4 Sep 6 01:17:06.613040 extend-filesystems[1275]: Found vda6 Sep 6 01:17:06.613040 extend-filesystems[1275]: Found vda7 Sep 6 01:17:06.613040 extend-filesystems[1275]: Found vda9 Sep 6 01:17:06.613040 extend-filesystems[1275]: Checking size of /dev/vda9 Sep 6 01:17:06.616341 dbus-daemon[1271]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1084 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 6 01:17:06.646930 systemd[1]: Starting systemd-hostnamed.service... Sep 6 01:17:06.620934 dbus-daemon[1271]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 01:17:06.648784 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 01:17:06.649216 systemd[1]: Finished motdgen.service. Sep 6 01:17:06.699343 update_engine[1292]: I0906 01:17:06.698375 1292 main.cc:92] Flatcar Update Engine starting Sep 6 01:17:06.703513 extend-filesystems[1275]: Resized partition /dev/vda9 Sep 6 01:17:06.706316 systemd[1]: Started update-engine.service. Sep 6 01:17:06.706697 update_engine[1292]: I0906 01:17:06.706393 1292 update_check_scheduler.cc:74] Next update check in 11m21s Sep 6 01:17:06.707645 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 01:17:06.709899 systemd[1]: Started locksmithd.service. Sep 6 01:17:06.717045 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 6 01:17:06.718930 systemd[1]: Created slice system-sshd.slice. Sep 6 01:17:06.778873 bash[1334]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:17:06.779832 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 01:17:06.888090 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 6 01:17:06.912171 env[1301]: time="2025-09-06T01:17:06.879316075Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 01:17:06.908397 systemd[1]: Started systemd-hostnamed.service. Sep 6 01:17:06.908153 dbus-daemon[1271]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 01:17:06.909834 dbus-daemon[1271]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1326 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 01:17:06.915565 systemd[1]: Starting polkit.service... Sep 6 01:17:06.918967 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 01:17:06.918967 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 6 01:17:06.918967 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 6 01:17:06.936515 extend-filesystems[1275]: Resized filesystem in /dev/vda9 Sep 6 01:17:06.920897 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 01:17:06.921313 systemd[1]: Finished extend-filesystems.service. 
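The extend-filesystems/resize2fs lines above grow /dev/vda9 online from 1617920 to 15121403 blocks at 4 KiB per block. A quick arithmetic check of what that amounts to (plain Python; the figures are taken directly from the log):

    # Sizes implied by the resize2fs output above (4 KiB blocks).
    block = 4096
    old_blocks, new_blocks = 1_617_920, 15_121_403
    print(f"old: {old_blocks * block / 2**30:.1f} GiB")   # ~6.2 GiB
    print(f"new: {new_blocks * block / 2**30:.1f} GiB")   # ~57.7 GiB

In other words, the root filesystem is expanded from roughly 6 GiB to the full ~58 GiB of the underlying partition while mounted at /.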
Sep 6 01:17:06.945364 systemd-logind[1289]: Watching system buttons on /dev/input/event2 (Power Button) Sep 6 01:17:06.945419 systemd-logind[1289]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 01:17:06.953763 systemd-logind[1289]: New seat seat0. Sep 6 01:17:06.954903 polkitd[1343]: Started polkitd version 121 Sep 6 01:17:06.959033 systemd[1]: Started systemd-logind.service. Sep 6 01:17:06.990134 polkitd[1343]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 01:17:06.991509 polkitd[1343]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 01:17:06.996535 polkitd[1343]: Finished loading, compiling and executing 2 rules Sep 6 01:17:06.998171 dbus-daemon[1271]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 01:17:06.998376 systemd[1]: Started polkit.service. Sep 6 01:17:07.002118 polkitd[1343]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 01:17:07.032476 systemd-hostnamed[1326]: Hostname set to (static) Sep 6 01:17:07.042109 systemd-networkd[1084]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ce3:24:19ff:fee6:338e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ce3:24:19ff:fee6:338e/64 assigned by NDisc. Sep 6 01:17:07.042121 systemd-networkd[1084]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 6 01:17:07.045257 env[1301]: time="2025-09-06T01:17:07.045193714Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 01:17:07.051334 env[1301]: time="2025-09-06T01:17:07.051298198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:17:07.061226 env[1301]: time="2025-09-06T01:17:07.060852226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:17:07.061226 env[1301]: time="2025-09-06T01:17:07.060901347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:17:07.061425 env[1301]: time="2025-09-06T01:17:07.061303520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:17:07.061425 env[1301]: time="2025-09-06T01:17:07.061331509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 01:17:07.061425 env[1301]: time="2025-09-06T01:17:07.061352856Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 01:17:07.061425 env[1301]: time="2025-09-06T01:17:07.061368642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 01:17:07.061596 env[1301]: time="2025-09-06T01:17:07.061527877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:17:07.062111 env[1301]: time="2025-09-06T01:17:07.062081867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 01:17:07.062330 env[1301]: time="2025-09-06T01:17:07.062293498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:17:07.062330 env[1301]: time="2025-09-06T01:17:07.062328012Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 01:17:07.062476 env[1301]: time="2025-09-06T01:17:07.062409202Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 01:17:07.062476 env[1301]: time="2025-09-06T01:17:07.062429014Z" level=info msg="metadata content store policy set" policy=shared Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074211806Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074257729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074279046Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074346587Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074371378Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074391112Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074411234Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074430437Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074448445Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074478943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074499859Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 01:17:07.074598 env[1301]: time="2025-09-06T01:17:07.074526236Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 01:17:07.075221 env[1301]: time="2025-09-06T01:17:07.074680402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 01:17:07.075221 env[1301]: time="2025-09-06T01:17:07.074864579Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 01:17:07.075444 env[1301]: time="2025-09-06T01:17:07.075363524Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 6 01:17:07.075519 env[1301]: time="2025-09-06T01:17:07.075451798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075519 env[1301]: time="2025-09-06T01:17:07.075504947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 01:17:07.075616 env[1301]: time="2025-09-06T01:17:07.075588874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075673 env[1301]: time="2025-09-06T01:17:07.075622345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075673 env[1301]: time="2025-09-06T01:17:07.075649476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075751 env[1301]: time="2025-09-06T01:17:07.075674425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075751 env[1301]: time="2025-09-06T01:17:07.075693999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075751 env[1301]: time="2025-09-06T01:17:07.075711279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075751 env[1301]: time="2025-09-06T01:17:07.075728261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075751 env[1301]: time="2025-09-06T01:17:07.075746425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.075979 env[1301]: time="2025-09-06T01:17:07.075767794Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 01:17:07.076042 env[1301]: time="2025-09-06T01:17:07.075987769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.076042 env[1301]: time="2025-09-06T01:17:07.076026161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.076145 env[1301]: time="2025-09-06T01:17:07.076049737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 01:17:07.076145 env[1301]: time="2025-09-06T01:17:07.076068188Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 01:17:07.076145 env[1301]: time="2025-09-06T01:17:07.076089059Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 01:17:07.076145 env[1301]: time="2025-09-06T01:17:07.076106019Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 01:17:07.076304 env[1301]: time="2025-09-06T01:17:07.076156579Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 01:17:07.076304 env[1301]: time="2025-09-06T01:17:07.076227124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 01:17:07.076755 env[1301]: time="2025-09-06T01:17:07.076666022Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 01:17:07.079217 env[1301]: time="2025-09-06T01:17:07.076757707Z" level=info msg="Connect containerd service" Sep 6 01:17:07.079217 env[1301]: time="2025-09-06T01:17:07.076842583Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 01:17:07.086060 env[1301]: time="2025-09-06T01:17:07.085175282Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:17:07.086060 env[1301]: time="2025-09-06T01:17:07.085968293Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 01:17:07.086205 env[1301]: time="2025-09-06T01:17:07.086062162Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 01:17:07.086301 systemd[1]: Started containerd.service. 
Sep 6 01:17:07.088045 env[1301]: time="2025-09-06T01:17:07.086376768Z" level=info msg="Start subscribing containerd event" Sep 6 01:17:07.088045 env[1301]: time="2025-09-06T01:17:07.086495955Z" level=info msg="Start recovering state" Sep 6 01:17:07.088045 env[1301]: time="2025-09-06T01:17:07.086644575Z" level=info msg="Start event monitor" Sep 6 01:17:07.088045 env[1301]: time="2025-09-06T01:17:07.086699570Z" level=info msg="Start snapshots syncer" Sep 6 01:17:07.088045 env[1301]: time="2025-09-06T01:17:07.086725046Z" level=info msg="Start cni network conf syncer for default" Sep 6 01:17:07.088045 env[1301]: time="2025-09-06T01:17:07.086740088Z" level=info msg="Start streaming server" Sep 6 01:17:07.088416 env[1301]: time="2025-09-06T01:17:07.088388534Z" level=info msg="containerd successfully booted in 0.219095s" Sep 6 01:17:07.458248 tar[1298]: linux-amd64/LICENSE Sep 6 01:17:07.458592 tar[1298]: linux-amd64/README.md Sep 6 01:17:07.465347 systemd[1]: Finished prepare-helm.service. Sep 6 01:17:07.653403 locksmithd[1336]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 01:17:08.197809 systemd[1]: Started kubelet.service. Sep 6 01:17:08.338146 sshd_keygen[1293]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 01:17:08.368636 systemd[1]: Finished sshd-keygen.service. Sep 6 01:17:08.376897 systemd[1]: Starting issuegen.service... Sep 6 01:17:08.379643 systemd[1]: Started sshd@0-10.230.51.142:22-139.178.89.65:36344.service. Sep 6 01:17:08.384811 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 01:17:08.385200 systemd[1]: Finished issuegen.service. Sep 6 01:17:08.393234 systemd[1]: Starting systemd-user-sessions.service... Sep 6 01:17:08.409794 systemd[1]: Finished systemd-user-sessions.service. Sep 6 01:17:08.414441 systemd[1]: Started getty@tty1.service. Sep 6 01:17:08.417353 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 01:17:08.420374 systemd[1]: Reached target getty.target. Sep 6 01:17:08.859168 kubelet[1368]: E0906 01:17:08.859077 1368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:17:08.861325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:17:08.861609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:17:09.312922 sshd[1384]: Accepted publickey for core from 139.178.89.65 port 36344 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:09.316027 sshd[1384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:09.332523 systemd[1]: Created slice user-500.slice. Sep 6 01:17:09.334991 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 01:17:09.342156 systemd-logind[1289]: New session 1 of user core. Sep 6 01:17:09.353207 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 01:17:09.357283 systemd[1]: Starting user@500.service... Sep 6 01:17:09.364303 (systemd)[1397]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:09.478289 systemd[1397]: Queued start job for default target default.target. Sep 6 01:17:09.479267 systemd[1397]: Reached target paths.target. Sep 6 01:17:09.479483 systemd[1397]: Reached target sockets.target. 
Sep 6 01:17:09.479664 systemd[1397]: Reached target timers.target. Sep 6 01:17:09.479891 systemd[1397]: Reached target basic.target. Sep 6 01:17:09.480170 systemd[1397]: Reached target default.target. Sep 6 01:17:09.480328 systemd[1]: Started user@500.service. Sep 6 01:17:09.480526 systemd[1397]: Startup finished in 107ms. Sep 6 01:17:09.484715 systemd[1]: Started session-1.scope. Sep 6 01:17:10.114255 systemd[1]: Started sshd@1-10.230.51.142:22-139.178.89.65:32994.service. Sep 6 01:17:11.010185 sshd[1407]: Accepted publickey for core from 139.178.89.65 port 32994 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:11.012223 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:11.020076 systemd-logind[1289]: New session 2 of user core. Sep 6 01:17:11.020953 systemd[1]: Started session-2.scope. Sep 6 01:17:11.634545 sshd[1407]: pam_unix(sshd:session): session closed for user core Sep 6 01:17:11.638792 systemd[1]: sshd@1-10.230.51.142:22-139.178.89.65:32994.service: Deactivated successfully. Sep 6 01:17:11.640241 systemd-logind[1289]: Session 2 logged out. Waiting for processes to exit. Sep 6 01:17:11.640340 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 01:17:11.642257 systemd-logind[1289]: Removed session 2. Sep 6 01:17:11.798907 systemd[1]: Started sshd@2-10.230.51.142:22-139.178.89.65:32998.service. Sep 6 01:17:12.747106 sshd[1414]: Accepted publickey for core from 139.178.89.65 port 32998 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:12.748915 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:12.755696 systemd-logind[1289]: New session 3 of user core. Sep 6 01:17:12.756409 systemd[1]: Started session-3.scope. Sep 6 01:17:13.410389 sshd[1414]: pam_unix(sshd:session): session closed for user core Sep 6 01:17:13.413908 systemd[1]: sshd@2-10.230.51.142:22-139.178.89.65:32998.service: Deactivated successfully. Sep 6 01:17:13.415320 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 01:17:13.415338 systemd-logind[1289]: Session 3 logged out. Waiting for processes to exit. Sep 6 01:17:13.416789 systemd-logind[1289]: Removed session 3. Sep 6 01:17:13.615629 coreos-metadata[1270]: Sep 06 01:17:13.615 WARN failed to locate config-drive, using the metadata service API instead Sep 6 01:17:13.669425 coreos-metadata[1270]: Sep 06 01:17:13.669 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 6 01:17:13.709210 coreos-metadata[1270]: Sep 06 01:17:13.709 INFO Fetch successful Sep 6 01:17:13.709380 coreos-metadata[1270]: Sep 06 01:17:13.709 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 01:17:13.742117 coreos-metadata[1270]: Sep 06 01:17:13.742 INFO Fetch successful Sep 6 01:17:13.744181 unknown[1270]: wrote ssh authorized keys file for user: core Sep 6 01:17:13.757161 update-ssh-keys[1424]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:17:13.757705 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 01:17:13.758177 systemd[1]: Reached target multi-user.target. Sep 6 01:17:13.760233 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 01:17:13.773711 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 01:17:13.774040 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 01:17:13.774309 systemd[1]: Startup finished in 8.974s (kernel) + 13.548s (userspace) = 22.523s. 
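With no config-drive found, coreos-metadata above falls back to the metadata service and fetches the SSH key over HTTP before writing /home/core/.ssh/authorized_keys. A rough sketch of those two requests (Python urllib; the URLs are the ones logged, everything else is illustrative and only works from a host that can reach 169.254.169.254):

    # Fetch the SSH public key the way coreos-metadata does when no config-drive is present.
    from urllib.request import urlopen

    base = "http://169.254.169.254/latest/meta-data"
    index = urlopen(f"{base}/public-keys").read().decode()            # e.g. "0=<key name>"
    key = urlopen(f"{base}/public-keys/0/openssh-key").read().decode()
    print(index, key, sep="\n")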
Sep 6 01:17:19.113173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 01:17:19.113532 systemd[1]: Stopped kubelet.service. Sep 6 01:17:19.116401 systemd[1]: Starting kubelet.service... Sep 6 01:17:19.311251 systemd[1]: Started kubelet.service. Sep 6 01:17:19.427490 kubelet[1437]: E0906 01:17:19.427401 1437 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:17:19.431637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:17:19.431939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:17:23.549366 systemd[1]: Started sshd@3-10.230.51.142:22-139.178.89.65:45566.service. Sep 6 01:17:24.446289 sshd[1443]: Accepted publickey for core from 139.178.89.65 port 45566 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:24.448567 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:24.456578 systemd-logind[1289]: New session 4 of user core. Sep 6 01:17:24.458476 systemd[1]: Started session-4.scope. Sep 6 01:17:25.070407 sshd[1443]: pam_unix(sshd:session): session closed for user core Sep 6 01:17:25.075210 systemd[1]: sshd@3-10.230.51.142:22-139.178.89.65:45566.service: Deactivated successfully. Sep 6 01:17:25.076384 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 01:17:25.077808 systemd-logind[1289]: Session 4 logged out. Waiting for processes to exit. Sep 6 01:17:25.079589 systemd-logind[1289]: Removed session 4. Sep 6 01:17:25.218430 systemd[1]: Started sshd@4-10.230.51.142:22-139.178.89.65:45570.service. Sep 6 01:17:26.111930 sshd[1450]: Accepted publickey for core from 139.178.89.65 port 45570 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:26.114518 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:26.121313 systemd[1]: Started session-5.scope. Sep 6 01:17:26.122198 systemd-logind[1289]: New session 5 of user core. Sep 6 01:17:26.733403 sshd[1450]: pam_unix(sshd:session): session closed for user core Sep 6 01:17:26.737335 systemd[1]: sshd@4-10.230.51.142:22-139.178.89.65:45570.service: Deactivated successfully. Sep 6 01:17:26.738394 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 01:17:26.740268 systemd-logind[1289]: Session 5 logged out. Waiting for processes to exit. Sep 6 01:17:26.741846 systemd-logind[1289]: Removed session 5. Sep 6 01:17:26.880249 systemd[1]: Started sshd@5-10.230.51.142:22-139.178.89.65:45572.service. Sep 6 01:17:27.771310 sshd[1457]: Accepted publickey for core from 139.178.89.65 port 45572 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:27.773729 sshd[1457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:27.780204 systemd-logind[1289]: New session 6 of user core. Sep 6 01:17:27.781064 systemd[1]: Started session-6.scope. Sep 6 01:17:28.393691 sshd[1457]: pam_unix(sshd:session): session closed for user core Sep 6 01:17:28.397299 systemd[1]: sshd@5-10.230.51.142:22-139.178.89.65:45572.service: Deactivated successfully. Sep 6 01:17:28.398311 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:17:28.400242 systemd-logind[1289]: Session 6 logged out. Waiting for processes to exit. 
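The kubelet failures repeated above all come down to one missing file, /var/lib/kubelet/config.yaml, so systemd keeps scheduling restarts until it exists. Purely as a hypothetical illustration of what such a file contains (this is not the configuration later used on this host, which the log never shows being written), a minimal KubeletConfiguration could be dropped in as sketched below; cgroupfs as the driver matches the container-manager dump further down in this log:

    # Hypothetical illustration only: the unit keeps restarting because this file
    # does not exist yet. Writing it requires root and is normally done by the
    # cluster bootstrap tooling, not by hand.
    from pathlib import Path

    minimal_config = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    """
    Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    Path("/var/lib/kubelet/config.yaml").write_text(minimal_config)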
Sep 6 01:17:28.402744 systemd-logind[1289]: Removed session 6. Sep 6 01:17:28.540031 systemd[1]: Started sshd@6-10.230.51.142:22-139.178.89.65:45576.service. Sep 6 01:17:29.432143 sshd[1464]: Accepted publickey for core from 139.178.89.65 port 45576 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:17:29.434710 sshd[1464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:17:29.436056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:17:29.436439 systemd[1]: Stopped kubelet.service. Sep 6 01:17:29.439183 systemd[1]: Starting kubelet.service... Sep 6 01:17:29.445657 systemd[1]: Started session-7.scope. Sep 6 01:17:29.447200 systemd-logind[1289]: New session 7 of user core. Sep 6 01:17:29.603393 systemd[1]: Started kubelet.service. Sep 6 01:17:29.684336 kubelet[1475]: E0906 01:17:29.684178 1475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:17:29.686581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:17:29.686886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:17:29.924480 sudo[1483]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:17:29.925581 sudo[1483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:17:29.964566 systemd[1]: Starting docker.service... Sep 6 01:17:30.032761 env[1493]: time="2025-09-06T01:17:30.032668822Z" level=info msg="Starting up" Sep 6 01:17:30.035802 env[1493]: time="2025-09-06T01:17:30.035767392Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:17:30.035802 env[1493]: time="2025-09-06T01:17:30.035797595Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:17:30.035954 env[1493]: time="2025-09-06T01:17:30.035826190Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:17:30.035954 env[1493]: time="2025-09-06T01:17:30.035850787Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:17:30.040344 env[1493]: time="2025-09-06T01:17:30.040300511Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:17:30.040344 env[1493]: time="2025-09-06T01:17:30.040334406Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:17:30.040502 env[1493]: time="2025-09-06T01:17:30.040356429Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:17:30.040502 env[1493]: time="2025-09-06T01:17:30.040373358Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:17:30.049448 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport289949254-merged.mount: Deactivated successfully. 
Sep 6 01:17:30.094312 env[1493]: time="2025-09-06T01:17:30.094264288Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 01:17:30.094580 env[1493]: time="2025-09-06T01:17:30.094552985Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 01:17:30.095076 env[1493]: time="2025-09-06T01:17:30.095049477Z" level=info msg="Loading containers: start." Sep 6 01:17:30.276477 kernel: Initializing XFRM netlink socket Sep 6 01:17:30.320540 env[1493]: time="2025-09-06T01:17:30.320474300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 01:17:30.416541 systemd-networkd[1084]: docker0: Link UP Sep 6 01:17:30.434066 env[1493]: time="2025-09-06T01:17:30.433993048Z" level=info msg="Loading containers: done." Sep 6 01:17:30.470468 env[1493]: time="2025-09-06T01:17:30.470407109Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 01:17:30.471144 env[1493]: time="2025-09-06T01:17:30.471114938Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 01:17:30.471514 env[1493]: time="2025-09-06T01:17:30.471476904Z" level=info msg="Daemon has completed initialization" Sep 6 01:17:30.489535 systemd[1]: Started docker.service. Sep 6 01:17:30.498549 env[1493]: time="2025-09-06T01:17:30.498458294Z" level=info msg="API listen on /run/docker.sock" Sep 6 01:17:31.648290 env[1301]: time="2025-09-06T01:17:31.648067137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 01:17:32.492927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520070913.mount: Deactivated successfully. Sep 6 01:17:36.800880 env[1301]: time="2025-09-06T01:17:36.800814752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:36.803151 env[1301]: time="2025-09-06T01:17:36.803111851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:36.805623 env[1301]: time="2025-09-06T01:17:36.805574287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:36.807887 env[1301]: time="2025-09-06T01:17:36.807850406Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:36.809253 env[1301]: time="2025-09-06T01:17:36.809208734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 01:17:36.812664 env[1301]: time="2025-09-06T01:17:36.812542358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 01:17:37.087862 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 6 01:17:39.927274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
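Once the Docker daemon above reports "API listen on /run/docker.sock", it can be queried over that Unix socket with plain HTTP. A small sketch using only the Python standard library (illustrative; it assumes read access to the socket):

    # Talk to the Docker daemon the log says is listening on /run/docker.sock.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    # Headers plus JSON; per the log above this daemon is version 20.10.23 on overlay2.
    print(reply.decode(errors="replace"))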
Sep 6 01:17:39.927584 systemd[1]: Stopped kubelet.service. Sep 6 01:17:39.930451 systemd[1]: Starting kubelet.service... Sep 6 01:17:40.118908 systemd[1]: Started kubelet.service. Sep 6 01:17:40.227623 kubelet[1630]: E0906 01:17:40.227359 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:17:40.230029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:17:40.230306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:17:42.867792 env[1301]: time="2025-09-06T01:17:42.867612787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:42.870348 env[1301]: time="2025-09-06T01:17:42.870303187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:42.872921 env[1301]: time="2025-09-06T01:17:42.872875113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:42.875226 env[1301]: time="2025-09-06T01:17:42.875193087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:42.876555 env[1301]: time="2025-09-06T01:17:42.876465806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 01:17:42.877699 env[1301]: time="2025-09-06T01:17:42.877655280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 01:17:46.666936 env[1301]: time="2025-09-06T01:17:46.666843403Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:46.669800 env[1301]: time="2025-09-06T01:17:46.669756875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:46.671821 env[1301]: time="2025-09-06T01:17:46.671760483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:46.674645 env[1301]: time="2025-09-06T01:17:46.674610847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:46.675934 env[1301]: time="2025-09-06T01:17:46.675778948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference 
\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 01:17:46.676786 env[1301]: time="2025-09-06T01:17:46.676753075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 01:17:48.565891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount849440938.mount: Deactivated successfully. Sep 6 01:17:49.600290 env[1301]: time="2025-09-06T01:17:49.600171103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:49.602835 env[1301]: time="2025-09-06T01:17:49.602771616Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:49.605668 env[1301]: time="2025-09-06T01:17:49.605613230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:49.607974 env[1301]: time="2025-09-06T01:17:49.607924896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:49.609604 env[1301]: time="2025-09-06T01:17:49.608852436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 01:17:49.610644 env[1301]: time="2025-09-06T01:17:49.610594739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 01:17:50.427301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 6 01:17:50.427778 systemd[1]: Stopped kubelet.service. Sep 6 01:17:50.431347 systemd[1]: Starting kubelet.service... Sep 6 01:17:50.590159 systemd[1]: Started kubelet.service. Sep 6 01:17:50.692088 kubelet[1645]: E0906 01:17:50.691631 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:17:50.695442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:17:50.695790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:17:50.707395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219594864.mount: Deactivated successfully. 
Sep 6 01:17:52.185180 env[1301]: time="2025-09-06T01:17:52.185096251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.187333 env[1301]: time="2025-09-06T01:17:52.187292880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.190168 env[1301]: time="2025-09-06T01:17:52.190122273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.193834 env[1301]: time="2025-09-06T01:17:52.193751388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.194973 env[1301]: time="2025-09-06T01:17:52.194928543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 01:17:52.195829 env[1301]: time="2025-09-06T01:17:52.195785761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 01:17:52.268022 update_engine[1292]: I0906 01:17:52.267237 1292 update_attempter.cc:509] Updating boot flags... Sep 6 01:17:52.824116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555157786.mount: Deactivated successfully. Sep 6 01:17:52.840493 env[1301]: time="2025-09-06T01:17:52.840427423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.842883 env[1301]: time="2025-09-06T01:17:52.842843011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.844764 env[1301]: time="2025-09-06T01:17:52.844733881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.847323 env[1301]: time="2025-09-06T01:17:52.846712375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:52.847800 env[1301]: time="2025-09-06T01:17:52.847755652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 01:17:52.848660 env[1301]: time="2025-09-06T01:17:52.848616817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 01:17:53.618402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703543053.mount: Deactivated successfully. 
Sep 6 01:17:57.013602 env[1301]: time="2025-09-06T01:17:57.013523303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:57.016766 env[1301]: time="2025-09-06T01:17:57.016713860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:57.019231 env[1301]: time="2025-09-06T01:17:57.019200649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:57.020638 env[1301]: time="2025-09-06T01:17:57.020584738Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:17:57.022030 env[1301]: time="2025-09-06T01:17:57.021956763Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 01:18:00.927463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 6 01:18:00.927805 systemd[1]: Stopped kubelet.service. Sep 6 01:18:00.930635 systemd[1]: Starting kubelet.service... Sep 6 01:18:01.084532 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 01:18:01.084688 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 01:18:01.085115 systemd[1]: Stopped kubelet.service. Sep 6 01:18:01.088940 systemd[1]: Starting kubelet.service... Sep 6 01:18:01.131257 systemd[1]: Reloading. Sep 6 01:18:01.276151 /usr/lib/systemd/system-generators/torcx-generator[1715]: time="2025-09-06T01:18:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:18:01.276204 /usr/lib/systemd/system-generators/torcx-generator[1715]: time="2025-09-06T01:18:01Z" level=info msg="torcx already run" Sep 6 01:18:01.408542 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:18:01.408870 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:18:01.439712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:18:01.563481 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 01:18:01.563839 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 01:18:01.564466 systemd[1]: Stopped kubelet.service. Sep 6 01:18:01.567996 systemd[1]: Starting kubelet.service... Sep 6 01:18:01.821866 systemd[1]: Started kubelet.service. Sep 6 01:18:01.932207 kubelet[1777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:18:01.932815 kubelet[1777]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:18:01.932926 kubelet[1777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:18:01.933336 kubelet[1777]: I0906 01:18:01.933263 1777 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:18:02.833602 kubelet[1777]: I0906 01:18:02.833468 1777 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:18:02.833602 kubelet[1777]: I0906 01:18:02.833521 1777 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:18:02.833876 kubelet[1777]: I0906 01:18:02.833853 1777 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:18:02.861100 kubelet[1777]: E0906 01:18:02.861051 1777 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.51.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:02.863215 kubelet[1777]: I0906 01:18:02.863187 1777 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:18:02.871434 kubelet[1777]: E0906 01:18:02.871405 1777 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:18:02.871559 kubelet[1777]: I0906 01:18:02.871535 1777 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:18:02.879942 kubelet[1777]: I0906 01:18:02.879916 1777 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:18:02.881919 kubelet[1777]: I0906 01:18:02.881893 1777 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:18:02.882291 kubelet[1777]: I0906 01:18:02.882257 1777 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:18:02.882677 kubelet[1777]: I0906 01:18:02.882417 1777 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rd74e.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:18:02.883052 kubelet[1777]: I0906 01:18:02.883002 1777 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:18:02.883186 kubelet[1777]: I0906 01:18:02.883165 1777 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:18:02.883520 kubelet[1777]: I0906 01:18:02.883500 1777 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:18:02.892469 kubelet[1777]: I0906 01:18:02.892442 1777 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:18:02.892621 kubelet[1777]: I0906 01:18:02.892597 1777 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:18:02.892811 kubelet[1777]: I0906 01:18:02.892788 1777 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:18:02.892984 kubelet[1777]: I0906 01:18:02.892961 1777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:18:02.896278 kubelet[1777]: W0906 01:18:02.896199 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.51.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rd74e.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:02.896392 kubelet[1777]: E0906 01:18:02.896322 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.51.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rd74e.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:02.898278 kubelet[1777]: I0906 01:18:02.898252 1777 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:18:02.899053 kubelet[1777]: I0906 01:18:02.898987 1777 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:18:02.899302 kubelet[1777]: W0906 01:18:02.899280 1777 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 01:18:02.904352 kubelet[1777]: I0906 01:18:02.904311 1777 server.go:1274] "Started kubelet" Sep 6 01:18:02.904698 kubelet[1777]: W0906 01:18:02.904647 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.51.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:02.904864 kubelet[1777]: E0906 01:18:02.904831 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.51.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:02.918404 kubelet[1777]: I0906 01:18:02.918050 1777 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:18:02.919686 kubelet[1777]: I0906 01:18:02.919658 1777 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:18:02.920204 kubelet[1777]: I0906 01:18:02.920125 1777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:18:02.922491 kubelet[1777]: I0906 01:18:02.922422 1777 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:18:02.923656 kubelet[1777]: E0906 01:18:02.920846 1777 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.51.142:6443/api/v1/namespaces/default/events\": dial tcp 10.230.51.142:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rd74e.gb1.brightbox.com.18628cadd7b7a21f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rd74e.gb1.brightbox.com,UID:srv-rd74e.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rd74e.gb1.brightbox.com,},FirstTimestamp:2025-09-06 01:18:02.904281631 +0000 UTC m=+1.068574133,LastTimestamp:2025-09-06 01:18:02.904281631 +0000 UTC m=+1.068574133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rd74e.gb1.brightbox.com,}" Sep 6 01:18:02.930166 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 01:18:02.930774 kubelet[1777]: I0906 01:18:02.930743 1777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:18:02.931765 kubelet[1777]: I0906 01:18:02.931733 1777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:18:02.936919 kubelet[1777]: E0906 01:18:02.936881 1777 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:18:02.938236 kubelet[1777]: I0906 01:18:02.938209 1777 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:18:02.938438 kubelet[1777]: I0906 01:18:02.938414 1777 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:18:02.938555 kubelet[1777]: I0906 01:18:02.938530 1777 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:18:02.940080 kubelet[1777]: W0906 01:18:02.940034 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.51.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:02.940152 kubelet[1777]: E0906 01:18:02.940091 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.51.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:02.940681 kubelet[1777]: E0906 01:18:02.940631 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-rd74e.gb1.brightbox.com\" not found" Sep 6 01:18:02.941506 kubelet[1777]: E0906 01:18:02.941469 1777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.51.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rd74e.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.51.142:6443: connect: connection refused" interval="200ms" Sep 6 01:18:02.942777 kubelet[1777]: I0906 01:18:02.942744 1777 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:18:02.942777 kubelet[1777]: I0906 01:18:02.942766 1777 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:18:02.942920 kubelet[1777]: I0906 01:18:02.942854 1777 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:18:02.981685 kubelet[1777]: I0906 01:18:02.981619 1777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:18:02.996553 kubelet[1777]: I0906 01:18:02.996513 1777 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 01:18:02.996670 kubelet[1777]: I0906 01:18:02.996565 1777 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:18:02.996670 kubelet[1777]: I0906 01:18:02.996613 1777 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:18:02.996827 kubelet[1777]: E0906 01:18:02.996672 1777 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:18:03.003704 kubelet[1777]: W0906 01:18:03.003645 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.51.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:03.003891 kubelet[1777]: E0906 01:18:03.003856 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.51.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:03.005128 kubelet[1777]: I0906 01:18:03.005103 1777 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:18:03.005260 kubelet[1777]: I0906 01:18:03.005237 1777 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:18:03.005409 kubelet[1777]: I0906 01:18:03.005390 1777 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:18:03.007308 kubelet[1777]: I0906 01:18:03.007288 1777 policy_none.go:49] "None policy: Start" Sep 6 01:18:03.008169 kubelet[1777]: I0906 01:18:03.008144 1777 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:18:03.008278 kubelet[1777]: I0906 01:18:03.008185 1777 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:18:03.015994 kubelet[1777]: I0906 01:18:03.015963 1777 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:18:03.016267 kubelet[1777]: I0906 01:18:03.016238 1777 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:18:03.016371 kubelet[1777]: I0906 01:18:03.016272 1777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:18:03.018465 kubelet[1777]: I0906 01:18:03.018429 1777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:18:03.019885 kubelet[1777]: E0906 01:18:03.019858 1777 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-rd74e.gb1.brightbox.com\" not found" Sep 6 01:18:03.186368 kubelet[1777]: I0906 01:18:03.186320 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-kubeconfig\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.186602 kubelet[1777]: I0906 01:18:03.186571 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c45872385f25c5c189c597c4231e33f6-kubeconfig\") pod \"kube-scheduler-srv-rd74e.gb1.brightbox.com\" (UID: \"c45872385f25c5c189c597c4231e33f6\") " 
pod="kube-system/kube-scheduler-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.186794 kubelet[1777]: I0906 01:18:03.186765 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1f4f12dca152784f017897ffc167601-ca-certs\") pod \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" (UID: \"a1f4f12dca152784f017897ffc167601\") " pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.186955 kubelet[1777]: I0906 01:18:03.186927 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1f4f12dca152784f017897ffc167601-k8s-certs\") pod \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" (UID: \"a1f4f12dca152784f017897ffc167601\") " pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.187106 kubelet[1777]: I0906 01:18:03.187079 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-flexvolume-dir\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.187283 kubelet[1777]: I0906 01:18:03.187256 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-k8s-certs\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.187464 kubelet[1777]: I0906 01:18:03.187435 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1f4f12dca152784f017897ffc167601-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" (UID: \"a1f4f12dca152784f017897ffc167601\") " pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.187626 kubelet[1777]: I0906 01:18:03.187599 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-ca-certs\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.187799 kubelet[1777]: I0906 01:18:03.187771 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.188260 kubelet[1777]: E0906 01:18:03.188226 1777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.51.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rd74e.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.51.142:6443: connect: connection refused" interval="400ms" Sep 6 01:18:03.188606 kubelet[1777]: I0906 01:18:03.188581 1777 kubelet_node_status.go:72] 
"Attempting to register node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.189294 kubelet[1777]: E0906 01:18:03.189261 1777 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.51.142:6443/api/v1/nodes\": dial tcp 10.230.51.142:6443: connect: connection refused" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.393358 kubelet[1777]: I0906 01:18:03.393322 1777 kubelet_node_status.go:72] "Attempting to register node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.394132 kubelet[1777]: E0906 01:18:03.394087 1777 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.51.142:6443/api/v1/nodes\": dial tcp 10.230.51.142:6443: connect: connection refused" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.495659 env[1301]: time="2025-09-06T01:18:03.495027050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rd74e.gb1.brightbox.com,Uid:c706ad4459a23d81a4c781634acf5258,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:03.495659 env[1301]: time="2025-09-06T01:18:03.495430385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rd74e.gb1.brightbox.com,Uid:a1f4f12dca152784f017897ffc167601,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:03.499083 env[1301]: time="2025-09-06T01:18:03.499038041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rd74e.gb1.brightbox.com,Uid:c45872385f25c5c189c597c4231e33f6,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:03.589893 kubelet[1777]: E0906 01:18:03.589809 1777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.51.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rd74e.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.51.142:6443: connect: connection refused" interval="800ms" Sep 6 01:18:03.772753 kubelet[1777]: W0906 01:18:03.772502 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.51.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:03.772753 kubelet[1777]: E0906 01:18:03.772584 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.51.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:03.797794 kubelet[1777]: I0906 01:18:03.797729 1777 kubelet_node_status.go:72] "Attempting to register node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.798255 kubelet[1777]: E0906 01:18:03.798223 1777 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.51.142:6443/api/v1/nodes\": dial tcp 10.230.51.142:6443: connect: connection refused" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:03.971236 kubelet[1777]: W0906 01:18:03.971100 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.51.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:03.971236 kubelet[1777]: E0906 01:18:03.971202 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.230.51.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:04.016998 kubelet[1777]: W0906 01:18:04.016862 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.51.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rd74e.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:04.016998 kubelet[1777]: E0906 01:18:04.016950 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.51.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rd74e.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:04.131194 kubelet[1777]: W0906 01:18:04.130718 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.51.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.51.142:6443: connect: connection refused Sep 6 01:18:04.131194 kubelet[1777]: E0906 01:18:04.130775 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.51.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:04.164850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671171450.mount: Deactivated successfully. 
Sep 6 01:18:04.172536 env[1301]: time="2025-09-06T01:18:04.172490535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.179920 env[1301]: time="2025-09-06T01:18:04.179874300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.181555 env[1301]: time="2025-09-06T01:18:04.181520801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.182613 env[1301]: time="2025-09-06T01:18:04.182553744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.184538 env[1301]: time="2025-09-06T01:18:04.184498104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.186539 env[1301]: time="2025-09-06T01:18:04.186506844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.188003 env[1301]: time="2025-09-06T01:18:04.187960001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.190769 env[1301]: time="2025-09-06T01:18:04.190736430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.191838 env[1301]: time="2025-09-06T01:18:04.191804590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.193940 env[1301]: time="2025-09-06T01:18:04.193903743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.214122 env[1301]: time="2025-09-06T01:18:04.214057695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.224754 env[1301]: time="2025-09-06T01:18:04.224118005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:04.224754 env[1301]: time="2025-09-06T01:18:04.224202941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:04.224754 env[1301]: time="2025-09-06T01:18:04.224220935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:04.224957 env[1301]: time="2025-09-06T01:18:04.224795700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:04.225319 env[1301]: time="2025-09-06T01:18:04.225208585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52dc343417ac5e50c330bd8d53f60078a08439cb1dadb5a28ffb2548f9ba6515 pid=1820 runtime=io.containerd.runc.v2 Sep 6 01:18:04.257515 env[1301]: time="2025-09-06T01:18:04.255864007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:04.257515 env[1301]: time="2025-09-06T01:18:04.255910851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:04.257515 env[1301]: time="2025-09-06T01:18:04.255928012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:04.257515 env[1301]: time="2025-09-06T01:18:04.256121427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d213241853f3038f16a1ed93b617959c1b708c15caa1673884ef04eb084ec309 pid=1839 runtime=io.containerd.runc.v2 Sep 6 01:18:04.273401 env[1301]: time="2025-09-06T01:18:04.273276103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:04.273712 env[1301]: time="2025-09-06T01:18:04.273646587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:04.273877 env[1301]: time="2025-09-06T01:18:04.273836775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:04.274280 env[1301]: time="2025-09-06T01:18:04.274225686Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f438614ec46869dea6f8554f2b0e9c5a4f7f35864695e9d604df67cb6383ee8 pid=1859 runtime=io.containerd.runc.v2 Sep 6 01:18:04.355099 env[1301]: time="2025-09-06T01:18:04.354974035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rd74e.gb1.brightbox.com,Uid:c706ad4459a23d81a4c781634acf5258,Namespace:kube-system,Attempt:0,} returns sandbox id \"52dc343417ac5e50c330bd8d53f60078a08439cb1dadb5a28ffb2548f9ba6515\"" Sep 6 01:18:04.359644 env[1301]: time="2025-09-06T01:18:04.359607574Z" level=info msg="CreateContainer within sandbox \"52dc343417ac5e50c330bd8d53f60078a08439cb1dadb5a28ffb2548f9ba6515\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 01:18:04.390772 kubelet[1777]: E0906 01:18:04.390618 1777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.51.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rd74e.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.51.142:6443: connect: connection refused" interval="1.6s" Sep 6 01:18:04.392620 env[1301]: time="2025-09-06T01:18:04.392573078Z" level=info msg="CreateContainer within sandbox \"52dc343417ac5e50c330bd8d53f60078a08439cb1dadb5a28ffb2548f9ba6515\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a219273a4adfb51e2c0b972ef138359496e8540eeced0345c2c1dd43eddab852\"" Sep 6 01:18:04.393576 env[1301]: time="2025-09-06T01:18:04.393531092Z" level=info msg="StartContainer for \"a219273a4adfb51e2c0b972ef138359496e8540eeced0345c2c1dd43eddab852\"" Sep 6 01:18:04.444067 env[1301]: time="2025-09-06T01:18:04.440258953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rd74e.gb1.brightbox.com,Uid:a1f4f12dca152784f017897ffc167601,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f438614ec46869dea6f8554f2b0e9c5a4f7f35864695e9d604df67cb6383ee8\"" Sep 6 01:18:04.444067 env[1301]: time="2025-09-06T01:18:04.443131751Z" level=info msg="CreateContainer within sandbox \"7f438614ec46869dea6f8554f2b0e9c5a4f7f35864695e9d604df67cb6383ee8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 01:18:04.473353 env[1301]: time="2025-09-06T01:18:04.469774537Z" level=info msg="CreateContainer within sandbox \"7f438614ec46869dea6f8554f2b0e9c5a4f7f35864695e9d604df67cb6383ee8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"337389211775cd17cce96269a636a603d4ea008d4a30d16f436bcb0207defc87\"" Sep 6 01:18:04.473353 env[1301]: time="2025-09-06T01:18:04.470294653Z" level=info msg="StartContainer for \"337389211775cd17cce96269a636a603d4ea008d4a30d16f436bcb0207defc87\"" Sep 6 01:18:04.481272 env[1301]: time="2025-09-06T01:18:04.481229660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rd74e.gb1.brightbox.com,Uid:c45872385f25c5c189c597c4231e33f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d213241853f3038f16a1ed93b617959c1b708c15caa1673884ef04eb084ec309\"" Sep 6 01:18:04.484984 env[1301]: time="2025-09-06T01:18:04.484947540Z" level=info msg="CreateContainer within sandbox \"d213241853f3038f16a1ed93b617959c1b708c15caa1673884ef04eb084ec309\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 01:18:04.510042 env[1301]: 
time="2025-09-06T01:18:04.506404960Z" level=info msg="CreateContainer within sandbox \"d213241853f3038f16a1ed93b617959c1b708c15caa1673884ef04eb084ec309\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c813bb2a80f5ec936565bf41891a5b1387c458fbbba0a716c844269495f2744\"" Sep 6 01:18:04.510042 env[1301]: time="2025-09-06T01:18:04.507484044Z" level=info msg="StartContainer for \"5c813bb2a80f5ec936565bf41891a5b1387c458fbbba0a716c844269495f2744\"" Sep 6 01:18:04.528590 env[1301]: time="2025-09-06T01:18:04.528533135Z" level=info msg="StartContainer for \"a219273a4adfb51e2c0b972ef138359496e8540eeced0345c2c1dd43eddab852\" returns successfully" Sep 6 01:18:04.605472 kubelet[1777]: I0906 01:18:04.605423 1777 kubelet_node_status.go:72] "Attempting to register node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:04.606372 kubelet[1777]: E0906 01:18:04.606323 1777 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.51.142:6443/api/v1/nodes\": dial tcp 10.230.51.142:6443: connect: connection refused" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:04.618327 env[1301]: time="2025-09-06T01:18:04.618262752Z" level=info msg="StartContainer for \"337389211775cd17cce96269a636a603d4ea008d4a30d16f436bcb0207defc87\" returns successfully" Sep 6 01:18:04.697161 env[1301]: time="2025-09-06T01:18:04.697079676Z" level=info msg="StartContainer for \"5c813bb2a80f5ec936565bf41891a5b1387c458fbbba0a716c844269495f2744\" returns successfully" Sep 6 01:18:04.911331 kubelet[1777]: E0906 01:18:04.911170 1777 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.51.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.51.142:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:18:06.209255 kubelet[1777]: I0906 01:18:06.209216 1777 kubelet_node_status.go:72] "Attempting to register node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:07.719321 kubelet[1777]: E0906 01:18:07.719237 1777 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-rd74e.gb1.brightbox.com\" not found" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:07.766833 kubelet[1777]: E0906 01:18:07.766674 1777 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-rd74e.gb1.brightbox.com.18628cadd7b7a21f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rd74e.gb1.brightbox.com,UID:srv-rd74e.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rd74e.gb1.brightbox.com,},FirstTimestamp:2025-09-06 01:18:02.904281631 +0000 UTC m=+1.068574133,LastTimestamp:2025-09-06 01:18:02.904281631 +0000 UTC m=+1.068574133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rd74e.gb1.brightbox.com,}" Sep 6 01:18:07.814054 kubelet[1777]: I0906 01:18:07.813995 1777 kubelet_node_status.go:75] "Successfully registered node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:07.919219 kubelet[1777]: I0906 01:18:07.919178 1777 apiserver.go:52] "Watching apiserver" Sep 6 01:18:07.939128 kubelet[1777]: I0906 01:18:07.939096 1777 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:18:08.938447 kubelet[1777]: W0906 01:18:08.938373 1777 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:18:09.853300 systemd[1]: Reloading. Sep 6 01:18:09.939510 /usr/lib/systemd/system-generators/torcx-generator[2068]: time="2025-09-06T01:18:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:18:09.939555 /usr/lib/systemd/system-generators/torcx-generator[2068]: time="2025-09-06T01:18:09Z" level=info msg="torcx already run" Sep 6 01:18:10.063650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:18:10.063942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:18:10.095299 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:18:10.218856 systemd[1]: Stopping kubelet.service... Sep 6 01:18:10.242139 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:18:10.242963 systemd[1]: Stopped kubelet.service. Sep 6 01:18:10.247283 systemd[1]: Starting kubelet.service... Sep 6 01:18:11.539642 systemd[1]: Started kubelet.service. Sep 6 01:18:11.653765 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:18:11.653765 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:18:11.653765 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:18:11.655065 kubelet[2130]: I0906 01:18:11.654872 2130 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:18:11.673328 kubelet[2130]: I0906 01:18:11.673256 2130 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:18:11.673328 kubelet[2130]: I0906 01:18:11.673302 2130 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:18:11.673861 kubelet[2130]: I0906 01:18:11.673704 2130 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:18:11.676659 kubelet[2130]: I0906 01:18:11.676614 2130 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
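The warnings.go:70 message above flags that the node name srv-rd74e.gb1.brightbox.com, embedded in the static pod names, contains dots and is therefore not a valid RFC 1123 DNS label. A sketch of that label check; the regex below illustrates the rule and is not the exact validation code the kubelet uses:

```python
import re

# RFC 1123 DNS label: 1-63 lowercase alphanumerics or '-', starting and
# ending with an alphanumeric character, and containing no dots.
_DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def is_dns_label(name: str) -> bool:
    return bool(_DNS_LABEL.match(name))

print(is_dns_label("srv-rd74e.gb1.brightbox.com"))  # False -> triggers the warning
print(is_dns_label("srv-rd74e"))                    # True
```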
Sep 6 01:18:11.691660 kubelet[2130]: I0906 01:18:11.691618 2130 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:18:11.704945 sudo[2143]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 01:18:11.705633 sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 01:18:11.714221 kubelet[2130]: E0906 01:18:11.713113 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:18:11.714221 kubelet[2130]: I0906 01:18:11.713171 2130 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:18:11.723585 kubelet[2130]: I0906 01:18:11.723525 2130 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 01:18:11.724360 kubelet[2130]: I0906 01:18:11.724337 2130 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:18:11.724772 kubelet[2130]: I0906 01:18:11.724721 2130 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:18:11.725278 kubelet[2130]: I0906 01:18:11.724894 2130 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rd74e.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:18:11.725512 kubelet[2130]: I0906 01:18:11.725489 2130 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:18:11.725645 kubelet[2130]: I0906 01:18:11.725624 2130 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:18:11.725790 kubelet[2130]: I0906 01:18:11.725768 2130 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:18:11.726112 kubelet[2130]: I0906 01:18:11.726092 2130 kubelet.go:408] 
"Attempting to sync node with API server" Sep 6 01:18:11.726343 kubelet[2130]: I0906 01:18:11.726322 2130 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:18:11.727032 kubelet[2130]: I0906 01:18:11.726492 2130 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:18:11.727167 kubelet[2130]: I0906 01:18:11.727146 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:18:11.731173 kubelet[2130]: I0906 01:18:11.731147 2130 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:18:11.731829 kubelet[2130]: I0906 01:18:11.731805 2130 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:18:11.732788 kubelet[2130]: I0906 01:18:11.732767 2130 server.go:1274] "Started kubelet" Sep 6 01:18:11.747759 kubelet[2130]: I0906 01:18:11.747714 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:18:11.750777 kubelet[2130]: I0906 01:18:11.750740 2130 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:18:11.795183 kubelet[2130]: I0906 01:18:11.794854 2130 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:18:11.797840 kubelet[2130]: I0906 01:18:11.750921 2130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:18:11.797840 kubelet[2130]: I0906 01:18:11.796991 2130 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:18:11.797840 kubelet[2130]: I0906 01:18:11.761764 2130 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:18:11.798669 kubelet[2130]: I0906 01:18:11.761687 2130 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:18:11.798760 kubelet[2130]: E0906 01:18:11.762751 2130 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-rd74e.gb1.brightbox.com\" not found" Sep 6 01:18:11.799541 kubelet[2130]: I0906 01:18:11.751569 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:18:11.801367 kubelet[2130]: I0906 01:18:11.800965 2130 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:18:11.802511 kubelet[2130]: I0906 01:18:11.802483 2130 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:18:11.806116 kubelet[2130]: I0906 01:18:11.802967 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:18:11.806116 kubelet[2130]: I0906 01:18:11.804539 2130 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 01:18:11.806116 kubelet[2130]: I0906 01:18:11.804568 2130 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:18:11.806116 kubelet[2130]: I0906 01:18:11.804607 2130 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:18:11.806116 kubelet[2130]: E0906 01:18:11.804675 2130 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:18:11.806698 kubelet[2130]: I0906 01:18:11.806647 2130 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:18:11.814074 kubelet[2130]: E0906 01:18:11.813750 2130 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:18:11.826394 kubelet[2130]: I0906 01:18:11.826347 2130 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:18:11.906071 kubelet[2130]: E0906 01:18:11.905269 2130 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 01:18:11.915487 kubelet[2130]: I0906 01:18:11.915435 2130 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:18:11.915722 kubelet[2130]: I0906 01:18:11.915696 2130 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:18:11.915844 kubelet[2130]: I0906 01:18:11.915823 2130 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:18:11.916209 kubelet[2130]: I0906 01:18:11.916184 2130 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 01:18:11.916381 kubelet[2130]: I0906 01:18:11.916335 2130 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 01:18:11.916518 kubelet[2130]: I0906 01:18:11.916495 2130 policy_none.go:49] "None policy: Start" Sep 6 01:18:11.917561 kubelet[2130]: I0906 01:18:11.917529 2130 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:18:11.917759 kubelet[2130]: I0906 01:18:11.917737 2130 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:18:11.918171 kubelet[2130]: I0906 01:18:11.918000 2130 state_mem.go:75] "Updated machine memory state" Sep 6 01:18:11.920135 kubelet[2130]: I0906 01:18:11.920111 2130 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:18:11.920491 kubelet[2130]: I0906 01:18:11.920469 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:18:11.920632 kubelet[2130]: I0906 01:18:11.920590 2130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:18:11.922805 kubelet[2130]: I0906 01:18:11.922783 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:18:12.050471 kubelet[2130]: I0906 01:18:12.050350 2130 kubelet_node_status.go:72] "Attempting to register node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.066146 kubelet[2130]: I0906 01:18:12.066104 2130 kubelet_node_status.go:111] "Node was previously registered" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.066469 kubelet[2130]: I0906 01:18:12.066448 2130 kubelet_node_status.go:75] "Successfully registered node" node="srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.118610 kubelet[2130]: W0906 01:18:12.118545 2130 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:18:12.119303 kubelet[2130]: W0906 01:18:12.119270 2130 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:18:12.119497 kubelet[2130]: W0906 01:18:12.119461 2130 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:18:12.119773 kubelet[2130]: E0906 01:18:12.119743 2130 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.202142 kubelet[2130]: I0906 01:18:12.202098 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1f4f12dca152784f017897ffc167601-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" (UID: \"a1f4f12dca152784f017897ffc167601\") " pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.202488 kubelet[2130]: I0906 01:18:12.202459 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-flexvolume-dir\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.202696 kubelet[2130]: I0906 01:18:12.202657 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.202879 kubelet[2130]: I0906 01:18:12.202835 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c45872385f25c5c189c597c4231e33f6-kubeconfig\") pod \"kube-scheduler-srv-rd74e.gb1.brightbox.com\" (UID: \"c45872385f25c5c189c597c4231e33f6\") " pod="kube-system/kube-scheduler-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.203097 kubelet[2130]: I0906 01:18:12.203060 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1f4f12dca152784f017897ffc167601-ca-certs\") pod \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" (UID: \"a1f4f12dca152784f017897ffc167601\") " pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.203292 kubelet[2130]: I0906 01:18:12.203253 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1f4f12dca152784f017897ffc167601-k8s-certs\") pod \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" (UID: \"a1f4f12dca152784f017897ffc167601\") " pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.203480 kubelet[2130]: I0906 01:18:12.203446 2130 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-ca-certs\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.203669 kubelet[2130]: I0906 01:18:12.203642 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-k8s-certs\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.203847 kubelet[2130]: I0906 01:18:12.203809 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c706ad4459a23d81a4c781634acf5258-kubeconfig\") pod \"kube-controller-manager-srv-rd74e.gb1.brightbox.com\" (UID: \"c706ad4459a23d81a4c781634acf5258\") " pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.632042 sudo[2143]: pam_unix(sudo:session): session closed for user root Sep 6 01:18:12.744986 kubelet[2130]: I0906 01:18:12.744896 2130 apiserver.go:52] "Watching apiserver" Sep 6 01:18:12.797637 kubelet[2130]: I0906 01:18:12.797574 2130 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:18:12.867761 kubelet[2130]: W0906 01:18:12.866240 2130 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:18:12.867761 kubelet[2130]: E0906 01:18:12.866323 2130 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-rd74e.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" Sep 6 01:18:12.884932 kubelet[2130]: I0906 01:18:12.884746 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-rd74e.gb1.brightbox.com" podStartSLOduration=0.884694042 podStartE2EDuration="884.694042ms" podCreationTimestamp="2025-09-06 01:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:18:12.884242276 +0000 UTC m=+1.323707235" watchObservedRunningTime="2025-09-06 01:18:12.884694042 +0000 UTC m=+1.324158988" Sep 6 01:18:12.907477 kubelet[2130]: I0906 01:18:12.907183 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-rd74e.gb1.brightbox.com" podStartSLOduration=0.907161871 podStartE2EDuration="907.161871ms" podCreationTimestamp="2025-09-06 01:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:18:12.895399186 +0000 UTC m=+1.334864162" watchObservedRunningTime="2025-09-06 01:18:12.907161871 +0000 UTC m=+1.346626828" Sep 6 01:18:12.935627 kubelet[2130]: I0906 01:18:12.935525 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-rd74e.gb1.brightbox.com" podStartSLOduration=4.935451648 podStartE2EDuration="4.935451648s" podCreationTimestamp="2025-09-06 01:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:18:12.908068644 +0000 UTC m=+1.347533609" watchObservedRunningTime="2025-09-06 01:18:12.935451648 +0000 UTC m=+1.374916599" Sep 6 01:18:14.659818 sudo[1483]: pam_unix(sudo:session): session closed for user root Sep 6 01:18:14.806933 sshd[1464]: pam_unix(sshd:session): session closed for user core Sep 6 01:18:14.810943 systemd[1]: sshd@6-10.230.51.142:22-139.178.89.65:45576.service: Deactivated successfully. Sep 6 01:18:14.812146 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 01:18:14.814226 systemd-logind[1289]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:18:14.817278 systemd-logind[1289]: Removed session 7. Sep 6 01:18:15.104415 kubelet[2130]: I0906 01:18:15.104344 2130 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 01:18:15.105111 env[1301]: time="2025-09-06T01:18:15.104807345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:18:15.105705 kubelet[2130]: I0906 01:18:15.105679 2130 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 01:18:16.292742 kubelet[2130]: I0906 01:18:16.292672 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4p4s\" (UniqueName: \"kubernetes.io/projected/ff9eb03d-08f3-4692-b100-678b586777b8-kube-api-access-m4p4s\") pod \"kube-proxy-kb4x2\" (UID: \"ff9eb03d-08f3-4692-b100-678b586777b8\") " pod="kube-system/kube-proxy-kb4x2" Sep 6 01:18:16.292742 kubelet[2130]: I0906 01:18:16.292745 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff9eb03d-08f3-4692-b100-678b586777b8-kube-proxy\") pod \"kube-proxy-kb4x2\" (UID: \"ff9eb03d-08f3-4692-b100-678b586777b8\") " pod="kube-system/kube-proxy-kb4x2" Sep 6 01:18:16.293667 kubelet[2130]: I0906 01:18:16.292798 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff9eb03d-08f3-4692-b100-678b586777b8-xtables-lock\") pod \"kube-proxy-kb4x2\" (UID: \"ff9eb03d-08f3-4692-b100-678b586777b8\") " pod="kube-system/kube-proxy-kb4x2" Sep 6 01:18:16.293667 kubelet[2130]: I0906 01:18:16.292827 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff9eb03d-08f3-4692-b100-678b586777b8-lib-modules\") pod \"kube-proxy-kb4x2\" (UID: \"ff9eb03d-08f3-4692-b100-678b586777b8\") " pod="kube-system/kube-proxy-kb4x2" Sep 6 01:18:16.393547 kubelet[2130]: I0906 01:18:16.393469 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-bpf-maps\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.393856 kubelet[2130]: I0906 01:18:16.393828 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d32a918-a107-48b7-9fdd-3249005ff46c-clustermesh-secrets\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394004 kubelet[2130]: I0906 
01:18:16.393977 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-config-path\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394267 kubelet[2130]: I0906 01:18:16.394224 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-net\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394351 kubelet[2130]: I0906 01:18:16.394305 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-kernel\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394351 kubelet[2130]: I0906 01:18:16.394337 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q59kk\" (UniqueName: \"kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-kube-api-access-q59kk\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394501 kubelet[2130]: I0906 01:18:16.394455 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-hubble-tls\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394576 kubelet[2130]: I0906 01:18:16.394513 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cni-path\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394576 kubelet[2130]: I0906 01:18:16.394540 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-lib-modules\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394689 kubelet[2130]: I0906 01:18:16.394570 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-cgroup\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394689 kubelet[2130]: I0906 01:18:16.394639 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-xtables-lock\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394689 kubelet[2130]: I0906 01:18:16.394666 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-etc-cni-netd\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394855 kubelet[2130]: I0906 01:18:16.394690 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdknj\" (UniqueName: \"kubernetes.io/projected/6d7ffeff-9c49-4c54-8066-0577dce67b70-kube-api-access-xdknj\") pod \"cilium-operator-5d85765b45-vj46w\" (UID: \"6d7ffeff-9c49-4c54-8066-0577dce67b70\") " pod="kube-system/cilium-operator-5d85765b45-vj46w" Sep 6 01:18:16.394855 kubelet[2130]: I0906 01:18:16.394720 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-run\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394855 kubelet[2130]: I0906 01:18:16.394744 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-hostproc\") pod \"cilium-pml2r\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " pod="kube-system/cilium-pml2r" Sep 6 01:18:16.394855 kubelet[2130]: I0906 01:18:16.394768 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d7ffeff-9c49-4c54-8066-0577dce67b70-cilium-config-path\") pod \"cilium-operator-5d85765b45-vj46w\" (UID: \"6d7ffeff-9c49-4c54-8066-0577dce67b70\") " pod="kube-system/cilium-operator-5d85765b45-vj46w" Sep 6 01:18:16.402503 kubelet[2130]: I0906 01:18:16.402459 2130 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 01:18:16.495896 env[1301]: time="2025-09-06T01:18:16.495198938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kb4x2,Uid:ff9eb03d-08f3-4692-b100-678b586777b8,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:16.537934 env[1301]: time="2025-09-06T01:18:16.536961734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:16.544591 env[1301]: time="2025-09-06T01:18:16.543568125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:16.544900 env[1301]: time="2025-09-06T01:18:16.544838405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:16.545897 env[1301]: time="2025-09-06T01:18:16.545846216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/068a71f82b3531fc29866f3047e4077abb2f2d6f4f8fef884dc5340817bc1d4d pid=2215 runtime=io.containerd.runc.v2 Sep 6 01:18:16.612733 env[1301]: time="2025-09-06T01:18:16.612652712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kb4x2,Uid:ff9eb03d-08f3-4692-b100-678b586777b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"068a71f82b3531fc29866f3047e4077abb2f2d6f4f8fef884dc5340817bc1d4d\"" Sep 6 01:18:16.617897 env[1301]: time="2025-09-06T01:18:16.617833312Z" level=info msg="CreateContainer within sandbox \"068a71f82b3531fc29866f3047e4077abb2f2d6f4f8fef884dc5340817bc1d4d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 01:18:16.638461 env[1301]: time="2025-09-06T01:18:16.638357619Z" level=info msg="CreateContainer within sandbox \"068a71f82b3531fc29866f3047e4077abb2f2d6f4f8fef884dc5340817bc1d4d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2569f9541d0e372a5124273800f36fdfba61ef4922eb098cda7a3364ed64ccbd\"" Sep 6 01:18:16.642924 env[1301]: time="2025-09-06T01:18:16.642870375Z" level=info msg="StartContainer for \"2569f9541d0e372a5124273800f36fdfba61ef4922eb098cda7a3364ed64ccbd\"" Sep 6 01:18:16.743299 env[1301]: time="2025-09-06T01:18:16.743238636Z" level=info msg="StartContainer for \"2569f9541d0e372a5124273800f36fdfba61ef4922eb098cda7a3364ed64ccbd\" returns successfully" Sep 6 01:18:16.799897 env[1301]: time="2025-09-06T01:18:16.799712840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pml2r,Uid:8d32a918-a107-48b7-9fdd-3249005ff46c,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:16.811879 env[1301]: time="2025-09-06T01:18:16.811813107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vj46w,Uid:6d7ffeff-9c49-4c54-8066-0577dce67b70,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:16.828430 env[1301]: time="2025-09-06T01:18:16.828296825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:16.828430 env[1301]: time="2025-09-06T01:18:16.828374574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:16.828430 env[1301]: time="2025-09-06T01:18:16.828392884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:16.829308 env[1301]: time="2025-09-06T01:18:16.829114790Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023 pid=2292 runtime=io.containerd.runc.v2 Sep 6 01:18:16.849680 env[1301]: time="2025-09-06T01:18:16.849512179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:16.849680 env[1301]: time="2025-09-06T01:18:16.849575623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:16.849680 env[1301]: time="2025-09-06T01:18:16.849593782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:16.851254 env[1301]: time="2025-09-06T01:18:16.851055887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3 pid=2313 runtime=io.containerd.runc.v2 Sep 6 01:18:16.918721 kubelet[2130]: I0906 01:18:16.918629 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kb4x2" podStartSLOduration=0.918582531 podStartE2EDuration="918.582531ms" podCreationTimestamp="2025-09-06 01:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:18:16.918249929 +0000 UTC m=+5.357714905" watchObservedRunningTime="2025-09-06 01:18:16.918582531 +0000 UTC m=+5.358047483" Sep 6 01:18:16.955749 env[1301]: time="2025-09-06T01:18:16.955688168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pml2r,Uid:8d32a918-a107-48b7-9fdd-3249005ff46c,Namespace:kube-system,Attempt:0,} returns sandbox id \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\"" Sep 6 01:18:16.974042 env[1301]: time="2025-09-06T01:18:16.971365496Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 01:18:17.017898 env[1301]: time="2025-09-06T01:18:17.017840097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vj46w,Uid:6d7ffeff-9c49-4c54-8066-0577dce67b70,Namespace:kube-system,Attempt:0,} returns sandbox id \"6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3\"" Sep 6 01:18:24.607785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3639844329.mount: Deactivated successfully. 
Sep 6 01:18:29.123784 env[1301]: time="2025-09-06T01:18:29.123688131Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:29.126719 env[1301]: time="2025-09-06T01:18:29.126663536Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:29.128931 env[1301]: time="2025-09-06T01:18:29.128900940Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:29.129972 env[1301]: time="2025-09-06T01:18:29.129932062Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 01:18:29.132753 env[1301]: time="2025-09-06T01:18:29.132447760Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 01:18:29.134169 env[1301]: time="2025-09-06T01:18:29.134134834Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:18:29.154118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759938458.mount: Deactivated successfully. Sep 6 01:18:29.165540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562605283.mount: Deactivated successfully. 
Sep 6 01:18:29.170219 env[1301]: time="2025-09-06T01:18:29.170155145Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\"" Sep 6 01:18:29.173311 env[1301]: time="2025-09-06T01:18:29.172425025Z" level=info msg="StartContainer for \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\"" Sep 6 01:18:29.257273 env[1301]: time="2025-09-06T01:18:29.257113059Z" level=info msg="StartContainer for \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\" returns successfully" Sep 6 01:18:29.402877 env[1301]: time="2025-09-06T01:18:29.402155728Z" level=info msg="shim disconnected" id=e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da Sep 6 01:18:29.403267 env[1301]: time="2025-09-06T01:18:29.403226273Z" level=warning msg="cleaning up after shim disconnected" id=e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da namespace=k8s.io Sep 6 01:18:29.403403 env[1301]: time="2025-09-06T01:18:29.403374778Z" level=info msg="cleaning up dead shim" Sep 6 01:18:29.414266 env[1301]: time="2025-09-06T01:18:29.414188078Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:18:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2550 runtime=io.containerd.runc.v2\n" Sep 6 01:18:29.931116 env[1301]: time="2025-09-06T01:18:29.929838267Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:18:29.952413 env[1301]: time="2025-09-06T01:18:29.952315106Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\"" Sep 6 01:18:29.954052 env[1301]: time="2025-09-06T01:18:29.953585580Z" level=info msg="StartContainer for \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\"" Sep 6 01:18:30.028150 env[1301]: time="2025-09-06T01:18:30.028084932Z" level=info msg="StartContainer for \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\" returns successfully" Sep 6 01:18:30.045409 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:18:30.046141 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:18:30.046529 systemd[1]: Stopping systemd-sysctl.service... Sep 6 01:18:30.050378 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:18:30.065148 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 01:18:30.082758 env[1301]: time="2025-09-06T01:18:30.082700048Z" level=info msg="shim disconnected" id=2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9 Sep 6 01:18:30.082758 env[1301]: time="2025-09-06T01:18:30.082758364Z" level=warning msg="cleaning up after shim disconnected" id=2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9 namespace=k8s.io Sep 6 01:18:30.083147 env[1301]: time="2025-09-06T01:18:30.082773444Z" level=info msg="cleaning up dead shim" Sep 6 01:18:30.094120 env[1301]: time="2025-09-06T01:18:30.094058051Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:18:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2616 runtime=io.containerd.runc.v2\n" Sep 6 01:18:30.149805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da-rootfs.mount: Deactivated successfully. Sep 6 01:18:30.951906 env[1301]: time="2025-09-06T01:18:30.951810427Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:18:30.984500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129541325.mount: Deactivated successfully. Sep 6 01:18:30.994182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393699740.mount: Deactivated successfully. Sep 6 01:18:31.005763 env[1301]: time="2025-09-06T01:18:31.005687635Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\"" Sep 6 01:18:31.009411 env[1301]: time="2025-09-06T01:18:31.009144955Z" level=info msg="StartContainer for \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\"" Sep 6 01:18:31.114139 env[1301]: time="2025-09-06T01:18:31.114004872Z" level=info msg="StartContainer for \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\" returns successfully" Sep 6 01:18:31.165696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8-rootfs.mount: Deactivated successfully. Sep 6 01:18:31.176867 env[1301]: time="2025-09-06T01:18:31.176809094Z" level=info msg="shim disconnected" id=5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8 Sep 6 01:18:31.177121 env[1301]: time="2025-09-06T01:18:31.176867940Z" level=warning msg="cleaning up after shim disconnected" id=5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8 namespace=k8s.io Sep 6 01:18:31.177121 env[1301]: time="2025-09-06T01:18:31.176885038Z" level=info msg="cleaning up dead shim" Sep 6 01:18:31.206784 env[1301]: time="2025-09-06T01:18:31.206095400Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:18:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2674 runtime=io.containerd.runc.v2\n" Sep 6 01:18:31.964265 env[1301]: time="2025-09-06T01:18:31.964204271Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 01:18:31.984472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872291487.mount: Deactivated successfully. 
Sep 6 01:18:32.001339 env[1301]: time="2025-09-06T01:18:32.001275480Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\"" Sep 6 01:18:32.003206 env[1301]: time="2025-09-06T01:18:32.003164254Z" level=info msg="StartContainer for \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\"" Sep 6 01:18:32.091787 env[1301]: time="2025-09-06T01:18:32.091733965Z" level=info msg="StartContainer for \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\" returns successfully" Sep 6 01:18:32.149442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737497781.mount: Deactivated successfully. Sep 6 01:18:32.244310 env[1301]: time="2025-09-06T01:18:32.243989997Z" level=info msg="shim disconnected" id=19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12 Sep 6 01:18:32.244310 env[1301]: time="2025-09-06T01:18:32.244213838Z" level=warning msg="cleaning up after shim disconnected" id=19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12 namespace=k8s.io Sep 6 01:18:32.244310 env[1301]: time="2025-09-06T01:18:32.244233054Z" level=info msg="cleaning up dead shim" Sep 6 01:18:32.270826 env[1301]: time="2025-09-06T01:18:32.270741800Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:18:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2728 runtime=io.containerd.runc.v2\n" Sep 6 01:18:32.275644 env[1301]: time="2025-09-06T01:18:32.275575142Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:32.277186 env[1301]: time="2025-09-06T01:18:32.277148874Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:32.279090 env[1301]: time="2025-09-06T01:18:32.279054369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:18:32.279922 env[1301]: time="2025-09-06T01:18:32.279881055Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 01:18:32.284831 env[1301]: time="2025-09-06T01:18:32.284451211Z" level=info msg="CreateContainer within sandbox \"6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 01:18:32.298003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695717636.mount: Deactivated successfully. 
Sep 6 01:18:32.312699 env[1301]: time="2025-09-06T01:18:32.312637610Z" level=info msg="CreateContainer within sandbox \"6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\"" Sep 6 01:18:32.314555 env[1301]: time="2025-09-06T01:18:32.314371722Z" level=info msg="StartContainer for \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\"" Sep 6 01:18:32.392888 env[1301]: time="2025-09-06T01:18:32.392827219Z" level=info msg="StartContainer for \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\" returns successfully" Sep 6 01:18:32.957603 env[1301]: time="2025-09-06T01:18:32.957529982Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:18:32.979678 env[1301]: time="2025-09-06T01:18:32.979604150Z" level=info msg="CreateContainer within sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\"" Sep 6 01:18:32.980878 env[1301]: time="2025-09-06T01:18:32.980845338Z" level=info msg="StartContainer for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\"" Sep 6 01:18:33.181933 env[1301]: time="2025-09-06T01:18:33.181863875Z" level=info msg="StartContainer for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" returns successfully" Sep 6 01:18:33.203051 kubelet[2130]: I0906 01:18:33.200377 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vj46w" podStartSLOduration=1.939444402 podStartE2EDuration="17.200353499s" podCreationTimestamp="2025-09-06 01:18:16 +0000 UTC" firstStartedPulling="2025-09-06 01:18:17.020803168 +0000 UTC m=+5.460268120" lastFinishedPulling="2025-09-06 01:18:32.281712265 +0000 UTC m=+20.721177217" observedRunningTime="2025-09-06 01:18:33.083551175 +0000 UTC m=+21.523016138" watchObservedRunningTime="2025-09-06 01:18:33.200353499 +0000 UTC m=+21.639818450" Sep 6 01:18:33.242401 systemd[1]: run-containerd-runc-k8s.io-30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351-runc.az9fz7.mount: Deactivated successfully. 
Sep 6 01:18:33.710909 kubelet[2130]: I0906 01:18:33.710862 2130 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 01:18:33.860131 kubelet[2130]: I0906 01:18:33.860061 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61b9f66b-d75d-4f69-b7e1-de402ca1b73a-config-volume\") pod \"coredns-7c65d6cfc9-p9tfj\" (UID: \"61b9f66b-d75d-4f69-b7e1-de402ca1b73a\") " pod="kube-system/coredns-7c65d6cfc9-p9tfj" Sep 6 01:18:33.860439 kubelet[2130]: I0906 01:18:33.860403 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmtqt\" (UniqueName: \"kubernetes.io/projected/91c61cee-896b-4433-90f6-58c3b67af13b-kube-api-access-nmtqt\") pod \"coredns-7c65d6cfc9-vl8ng\" (UID: \"91c61cee-896b-4433-90f6-58c3b67af13b\") " pod="kube-system/coredns-7c65d6cfc9-vl8ng" Sep 6 01:18:33.860625 kubelet[2130]: I0906 01:18:33.860598 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5w9q\" (UniqueName: \"kubernetes.io/projected/61b9f66b-d75d-4f69-b7e1-de402ca1b73a-kube-api-access-w5w9q\") pod \"coredns-7c65d6cfc9-p9tfj\" (UID: \"61b9f66b-d75d-4f69-b7e1-de402ca1b73a\") " pod="kube-system/coredns-7c65d6cfc9-p9tfj" Sep 6 01:18:33.860814 kubelet[2130]: I0906 01:18:33.860750 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c61cee-896b-4433-90f6-58c3b67af13b-config-volume\") pod \"coredns-7c65d6cfc9-vl8ng\" (UID: \"91c61cee-896b-4433-90f6-58c3b67af13b\") " pod="kube-system/coredns-7c65d6cfc9-vl8ng" Sep 6 01:18:34.038224 kubelet[2130]: I0906 01:18:34.037972 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pml2r" podStartSLOduration=5.867317019 podStartE2EDuration="18.037951146s" podCreationTimestamp="2025-09-06 01:18:16 +0000 UTC" firstStartedPulling="2025-09-06 01:18:16.961341648 +0000 UTC m=+5.400806600" lastFinishedPulling="2025-09-06 01:18:29.131975766 +0000 UTC m=+17.571440727" observedRunningTime="2025-09-06 01:18:34.03524291 +0000 UTC m=+22.474707887" watchObservedRunningTime="2025-09-06 01:18:34.037951146 +0000 UTC m=+22.477416105" Sep 6 01:18:34.062068 env[1301]: time="2025-09-06T01:18:34.061290172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl8ng,Uid:91c61cee-896b-4433-90f6-58c3b67af13b,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:34.067435 env[1301]: time="2025-09-06T01:18:34.067399987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p9tfj,Uid:61b9f66b-d75d-4f69-b7e1-de402ca1b73a,Namespace:kube-system,Attempt:0,}" Sep 6 01:18:36.146063 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 01:18:36.149307 systemd-networkd[1084]: cilium_host: Link UP Sep 6 01:18:36.149546 systemd-networkd[1084]: cilium_net: Link UP Sep 6 01:18:36.155449 systemd-networkd[1084]: cilium_net: Gained carrier Sep 6 01:18:36.156113 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 01:18:36.159562 systemd-networkd[1084]: cilium_host: Gained carrier Sep 6 01:18:36.160337 systemd-networkd[1084]: cilium_net: Gained IPv6LL Sep 6 01:18:36.161099 systemd-networkd[1084]: cilium_host: Gained IPv6LL Sep 6 01:18:36.316460 systemd-networkd[1084]: cilium_vxlan: Link UP Sep 6 01:18:36.316470 systemd-networkd[1084]: 
cilium_vxlan: Gained carrier Sep 6 01:18:36.874077 kernel: NET: Registered PF_ALG protocol family Sep 6 01:18:37.975599 systemd-networkd[1084]: lxc_health: Link UP Sep 6 01:18:37.986040 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:18:37.989401 systemd-networkd[1084]: lxc_health: Gained carrier Sep 6 01:18:38.099181 systemd-networkd[1084]: cilium_vxlan: Gained IPv6LL Sep 6 01:18:38.239171 systemd-networkd[1084]: lxca1c39cdaa193: Link UP Sep 6 01:18:38.244086 kernel: eth0: renamed from tmpcd8b2 Sep 6 01:18:38.251420 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca1c39cdaa193: link becomes ready Sep 6 01:18:38.250640 systemd-networkd[1084]: lxca1c39cdaa193: Gained carrier Sep 6 01:18:38.252512 systemd-networkd[1084]: lxcf3b4ef7f9c3c: Link UP Sep 6 01:18:38.269063 kernel: eth0: renamed from tmp8a1c3 Sep 6 01:18:38.281123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf3b4ef7f9c3c: link becomes ready Sep 6 01:18:38.285617 systemd-networkd[1084]: lxcf3b4ef7f9c3c: Gained carrier Sep 6 01:18:39.315266 systemd-networkd[1084]: lxc_health: Gained IPv6LL Sep 6 01:18:39.699730 systemd-networkd[1084]: lxcf3b4ef7f9c3c: Gained IPv6LL Sep 6 01:18:40.166880 systemd-networkd[1084]: lxca1c39cdaa193: Gained IPv6LL Sep 6 01:18:43.826062 env[1301]: time="2025-09-06T01:18:43.823726437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:43.826062 env[1301]: time="2025-09-06T01:18:43.823873643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:43.826062 env[1301]: time="2025-09-06T01:18:43.823966888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:43.826062 env[1301]: time="2025-09-06T01:18:43.824258464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd8b26f4261d7223adb4e8adf3cd9e29f5289fc977e75a93f457b2f5d25f1050 pid=3309 runtime=io.containerd.runc.v2 Sep 6 01:18:43.838702 env[1301]: time="2025-09-06T01:18:43.838609266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:18:43.839002 env[1301]: time="2025-09-06T01:18:43.838960487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:18:43.839478 env[1301]: time="2025-09-06T01:18:43.839171293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:18:43.839659 env[1301]: time="2025-09-06T01:18:43.839380718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a1c34955ae2f4d6fbf4d19a4eec54c2b0e7d928f24abe6bb96780a3fe14dca4 pid=3324 runtime=io.containerd.runc.v2 Sep 6 01:18:43.921143 systemd[1]: run-containerd-runc-k8s.io-8a1c34955ae2f4d6fbf4d19a4eec54c2b0e7d928f24abe6bb96780a3fe14dca4-runc.Fbx6K0.mount: Deactivated successfully. 
Sep 6 01:18:44.029239 env[1301]: time="2025-09-06T01:18:44.029141732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vl8ng,Uid:91c61cee-896b-4433-90f6-58c3b67af13b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1c34955ae2f4d6fbf4d19a4eec54c2b0e7d928f24abe6bb96780a3fe14dca4\"" Sep 6 01:18:44.036125 env[1301]: time="2025-09-06T01:18:44.036081260Z" level=info msg="CreateContainer within sandbox \"8a1c34955ae2f4d6fbf4d19a4eec54c2b0e7d928f24abe6bb96780a3fe14dca4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:18:44.065375 env[1301]: time="2025-09-06T01:18:44.065301887Z" level=info msg="CreateContainer within sandbox \"8a1c34955ae2f4d6fbf4d19a4eec54c2b0e7d928f24abe6bb96780a3fe14dca4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eba380ffa837580edafc0d09144c41baed733fd229aa8621bd5ee666dc583f6f\"" Sep 6 01:18:44.066559 env[1301]: time="2025-09-06T01:18:44.066523293Z" level=info msg="StartContainer for \"eba380ffa837580edafc0d09144c41baed733fd229aa8621bd5ee666dc583f6f\"" Sep 6 01:18:44.069549 env[1301]: time="2025-09-06T01:18:44.069512862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p9tfj,Uid:61b9f66b-d75d-4f69-b7e1-de402ca1b73a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd8b26f4261d7223adb4e8adf3cd9e29f5289fc977e75a93f457b2f5d25f1050\"" Sep 6 01:18:44.075540 env[1301]: time="2025-09-06T01:18:44.075484006Z" level=info msg="CreateContainer within sandbox \"cd8b26f4261d7223adb4e8adf3cd9e29f5289fc977e75a93f457b2f5d25f1050\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:18:44.096771 env[1301]: time="2025-09-06T01:18:44.090207503Z" level=info msg="CreateContainer within sandbox \"cd8b26f4261d7223adb4e8adf3cd9e29f5289fc977e75a93f457b2f5d25f1050\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92a797fdec2ebb2444b35406ca5c2825983327cded35d7527962da2a1b1e8fbd\"" Sep 6 01:18:44.101447 env[1301]: time="2025-09-06T01:18:44.101378692Z" level=info msg="StartContainer for \"92a797fdec2ebb2444b35406ca5c2825983327cded35d7527962da2a1b1e8fbd\"" Sep 6 01:18:44.161706 env[1301]: time="2025-09-06T01:18:44.161632496Z" level=info msg="StartContainer for \"eba380ffa837580edafc0d09144c41baed733fd229aa8621bd5ee666dc583f6f\" returns successfully" Sep 6 01:18:44.202273 env[1301]: time="2025-09-06T01:18:44.202216240Z" level=info msg="StartContainer for \"92a797fdec2ebb2444b35406ca5c2825983327cded35d7527962da2a1b1e8fbd\" returns successfully" Sep 6 01:18:45.031511 kubelet[2130]: I0906 01:18:45.031393 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p9tfj" podStartSLOduration=29.031267007 podStartE2EDuration="29.031267007s" podCreationTimestamp="2025-09-06 01:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:18:45.030247329 +0000 UTC m=+33.469712286" watchObservedRunningTime="2025-09-06 01:18:45.031267007 +0000 UTC m=+33.470731971" Sep 6 01:18:45.051127 kubelet[2130]: I0906 01:18:45.051019 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vl8ng" podStartSLOduration=29.050943809 podStartE2EDuration="29.050943809s" podCreationTimestamp="2025-09-06 01:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:18:45.048178959 +0000 UTC 
m=+33.487643923" watchObservedRunningTime="2025-09-06 01:18:45.050943809 +0000 UTC m=+33.490408767" Sep 6 01:19:23.495513 systemd[1]: Started sshd@7-10.230.51.142:22-139.178.89.65:41590.service. Sep 6 01:19:24.471934 sshd[3471]: Accepted publickey for core from 139.178.89.65 port 41590 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:24.475462 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:24.489237 systemd[1]: Started session-8.scope. Sep 6 01:19:24.489542 systemd-logind[1289]: New session 8 of user core. Sep 6 01:19:25.366318 sshd[3471]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:25.371086 systemd[1]: sshd@7-10.230.51.142:22-139.178.89.65:41590.service: Deactivated successfully. Sep 6 01:19:25.373170 systemd-logind[1289]: Session 8 logged out. Waiting for processes to exit. Sep 6 01:19:25.375591 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 01:19:25.377408 systemd-logind[1289]: Removed session 8. Sep 6 01:19:30.520749 systemd[1]: Started sshd@8-10.230.51.142:22-139.178.89.65:52720.service. Sep 6 01:19:31.473236 sshd[3485]: Accepted publickey for core from 139.178.89.65 port 52720 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:31.476524 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:31.485763 systemd[1]: Started session-9.scope. Sep 6 01:19:31.487204 systemd-logind[1289]: New session 9 of user core. Sep 6 01:19:32.235838 sshd[3485]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:32.240955 systemd[1]: sshd@8-10.230.51.142:22-139.178.89.65:52720.service: Deactivated successfully. Sep 6 01:19:32.242121 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 01:19:32.243289 systemd-logind[1289]: Session 9 logged out. Waiting for processes to exit. Sep 6 01:19:32.244511 systemd-logind[1289]: Removed session 9. Sep 6 01:19:37.383620 systemd[1]: Started sshd@9-10.230.51.142:22-139.178.89.65:52726.service. Sep 6 01:19:38.280094 sshd[3498]: Accepted publickey for core from 139.178.89.65 port 52726 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:38.282835 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:38.289572 systemd-logind[1289]: New session 10 of user core. Sep 6 01:19:38.290312 systemd[1]: Started session-10.scope. Sep 6 01:19:38.994660 sshd[3498]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:38.998494 systemd-logind[1289]: Session 10 logged out. Waiting for processes to exit. Sep 6 01:19:38.999014 systemd[1]: sshd@9-10.230.51.142:22-139.178.89.65:52726.service: Deactivated successfully. Sep 6 01:19:39.000217 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 01:19:39.001377 systemd-logind[1289]: Removed session 10. Sep 6 01:19:44.141243 systemd[1]: Started sshd@10-10.230.51.142:22-139.178.89.65:46502.service. Sep 6 01:19:45.033428 sshd[3512]: Accepted publickey for core from 139.178.89.65 port 46502 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:45.036507 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:45.043751 systemd-logind[1289]: New session 11 of user core. Sep 6 01:19:45.044773 systemd[1]: Started session-11.scope. Sep 6 01:19:45.743750 sshd[3512]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:45.747903 systemd-logind[1289]: Session 11 logged out. Waiting for processes to exit. 
Sep 6 01:19:45.748358 systemd[1]: sshd@10-10.230.51.142:22-139.178.89.65:46502.service: Deactivated successfully. Sep 6 01:19:45.749478 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 01:19:45.750767 systemd-logind[1289]: Removed session 11. Sep 6 01:19:50.893828 systemd[1]: Started sshd@11-10.230.51.142:22-139.178.89.65:48804.service. Sep 6 01:19:51.791170 sshd[3528]: Accepted publickey for core from 139.178.89.65 port 48804 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:51.793070 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:51.800385 systemd-logind[1289]: New session 12 of user core. Sep 6 01:19:51.802062 systemd[1]: Started session-12.scope. Sep 6 01:19:52.526079 sshd[3528]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:52.534695 systemd-logind[1289]: Session 12 logged out. Waiting for processes to exit. Sep 6 01:19:52.536649 systemd[1]: sshd@11-10.230.51.142:22-139.178.89.65:48804.service: Deactivated successfully. Sep 6 01:19:52.537874 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 01:19:52.539374 systemd-logind[1289]: Removed session 12. Sep 6 01:19:52.690963 systemd[1]: Started sshd@12-10.230.51.142:22-139.178.89.65:48808.service. Sep 6 01:19:53.656008 sshd[3541]: Accepted publickey for core from 139.178.89.65 port 48808 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:53.657552 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:53.664910 systemd-logind[1289]: New session 13 of user core. Sep 6 01:19:53.665808 systemd[1]: Started session-13.scope. Sep 6 01:19:54.487412 sshd[3541]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:54.491033 systemd[1]: sshd@12-10.230.51.142:22-139.178.89.65:48808.service: Deactivated successfully. Sep 6 01:19:54.492436 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 01:19:54.492867 systemd-logind[1289]: Session 13 logged out. Waiting for processes to exit. Sep 6 01:19:54.494072 systemd-logind[1289]: Removed session 13. Sep 6 01:19:54.626153 systemd[1]: Started sshd@13-10.230.51.142:22-139.178.89.65:48812.service. Sep 6 01:19:55.521745 sshd[3552]: Accepted publickey for core from 139.178.89.65 port 48812 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:19:55.524354 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:19:55.531975 systemd-logind[1289]: New session 14 of user core. Sep 6 01:19:55.532903 systemd[1]: Started session-14.scope. Sep 6 01:19:56.240470 sshd[3552]: pam_unix(sshd:session): session closed for user core Sep 6 01:19:56.243870 systemd[1]: sshd@13-10.230.51.142:22-139.178.89.65:48812.service: Deactivated successfully. Sep 6 01:19:56.245356 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 01:19:56.246469 systemd-logind[1289]: Session 14 logged out. Waiting for processes to exit. Sep 6 01:19:56.247900 systemd-logind[1289]: Removed session 14. Sep 6 01:20:01.403659 systemd[1]: Started sshd@14-10.230.51.142:22-139.178.89.65:40666.service. Sep 6 01:20:02.320872 sshd[3565]: Accepted publickey for core from 139.178.89.65 port 40666 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:02.321623 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:02.330169 systemd[1]: Started session-15.scope. Sep 6 01:20:02.330440 systemd-logind[1289]: New session 15 of user core. 
Sep 6 01:20:03.042377 sshd[3565]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:03.046124 systemd[1]: sshd@14-10.230.51.142:22-139.178.89.65:40666.service: Deactivated successfully. Sep 6 01:20:03.047933 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 01:20:03.048360 systemd-logind[1289]: Session 15 logged out. Waiting for processes to exit. Sep 6 01:20:03.049823 systemd-logind[1289]: Removed session 15. Sep 6 01:20:08.215069 systemd[1]: Started sshd@15-10.230.51.142:22-139.178.89.65:40668.service. Sep 6 01:20:09.171361 sshd[3579]: Accepted publickey for core from 139.178.89.65 port 40668 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:09.172183 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:09.178957 systemd-logind[1289]: New session 16 of user core. Sep 6 01:20:09.179747 systemd[1]: Started session-16.scope. Sep 6 01:20:09.920887 sshd[3579]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:09.924952 systemd-logind[1289]: Session 16 logged out. Waiting for processes to exit. Sep 6 01:20:09.925659 systemd[1]: sshd@15-10.230.51.142:22-139.178.89.65:40668.service: Deactivated successfully. Sep 6 01:20:09.926715 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 01:20:09.928696 systemd-logind[1289]: Removed session 16. Sep 6 01:20:10.086105 systemd[1]: Started sshd@16-10.230.51.142:22-139.178.89.65:40670.service. Sep 6 01:20:11.099143 sshd[3591]: Accepted publickey for core from 139.178.89.65 port 40670 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:11.101051 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:11.107730 systemd-logind[1289]: New session 17 of user core. Sep 6 01:20:11.108787 systemd[1]: Started session-17.scope. Sep 6 01:20:12.263730 sshd[3591]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:12.272529 systemd[1]: sshd@16-10.230.51.142:22-139.178.89.65:40670.service: Deactivated successfully. Sep 6 01:20:12.274038 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 01:20:12.274502 systemd-logind[1289]: Session 17 logged out. Waiting for processes to exit. Sep 6 01:20:12.275894 systemd-logind[1289]: Removed session 17. Sep 6 01:20:12.398036 systemd[1]: Started sshd@17-10.230.51.142:22-139.178.89.65:52002.service. Sep 6 01:20:13.674273 sshd[3603]: Accepted publickey for core from 139.178.89.65 port 52002 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:13.676940 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:13.684858 systemd[1]: Started session-18.scope. Sep 6 01:20:13.685159 systemd-logind[1289]: New session 18 of user core. Sep 6 01:20:16.100244 sshd[3603]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:16.104493 systemd[1]: sshd@17-10.230.51.142:22-139.178.89.65:52002.service: Deactivated successfully. Sep 6 01:20:16.106070 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 01:20:16.106124 systemd-logind[1289]: Session 18 logged out. Waiting for processes to exit. Sep 6 01:20:16.108064 systemd-logind[1289]: Removed session 18. Sep 6 01:20:16.246618 systemd[1]: Started sshd@18-10.230.51.142:22-139.178.89.65:52004.service. 
Sep 6 01:20:17.146405 sshd[3623]: Accepted publickey for core from 139.178.89.65 port 52004 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:17.149460 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:17.161563 systemd-logind[1289]: New session 19 of user core. Sep 6 01:20:17.162327 systemd[1]: Started session-19.scope. Sep 6 01:20:18.148429 sshd[3623]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:18.152170 systemd[1]: sshd@18-10.230.51.142:22-139.178.89.65:52004.service: Deactivated successfully. Sep 6 01:20:18.153862 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 01:20:18.154316 systemd-logind[1289]: Session 19 logged out. Waiting for processes to exit. Sep 6 01:20:18.155564 systemd-logind[1289]: Removed session 19. Sep 6 01:20:18.294896 systemd[1]: Started sshd@19-10.230.51.142:22-139.178.89.65:52012.service. Sep 6 01:20:19.254252 sshd[3636]: Accepted publickey for core from 139.178.89.65 port 52012 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:19.256242 sshd[3636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:19.263033 systemd-logind[1289]: New session 20 of user core. Sep 6 01:20:19.263812 systemd[1]: Started session-20.scope. Sep 6 01:20:20.049311 sshd[3636]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:20.053056 systemd-logind[1289]: Session 20 logged out. Waiting for processes to exit. Sep 6 01:20:20.053548 systemd[1]: sshd@19-10.230.51.142:22-139.178.89.65:52012.service: Deactivated successfully. Sep 6 01:20:20.054573 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 01:20:20.056342 systemd-logind[1289]: Removed session 20. Sep 6 01:20:25.226002 systemd[1]: Started sshd@20-10.230.51.142:22-139.178.89.65:51314.service. Sep 6 01:20:26.234675 sshd[3649]: Accepted publickey for core from 139.178.89.65 port 51314 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:26.236589 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:26.243592 systemd-logind[1289]: New session 21 of user core. Sep 6 01:20:26.244263 systemd[1]: Started session-21.scope. Sep 6 01:20:27.012322 sshd[3649]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:27.015703 systemd[1]: sshd@20-10.230.51.142:22-139.178.89.65:51314.service: Deactivated successfully. Sep 6 01:20:27.016972 systemd-logind[1289]: Session 21 logged out. Waiting for processes to exit. Sep 6 01:20:27.017098 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 01:20:27.018931 systemd-logind[1289]: Removed session 21. Sep 6 01:20:32.148443 systemd[1]: Started sshd@21-10.230.51.142:22-139.178.89.65:45984.service. Sep 6 01:20:33.040189 sshd[3665]: Accepted publickey for core from 139.178.89.65 port 45984 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:33.042159 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:33.049786 systemd[1]: Started session-22.scope. Sep 6 01:20:33.050246 systemd-logind[1289]: New session 22 of user core. Sep 6 01:20:33.738949 sshd[3665]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:33.742828 systemd[1]: sshd@21-10.230.51.142:22-139.178.89.65:45984.service: Deactivated successfully. Sep 6 01:20:33.744257 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 01:20:33.744723 systemd-logind[1289]: Session 22 logged out. 
Waiting for processes to exit. Sep 6 01:20:33.746430 systemd-logind[1289]: Removed session 22. Sep 6 01:20:38.887166 systemd[1]: Started sshd@22-10.230.51.142:22-139.178.89.65:45990.service. Sep 6 01:20:39.781749 sshd[3678]: Accepted publickey for core from 139.178.89.65 port 45990 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:39.784124 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:39.791836 systemd[1]: Started session-23.scope. Sep 6 01:20:39.792445 systemd-logind[1289]: New session 23 of user core. Sep 6 01:20:40.495282 sshd[3678]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:40.499055 systemd[1]: sshd@22-10.230.51.142:22-139.178.89.65:45990.service: Deactivated successfully. Sep 6 01:20:40.500498 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 01:20:40.500936 systemd-logind[1289]: Session 23 logged out. Waiting for processes to exit. Sep 6 01:20:40.502061 systemd-logind[1289]: Removed session 23. Sep 6 01:20:40.641203 systemd[1]: Started sshd@23-10.230.51.142:22-139.178.89.65:44196.service. Sep 6 01:20:41.532485 sshd[3691]: Accepted publickey for core from 139.178.89.65 port 44196 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:41.535290 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:41.544151 systemd[1]: Started session-24.scope. Sep 6 01:20:41.544459 systemd-logind[1289]: New session 24 of user core. Sep 6 01:20:44.141414 env[1301]: time="2025-09-06T01:20:44.141339281Z" level=info msg="StopContainer for \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\" with timeout 30 (s)" Sep 6 01:20:44.142694 env[1301]: time="2025-09-06T01:20:44.142413784Z" level=info msg="Stop container \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\" with signal terminated" Sep 6 01:20:44.173280 systemd[1]: run-containerd-runc-k8s.io-30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351-runc.Ct9dtz.mount: Deactivated successfully. Sep 6 01:20:44.218615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf-rootfs.mount: Deactivated successfully. 
Sep 6 01:20:44.221926 env[1301]: time="2025-09-06T01:20:44.221853889Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:20:44.228879 env[1301]: time="2025-09-06T01:20:44.228845351Z" level=info msg="StopContainer for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" with timeout 2 (s)" Sep 6 01:20:44.229528 env[1301]: time="2025-09-06T01:20:44.229488551Z" level=info msg="Stop container \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" with signal terminated" Sep 6 01:20:44.230602 env[1301]: time="2025-09-06T01:20:44.229968239Z" level=info msg="shim disconnected" id=575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf Sep 6 01:20:44.230602 env[1301]: time="2025-09-06T01:20:44.230005825Z" level=warning msg="cleaning up after shim disconnected" id=575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf namespace=k8s.io Sep 6 01:20:44.230602 env[1301]: time="2025-09-06T01:20:44.230050149Z" level=info msg="cleaning up dead shim" Sep 6 01:20:44.248544 systemd-networkd[1084]: lxc_health: Link DOWN Sep 6 01:20:44.248557 systemd-networkd[1084]: lxc_health: Lost carrier Sep 6 01:20:44.287430 env[1301]: time="2025-09-06T01:20:44.283285741Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3740 runtime=io.containerd.runc.v2\n" Sep 6 01:20:44.288522 env[1301]: time="2025-09-06T01:20:44.288466797Z" level=info msg="StopContainer for \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\" returns successfully" Sep 6 01:20:44.289688 env[1301]: time="2025-09-06T01:20:44.289653157Z" level=info msg="StopPodSandbox for \"6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3\"" Sep 6 01:20:44.290062 env[1301]: time="2025-09-06T01:20:44.290005244Z" level=info msg="Container to stop \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:44.293249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3-shm.mount: Deactivated successfully. 
Sep 6 01:20:44.345794 env[1301]: time="2025-09-06T01:20:44.345708425Z" level=info msg="shim disconnected" id=30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351 Sep 6 01:20:44.346225 env[1301]: time="2025-09-06T01:20:44.346183761Z" level=warning msg="cleaning up after shim disconnected" id=30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351 namespace=k8s.io Sep 6 01:20:44.346410 env[1301]: time="2025-09-06T01:20:44.346380645Z" level=info msg="cleaning up dead shim" Sep 6 01:20:44.348767 env[1301]: time="2025-09-06T01:20:44.348721292Z" level=info msg="shim disconnected" id=6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3 Sep 6 01:20:44.348857 env[1301]: time="2025-09-06T01:20:44.348767704Z" level=warning msg="cleaning up after shim disconnected" id=6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3 namespace=k8s.io Sep 6 01:20:44.348857 env[1301]: time="2025-09-06T01:20:44.348783188Z" level=info msg="cleaning up dead shim" Sep 6 01:20:44.364090 env[1301]: time="2025-09-06T01:20:44.364005818Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3794 runtime=io.containerd.runc.v2\n" Sep 6 01:20:44.366512 env[1301]: time="2025-09-06T01:20:44.365812506Z" level=info msg="StopContainer for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" returns successfully" Sep 6 01:20:44.367406 env[1301]: time="2025-09-06T01:20:44.367372385Z" level=info msg="StopPodSandbox for \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\"" Sep 6 01:20:44.367771 env[1301]: time="2025-09-06T01:20:44.367730012Z" level=info msg="Container to stop \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:44.367936 env[1301]: time="2025-09-06T01:20:44.367896827Z" level=info msg="Container to stop \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:44.368084 env[1301]: time="2025-09-06T01:20:44.368051193Z" level=info msg="Container to stop \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:44.368227 env[1301]: time="2025-09-06T01:20:44.368195636Z" level=info msg="Container to stop \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:44.368375 env[1301]: time="2025-09-06T01:20:44.368342917Z" level=info msg="Container to stop \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:44.371933 env[1301]: time="2025-09-06T01:20:44.371901593Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" Sep 6 01:20:44.373200 env[1301]: time="2025-09-06T01:20:44.373158210Z" level=info msg="TearDown network for sandbox \"6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3\" successfully" Sep 6 01:20:44.373367 env[1301]: time="2025-09-06T01:20:44.373333592Z" level=info msg="StopPodSandbox for \"6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3\" returns successfully" Sep 6 01:20:44.427191 env[1301]: time="2025-09-06T01:20:44.425461885Z" level=info 
msg="shim disconnected" id=baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023 Sep 6 01:20:44.427191 env[1301]: time="2025-09-06T01:20:44.425525537Z" level=warning msg="cleaning up after shim disconnected" id=baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023 namespace=k8s.io Sep 6 01:20:44.427191 env[1301]: time="2025-09-06T01:20:44.425550786Z" level=info msg="cleaning up dead shim" Sep 6 01:20:44.441029 env[1301]: time="2025-09-06T01:20:44.440947488Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3840 runtime=io.containerd.runc.v2\n" Sep 6 01:20:44.441490 env[1301]: time="2025-09-06T01:20:44.441445120Z" level=info msg="TearDown network for sandbox \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" successfully" Sep 6 01:20:44.441579 env[1301]: time="2025-09-06T01:20:44.441486586Z" level=info msg="StopPodSandbox for \"baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023\" returns successfully" Sep 6 01:20:44.537040 kubelet[2130]: I0906 01:20:44.536927 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d7ffeff-9c49-4c54-8066-0577dce67b70-cilium-config-path\") pod \"6d7ffeff-9c49-4c54-8066-0577dce67b70\" (UID: \"6d7ffeff-9c49-4c54-8066-0577dce67b70\") " Sep 6 01:20:44.537774 kubelet[2130]: I0906 01:20:44.537064 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-cgroup\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.537774 kubelet[2130]: I0906 01:20:44.537094 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-hostproc\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.537774 kubelet[2130]: I0906 01:20:44.537201 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-lib-modules\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.537774 kubelet[2130]: I0906 01:20:44.537227 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-kernel\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.537774 kubelet[2130]: I0906 01:20:44.537335 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q59kk\" (UniqueName: \"kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-kube-api-access-q59kk\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.537774 kubelet[2130]: I0906 01:20:44.537380 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cni-path\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538166 kubelet[2130]: I0906 
01:20:44.537409 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-config-path\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538166 kubelet[2130]: I0906 01:20:44.537492 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-bpf-maps\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538166 kubelet[2130]: I0906 01:20:44.537519 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-run\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538166 kubelet[2130]: I0906 01:20:44.537783 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdknj\" (UniqueName: \"kubernetes.io/projected/6d7ffeff-9c49-4c54-8066-0577dce67b70-kube-api-access-xdknj\") pod \"6d7ffeff-9c49-4c54-8066-0577dce67b70\" (UID: \"6d7ffeff-9c49-4c54-8066-0577dce67b70\") " Sep 6 01:20:44.538166 kubelet[2130]: I0906 01:20:44.537850 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-hubble-tls\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538166 kubelet[2130]: I0906 01:20:44.537876 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-etc-cni-netd\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538512 kubelet[2130]: I0906 01:20:44.537918 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-xtables-lock\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.538512 kubelet[2130]: I0906 01:20:44.537950 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d32a918-a107-48b7-9fdd-3249005ff46c-clustermesh-secrets\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.543389 kubelet[2130]: I0906 01:20:44.540997 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.543547 kubelet[2130]: I0906 01:20:44.541063 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.543720 kubelet[2130]: I0906 01:20:44.543688 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-hostproc" (OuterVolumeSpecName: "hostproc") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.543864 kubelet[2130]: I0906 01:20:44.543691 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cni-path" (OuterVolumeSpecName: "cni-path") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.544001 kubelet[2130]: I0906 01:20:44.543974 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.555165 kubelet[2130]: I0906 01:20:44.555126 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.555357 kubelet[2130]: I0906 01:20:44.555323 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.555856 kubelet[2130]: I0906 01:20:44.555829 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.556026 kubelet[2130]: I0906 01:20:44.555981 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.560996 kubelet[2130]: I0906 01:20:44.560957 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d32a918-a107-48b7-9fdd-3249005ff46c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:20:44.562461 kubelet[2130]: I0906 01:20:44.562424 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d7ffeff-9c49-4c54-8066-0577dce67b70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d7ffeff-9c49-4c54-8066-0577dce67b70" (UID: "6d7ffeff-9c49-4c54-8066-0577dce67b70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:20:44.562699 kubelet[2130]: I0906 01:20:44.562667 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:20:44.564496 kubelet[2130]: I0906 01:20:44.564460 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:20:44.566952 kubelet[2130]: I0906 01:20:44.566906 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-kube-api-access-q59kk" (OuterVolumeSpecName: "kube-api-access-q59kk") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "kube-api-access-q59kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:20:44.567615 kubelet[2130]: I0906 01:20:44.567584 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d7ffeff-9c49-4c54-8066-0577dce67b70-kube-api-access-xdknj" (OuterVolumeSpecName: "kube-api-access-xdknj") pod "6d7ffeff-9c49-4c54-8066-0577dce67b70" (UID: "6d7ffeff-9c49-4c54-8066-0577dce67b70"). InnerVolumeSpecName "kube-api-access-xdknj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:20:44.638814 kubelet[2130]: I0906 01:20:44.638743 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-net\") pod \"8d32a918-a107-48b7-9fdd-3249005ff46c\" (UID: \"8d32a918-a107-48b7-9fdd-3249005ff46c\") " Sep 6 01:20:44.639372 kubelet[2130]: I0906 01:20:44.639342 2130 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-bpf-maps\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.639622 kubelet[2130]: I0906 01:20:44.639595 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-config-path\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.640099 kubelet[2130]: I0906 01:20:44.640073 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-run\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.640240 kubelet[2130]: I0906 01:20:44.640217 2130 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdknj\" (UniqueName: \"kubernetes.io/projected/6d7ffeff-9c49-4c54-8066-0577dce67b70-kube-api-access-xdknj\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.640398 kubelet[2130]: I0906 01:20:44.640373 2130 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-hubble-tls\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.640571 kubelet[2130]: I0906 01:20:44.640547 2130 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-etc-cni-netd\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.640736 kubelet[2130]: I0906 01:20:44.640712 2130 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-xtables-lock\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.641038 kubelet[2130]: I0906 01:20:44.640982 2130 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d32a918-a107-48b7-9fdd-3249005ff46c-clustermesh-secrets\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.641178 kubelet[2130]: I0906 01:20:44.641154 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d7ffeff-9c49-4c54-8066-0577dce67b70-cilium-config-path\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.641320 kubelet[2130]: I0906 01:20:44.641286 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cilium-cgroup\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.641452 kubelet[2130]: I0906 01:20:44.641429 2130 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-hostproc\") on node \"srv-rd74e.gb1.brightbox.com\" 
DevicePath \"\"" Sep 6 01:20:44.641643 kubelet[2130]: I0906 01:20:44.641619 2130 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-lib-modules\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.641778 kubelet[2130]: I0906 01:20:44.641742 2130 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-kernel\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.641930 kubelet[2130]: I0906 01:20:44.641905 2130 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q59kk\" (UniqueName: \"kubernetes.io/projected/8d32a918-a107-48b7-9fdd-3249005ff46c-kube-api-access-q59kk\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.642079 kubelet[2130]: I0906 01:20:44.642052 2130 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-cni-path\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:44.642199 kubelet[2130]: I0906 01:20:44.639113 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8d32a918-a107-48b7-9fdd-3249005ff46c" (UID: "8d32a918-a107-48b7-9fdd-3249005ff46c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:44.744055 kubelet[2130]: I0906 01:20:44.743075 2130 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d32a918-a107-48b7-9fdd-3249005ff46c-host-proc-sys-net\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:45.158239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351-rootfs.mount: Deactivated successfully. Sep 6 01:20:45.158889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6453f142c7949154a00bd6b2ec6d4dfcaaf19c5b17b72e8ab03582e194d2bff3-rootfs.mount: Deactivated successfully. Sep 6 01:20:45.159218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023-rootfs.mount: Deactivated successfully. Sep 6 01:20:45.159532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-baa0f1d978d7ad061c9c36465429802d047e2a75c8223042a131af3d03ef9023-shm.mount: Deactivated successfully. Sep 6 01:20:45.159816 systemd[1]: var-lib-kubelet-pods-8d32a918\x2da107\x2d48b7\x2d9fdd\x2d3249005ff46c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq59kk.mount: Deactivated successfully. Sep 6 01:20:45.160127 systemd[1]: var-lib-kubelet-pods-6d7ffeff\x2d9c49\x2d4c54\x2d8066\x2d0577dce67b70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdknj.mount: Deactivated successfully. Sep 6 01:20:45.160453 systemd[1]: var-lib-kubelet-pods-8d32a918\x2da107\x2d48b7\x2d9fdd\x2d3249005ff46c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:20:45.160846 systemd[1]: var-lib-kubelet-pods-8d32a918\x2da107\x2d48b7\x2d9fdd\x2d3249005ff46c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
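The systemd units deactivated in the records above (var-lib-kubelet-pods-…-volumes-….mount) are the tmpfs mounts backing the pods' projected and secret volumes; their names are the mount paths run through systemd's unit-name escaping, where '/' becomes '-', '-' becomes \x2d and '~' becomes \x7e. A small sketch reproducing that mapping (a simplified re-implementation for illustration, not systemd's own code):

```python
#!/usr/bin/env python3
# Sketch of systemd path escaping (simplified re-implementation, assuming the
# documented rules of systemd-escape --path): '/' -> '-', a leading '.' and any
# byte outside [A-Za-z0-9:_.] -> \xNN. Reproduces the .mount unit names above.
def systemd_escape_path(path: str) -> str:
    path = path.strip("/")
    out = []
    for i, ch in enumerate(path):
        if ch == "/":
            out.append("-")
        elif (ch.isalnum() or ch in ":_.") and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.extend("\\x%02x" % b for b in ch.encode())
    return "".join(out)

volume_path = ("/var/lib/kubelet/pods/8d32a918-a107-48b7-9fdd-3249005ff46c"
               "/volumes/kubernetes.io~projected/kube-api-access-q59kk")
print(systemd_escape_path(volume_path) + ".mount")
# Expected: var-lib-kubelet-pods-8d32a918\x2da107\x2d...\x2daccess\x2dq59kk.mount
```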
Sep 6 01:20:45.346900 kubelet[2130]: I0906 01:20:45.346853 2130 scope.go:117] "RemoveContainer" containerID="30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351" Sep 6 01:20:45.354374 env[1301]: time="2025-09-06T01:20:45.354129241Z" level=info msg="RemoveContainer for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\"" Sep 6 01:20:45.364025 env[1301]: time="2025-09-06T01:20:45.363964397Z" level=info msg="RemoveContainer for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" returns successfully" Sep 6 01:20:45.367001 kubelet[2130]: I0906 01:20:45.366970 2130 scope.go:117] "RemoveContainer" containerID="19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12" Sep 6 01:20:45.386325 env[1301]: time="2025-09-06T01:20:45.385796656Z" level=info msg="RemoveContainer for \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\"" Sep 6 01:20:45.393903 env[1301]: time="2025-09-06T01:20:45.393845914Z" level=info msg="RemoveContainer for \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\" returns successfully" Sep 6 01:20:45.394386 kubelet[2130]: I0906 01:20:45.394356 2130 scope.go:117] "RemoveContainer" containerID="5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8" Sep 6 01:20:45.396646 env[1301]: time="2025-09-06T01:20:45.396592836Z" level=info msg="RemoveContainer for \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\"" Sep 6 01:20:45.399545 env[1301]: time="2025-09-06T01:20:45.399511408Z" level=info msg="RemoveContainer for \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\" returns successfully" Sep 6 01:20:45.399856 kubelet[2130]: I0906 01:20:45.399812 2130 scope.go:117] "RemoveContainer" containerID="2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9" Sep 6 01:20:45.402453 env[1301]: time="2025-09-06T01:20:45.401936615Z" level=info msg="RemoveContainer for \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\"" Sep 6 01:20:45.405245 env[1301]: time="2025-09-06T01:20:45.405213951Z" level=info msg="RemoveContainer for \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\" returns successfully" Sep 6 01:20:45.405691 kubelet[2130]: I0906 01:20:45.405647 2130 scope.go:117] "RemoveContainer" containerID="e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da" Sep 6 01:20:45.407381 env[1301]: time="2025-09-06T01:20:45.407329975Z" level=info msg="RemoveContainer for \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\"" Sep 6 01:20:45.410958 env[1301]: time="2025-09-06T01:20:45.410844185Z" level=info msg="RemoveContainer for \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\" returns successfully" Sep 6 01:20:45.411373 kubelet[2130]: I0906 01:20:45.411348 2130 scope.go:117] "RemoveContainer" containerID="30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351" Sep 6 01:20:45.411958 env[1301]: time="2025-09-06T01:20:45.411767960Z" level=error msg="ContainerStatus for \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\": not found" Sep 6 01:20:45.417351 kubelet[2130]: E0906 01:20:45.417278 2130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\": 
not found" containerID="30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351" Sep 6 01:20:45.418346 kubelet[2130]: I0906 01:20:45.418172 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351"} err="failed to get container status \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\": rpc error: code = NotFound desc = an error occurred when try to find container \"30fd438904eda1e3125b0302a8cb0f1f19741399f16e013608cd9386b94b8351\": not found" Sep 6 01:20:45.418500 kubelet[2130]: I0906 01:20:45.418474 2130 scope.go:117] "RemoveContainer" containerID="19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12" Sep 6 01:20:45.419058 env[1301]: time="2025-09-06T01:20:45.418931509Z" level=error msg="ContainerStatus for \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\": not found" Sep 6 01:20:45.421066 kubelet[2130]: E0906 01:20:45.421034 2130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\": not found" containerID="19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12" Sep 6 01:20:45.421208 kubelet[2130]: I0906 01:20:45.421072 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12"} err="failed to get container status \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\": rpc error: code = NotFound desc = an error occurred when try to find container \"19c3d5058a7c5694251b01c3583bcb8b9824d74a920a182c7cc0e54a7ee2bb12\": not found" Sep 6 01:20:45.421208 kubelet[2130]: I0906 01:20:45.421095 2130 scope.go:117] "RemoveContainer" containerID="5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8" Sep 6 01:20:45.422436 env[1301]: time="2025-09-06T01:20:45.422230585Z" level=error msg="ContainerStatus for \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\": not found" Sep 6 01:20:45.422887 kubelet[2130]: E0906 01:20:45.422857 2130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\": not found" containerID="5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8" Sep 6 01:20:45.422992 kubelet[2130]: I0906 01:20:45.422930 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8"} err="failed to get container status \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"5146f4032d30ad3ec9618d692220414a8ac454a90f15962c7f6e506d814ef4c8\": not found" Sep 6 01:20:45.422992 kubelet[2130]: I0906 01:20:45.422967 2130 scope.go:117] "RemoveContainer" containerID="2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9" Sep 6 
01:20:45.423542 env[1301]: time="2025-09-06T01:20:45.423387217Z" level=error msg="ContainerStatus for \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\": not found" Sep 6 01:20:45.423972 kubelet[2130]: E0906 01:20:45.423943 2130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\": not found" containerID="2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9" Sep 6 01:20:45.423972 kubelet[2130]: I0906 01:20:45.423974 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9"} err="failed to get container status \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2aa93fdad8de6fc9bc2293d1458b70d1d7f81c87002ee0ca1591fb5e072697a9\": not found" Sep 6 01:20:45.424144 kubelet[2130]: I0906 01:20:45.423994 2130 scope.go:117] "RemoveContainer" containerID="e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da" Sep 6 01:20:45.424507 env[1301]: time="2025-09-06T01:20:45.424418184Z" level=error msg="ContainerStatus for \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\": not found" Sep 6 01:20:45.424801 kubelet[2130]: E0906 01:20:45.424770 2130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\": not found" containerID="e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da" Sep 6 01:20:45.424899 kubelet[2130]: I0906 01:20:45.424805 2130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da"} err="failed to get container status \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8f59f2b33e9d9fe19bb5487fec5ea3012adae19f6a10a24bec73f13a036e1da\": not found" Sep 6 01:20:45.424899 kubelet[2130]: I0906 01:20:45.424827 2130 scope.go:117] "RemoveContainer" containerID="575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf" Sep 6 01:20:45.426376 env[1301]: time="2025-09-06T01:20:45.426326157Z" level=info msg="RemoveContainer for \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\"" Sep 6 01:20:45.437364 env[1301]: time="2025-09-06T01:20:45.437319814Z" level=info msg="RemoveContainer for \"575d7b5761af1b0997a8b3233a8940a2aebbe1cda735ce598426a76823c2d7cf\" returns successfully" Sep 6 01:20:45.808919 kubelet[2130]: I0906 01:20:45.808804 2130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d7ffeff-9c49-4c54-8066-0577dce67b70" path="/var/lib/kubelet/pods/6d7ffeff-9c49-4c54-8066-0577dce67b70/volumes" Sep 6 01:20:45.810043 kubelet[2130]: I0906 01:20:45.809984 2130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" path="/var/lib/kubelet/pods/8d32a918-a107-48b7-9fdd-3249005ff46c/volumes" Sep 6 01:20:46.186167 sshd[3691]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:46.190786 systemd[1]: sshd@23-10.230.51.142:22-139.178.89.65:44196.service: Deactivated successfully. Sep 6 01:20:46.192469 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 01:20:46.192489 systemd-logind[1289]: Session 24 logged out. Waiting for processes to exit. Sep 6 01:20:46.195317 systemd-logind[1289]: Removed session 24. Sep 6 01:20:46.333842 systemd[1]: Started sshd@24-10.230.51.142:22-139.178.89.65:44200.service. Sep 6 01:20:46.968771 kubelet[2130]: E0906 01:20:46.968682 2130 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:20:47.246704 sshd[3859]: Accepted publickey for core from 139.178.89.65 port 44200 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:47.248888 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:47.257076 systemd[1]: Started session-25.scope. Sep 6 01:20:47.257775 systemd-logind[1289]: New session 25 of user core. Sep 6 01:20:48.566826 kubelet[2130]: E0906 01:20:48.566784 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" containerName="apply-sysctl-overwrites" Sep 6 01:20:48.567618 kubelet[2130]: E0906 01:20:48.567593 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" containerName="clean-cilium-state" Sep 6 01:20:48.567909 kubelet[2130]: E0906 01:20:48.567743 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d7ffeff-9c49-4c54-8066-0577dce67b70" containerName="cilium-operator" Sep 6 01:20:48.568072 kubelet[2130]: E0906 01:20:48.568048 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" containerName="mount-cgroup" Sep 6 01:20:48.568240 kubelet[2130]: E0906 01:20:48.568202 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" containerName="mount-bpf-fs" Sep 6 01:20:48.568378 kubelet[2130]: E0906 01:20:48.568355 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" containerName="cilium-agent" Sep 6 01:20:48.568682 kubelet[2130]: I0906 01:20:48.568655 2130 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d7ffeff-9c49-4c54-8066-0577dce67b70" containerName="cilium-operator" Sep 6 01:20:48.568820 kubelet[2130]: I0906 01:20:48.568798 2130 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d32a918-a107-48b7-9fdd-3249005ff46c" containerName="cilium-agent" Sep 6 01:20:48.675119 kubelet[2130]: I0906 01:20:48.675050 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-config-path\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.675425 kubelet[2130]: I0906 01:20:48.675387 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-run\") pod 
\"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.675632 kubelet[2130]: I0906 01:20:48.675598 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-ipsec-secrets\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.675825 kubelet[2130]: I0906 01:20:48.675790 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wts8r\" (UniqueName: \"kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-kube-api-access-wts8r\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.676029 kubelet[2130]: I0906 01:20:48.675969 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-lib-modules\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.676213 kubelet[2130]: I0906 01:20:48.676179 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-xtables-lock\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.676400 kubelet[2130]: I0906 01:20:48.676366 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-kernel\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.676564 kubelet[2130]: I0906 01:20:48.676531 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-cgroup\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.676746 kubelet[2130]: I0906 01:20:48.676712 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cni-path\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.676929 kubelet[2130]: I0906 01:20:48.676883 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-bpf-maps\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.677120 kubelet[2130]: I0906 01:20:48.677085 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-clustermesh-secrets\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.677306 kubelet[2130]: I0906 01:20:48.677273 2130 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-etc-cni-netd\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.677896 kubelet[2130]: I0906 01:20:48.677479 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-net\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.677896 kubelet[2130]: I0906 01:20:48.677529 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-hubble-tls\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.677896 kubelet[2130]: I0906 01:20:48.677563 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-hostproc\") pod \"cilium-4cxx5\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " pod="kube-system/cilium-4cxx5" Sep 6 01:20:48.686069 sshd[3859]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:48.689357 systemd[1]: sshd@24-10.230.51.142:22-139.178.89.65:44200.service: Deactivated successfully. Sep 6 01:20:48.690681 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 01:20:48.690697 systemd-logind[1289]: Session 25 logged out. Waiting for processes to exit. Sep 6 01:20:48.692032 systemd-logind[1289]: Removed session 25. Sep 6 01:20:48.831419 systemd[1]: Started sshd@25-10.230.51.142:22-139.178.89.65:44202.service. Sep 6 01:20:48.883459 env[1301]: time="2025-09-06T01:20:48.883400355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4cxx5,Uid:981648f8-7757-4d8c-bf94-edad33e0ba73,Namespace:kube-system,Attempt:0,}" Sep 6 01:20:48.900349 env[1301]: time="2025-09-06T01:20:48.899776221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:20:48.900349 env[1301]: time="2025-09-06T01:20:48.899860171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:20:48.900349 env[1301]: time="2025-09-06T01:20:48.899918591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:20:48.900597 env[1301]: time="2025-09-06T01:20:48.900369437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2 pid=3886 runtime=io.containerd.runc.v2 Sep 6 01:20:48.958241 env[1301]: time="2025-09-06T01:20:48.957762395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4cxx5,Uid:981648f8-7757-4d8c-bf94-edad33e0ba73,Namespace:kube-system,Attempt:0,} returns sandbox id \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\"" Sep 6 01:20:48.963495 env[1301]: time="2025-09-06T01:20:48.963432794Z" level=info msg="CreateContainer within sandbox \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:20:48.978523 env[1301]: time="2025-09-06T01:20:48.978453797Z" level=info msg="CreateContainer within sandbox \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c\"" Sep 6 01:20:48.981187 env[1301]: time="2025-09-06T01:20:48.981149538Z" level=info msg="StartContainer for \"0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c\"" Sep 6 01:20:49.056632 env[1301]: time="2025-09-06T01:20:49.056481901Z" level=info msg="StartContainer for \"0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c\" returns successfully" Sep 6 01:20:49.109612 env[1301]: time="2025-09-06T01:20:49.109000869Z" level=info msg="shim disconnected" id=0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c Sep 6 01:20:49.109612 env[1301]: time="2025-09-06T01:20:49.109117595Z" level=warning msg="cleaning up after shim disconnected" id=0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c namespace=k8s.io Sep 6 01:20:49.109612 env[1301]: time="2025-09-06T01:20:49.109132685Z" level=info msg="cleaning up dead shim" Sep 6 01:20:49.120605 env[1301]: time="2025-09-06T01:20:49.120543648Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" Sep 6 01:20:49.371454 env[1301]: time="2025-09-06T01:20:49.371324050Z" level=info msg="CreateContainer within sandbox \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:20:49.384551 env[1301]: time="2025-09-06T01:20:49.384461649Z" level=info msg="CreateContainer within sandbox \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc\"" Sep 6 01:20:49.388248 env[1301]: time="2025-09-06T01:20:49.388186876Z" level=info msg="StartContainer for \"c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc\"" Sep 6 01:20:49.460072 env[1301]: time="2025-09-06T01:20:49.459417694Z" level=info msg="StartContainer for \"c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc\" returns successfully" Sep 6 01:20:49.492861 env[1301]: time="2025-09-06T01:20:49.492776476Z" level=info msg="shim disconnected" id=c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc Sep 6 01:20:49.493156 env[1301]: 
time="2025-09-06T01:20:49.492866345Z" level=warning msg="cleaning up after shim disconnected" id=c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc namespace=k8s.io Sep 6 01:20:49.493156 env[1301]: time="2025-09-06T01:20:49.492885567Z" level=info msg="cleaning up dead shim" Sep 6 01:20:49.503857 env[1301]: time="2025-09-06T01:20:49.503796249Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4034 runtime=io.containerd.runc.v2\n" Sep 6 01:20:49.726464 sshd[3877]: Accepted publickey for core from 139.178.89.65 port 44202 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:49.728761 sshd[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:49.736098 systemd-logind[1289]: New session 26 of user core. Sep 6 01:20:49.736785 systemd[1]: Started session-26.scope. Sep 6 01:20:50.384434 env[1301]: time="2025-09-06T01:20:50.376040773Z" level=info msg="StopPodSandbox for \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\"" Sep 6 01:20:50.384434 env[1301]: time="2025-09-06T01:20:50.376132623Z" level=info msg="Container to stop \"c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:50.384434 env[1301]: time="2025-09-06T01:20:50.376159558Z" level=info msg="Container to stop \"0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:20:50.379495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2-shm.mount: Deactivated successfully. Sep 6 01:20:50.420329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2-rootfs.mount: Deactivated successfully. Sep 6 01:20:50.426850 env[1301]: time="2025-09-06T01:20:50.426791573Z" level=info msg="shim disconnected" id=a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2 Sep 6 01:20:50.427076 env[1301]: time="2025-09-06T01:20:50.426854577Z" level=warning msg="cleaning up after shim disconnected" id=a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2 namespace=k8s.io Sep 6 01:20:50.427076 env[1301]: time="2025-09-06T01:20:50.426871249Z" level=info msg="cleaning up dead shim" Sep 6 01:20:50.440809 env[1301]: time="2025-09-06T01:20:50.440756371Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n" Sep 6 01:20:50.441487 env[1301]: time="2025-09-06T01:20:50.441445378Z" level=info msg="TearDown network for sandbox \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\" successfully" Sep 6 01:20:50.441623 env[1301]: time="2025-09-06T01:20:50.441590899Z" level=info msg="StopPodSandbox for \"a343d71bdd12591319089f10fd5127bbcdb1916c0981c67c0414398ab70719d2\" returns successfully" Sep 6 01:20:50.487184 sshd[3877]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:50.491710 systemd[1]: sshd@25-10.230.51.142:22-139.178.89.65:44202.service: Deactivated successfully. 
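The "starting signal loop" record above shows the runc v2 shim for the new cilium-4cxx5 sandbox running with its bundle under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>, and the rootfs.mount and shm.mount units cleaned up during teardown belong to those bundles. A sketch, assuming that default layout, to list the live task bundles in containerd's k8s.io namespace:

```python
#!/usr/bin/env python3
# Sketch, assuming containerd's default runc shim v2 state directory as seen in
# the "starting signal loop" path= field above; bundle contents vary by
# containerd version, so only the OCI spec and rootfs entries are checked.
import pathlib

TASK_ROOT = pathlib.Path("/run/containerd/io.containerd.runtime.v2.task/k8s.io")

if not TASK_ROOT.exists():
    print(f"{TASK_ROOT}: no live shim tasks")
else:
    for bundle in sorted(TASK_ROOT.iterdir()):
        has_spec = (bundle / "config.json").exists()
        has_rootfs = (bundle / "rootfs").exists()
        print(f"{bundle.name}  oci-spec={has_spec}  rootfs={has_rootfs}")
```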
Sep 6 01:20:50.492178 kubelet[2130]: I0906 01:20:50.492144 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-cgroup\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.492952 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 01:20:50.493120 kubelet[2130]: I0906 01:20:50.492924 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-xtables-lock\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.494256 kubelet[2130]: I0906 01:20:50.493987 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-run\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.494447 kubelet[2130]: I0906 01:20:50.494411 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-config-path\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.495098 systemd-logind[1289]: Session 26 logged out. Waiting for processes to exit. Sep 6 01:20:50.496550 systemd-logind[1289]: Removed session 26. Sep 6 01:20:50.498115 kubelet[2130]: I0906 01:20:50.498090 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-net\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.498338 kubelet[2130]: I0906 01:20:50.493885 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.498475 kubelet[2130]: I0906 01:20:50.493918 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.498871 kubelet[2130]: I0906 01:20:50.494191 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.498871 kubelet[2130]: I0906 01:20:50.498273 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.498871 kubelet[2130]: I0906 01:20:50.498745 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-kernel\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.498871 kubelet[2130]: I0906 01:20:50.498778 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-etc-cni-netd\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.499221 kubelet[2130]: I0906 01:20:50.499173 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.499440 kubelet[2130]: I0906 01:20:50.499414 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.502066 kubelet[2130]: I0906 01:20:50.502036 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-lib-modules\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502173 kubelet[2130]: I0906 01:20:50.502093 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-clustermesh-secrets\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502173 kubelet[2130]: I0906 01:20:50.502124 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-bpf-maps\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502173 kubelet[2130]: I0906 01:20:50.502155 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-ipsec-secrets\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502342 kubelet[2130]: I0906 01:20:50.502180 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cni-path\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502342 kubelet[2130]: I0906 01:20:50.502234 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-hostproc\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502342 kubelet[2130]: I0906 01:20:50.502264 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wts8r\" (UniqueName: \"kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-kube-api-access-wts8r\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502342 kubelet[2130]: I0906 01:20:50.502291 2130 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-hubble-tls\") pod \"981648f8-7757-4d8c-bf94-edad33e0ba73\" (UID: \"981648f8-7757-4d8c-bf94-edad33e0ba73\") " Sep 6 01:20:50.502579 kubelet[2130]: I0906 01:20:50.502343 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-cgroup\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.502579 kubelet[2130]: I0906 01:20:50.502363 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-run\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.502579 kubelet[2130]: I0906 01:20:50.502377 2130 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-xtables-lock\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.502579 kubelet[2130]: I0906 01:20:50.502392 2130 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-net\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.502579 kubelet[2130]: I0906 01:20:50.502410 2130 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-host-proc-sys-kernel\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.502579 kubelet[2130]: I0906 01:20:50.502426 2130 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-etc-cni-netd\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.503192 kubelet[2130]: I0906 01:20:50.503163 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:20:50.503732 kubelet[2130]: I0906 01:20:50.503690 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cni-path" (OuterVolumeSpecName: "cni-path") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.507876 systemd[1]: var-lib-kubelet-pods-981648f8\x2d7757\x2d4d8c\x2dbf94\x2dedad33e0ba73-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:20:50.509778 kubelet[2130]: I0906 01:20:50.503872 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-hostproc" (OuterVolumeSpecName: "hostproc") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.509924 kubelet[2130]: I0906 01:20:50.504536 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.510442 kubelet[2130]: I0906 01:20:50.510408 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:20:50.510538 kubelet[2130]: I0906 01:20:50.510463 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:20:50.514234 systemd[1]: var-lib-kubelet-pods-981648f8\x2d7757\x2d4d8c\x2dbf94\x2dedad33e0ba73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwts8r.mount: Deactivated successfully. Sep 6 01:20:50.516304 kubelet[2130]: I0906 01:20:50.516271 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-kube-api-access-wts8r" (OuterVolumeSpecName: "kube-api-access-wts8r") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "kube-api-access-wts8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:20:50.517756 kubelet[2130]: I0906 01:20:50.517721 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:20:50.519437 kubelet[2130]: I0906 01:20:50.519408 2130 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "981648f8-7757-4d8c-bf94-edad33e0ba73" (UID: "981648f8-7757-4d8c-bf94-edad33e0ba73"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:20:50.602936 kubelet[2130]: I0906 01:20:50.602861 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-config-path\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.602936 kubelet[2130]: I0906 01:20:50.602924 2130 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-lib-modules\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.602936 kubelet[2130]: I0906 01:20:50.602944 2130 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-bpf-maps\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.603355 kubelet[2130]: I0906 01:20:50.602961 2130 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-clustermesh-secrets\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.603355 kubelet[2130]: I0906 01:20:50.602977 2130 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/981648f8-7757-4d8c-bf94-edad33e0ba73-cilium-ipsec-secrets\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.603355 kubelet[2130]: I0906 01:20:50.602993 2130 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-cni-path\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.603355 kubelet[2130]: I0906 01:20:50.603028 2130 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/981648f8-7757-4d8c-bf94-edad33e0ba73-hostproc\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.603355 kubelet[2130]: I0906 01:20:50.603046 2130 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wts8r\" (UniqueName: \"kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-kube-api-access-wts8r\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.603355 kubelet[2130]: I0906 01:20:50.603061 2130 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/981648f8-7757-4d8c-bf94-edad33e0ba73-hubble-tls\") on node \"srv-rd74e.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:20:50.633950 systemd[1]: Started sshd@26-10.230.51.142:22-139.178.89.65:38860.service. Sep 6 01:20:50.792115 systemd[1]: var-lib-kubelet-pods-981648f8\x2d7757\x2d4d8c\x2dbf94\x2dedad33e0ba73-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:20:50.792362 systemd[1]: var-lib-kubelet-pods-981648f8\x2d7757\x2d4d8c\x2dbf94\x2dedad33e0ba73-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 6 01:20:51.379600 kubelet[2130]: I0906 01:20:51.379556 2130 scope.go:117] "RemoveContainer" containerID="c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc" Sep 6 01:20:51.396758 env[1301]: time="2025-09-06T01:20:51.396687517Z" level=info msg="RemoveContainer for \"c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc\"" Sep 6 01:20:51.403209 env[1301]: time="2025-09-06T01:20:51.402120307Z" level=info msg="RemoveContainer for \"c76d4b19f469bd5f421d25cfa744fa4bbf160e054cf510cdee621099dfea3bdc\" returns successfully" Sep 6 01:20:51.403314 kubelet[2130]: I0906 01:20:51.402572 2130 scope.go:117] "RemoveContainer" containerID="0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c" Sep 6 01:20:51.405002 env[1301]: time="2025-09-06T01:20:51.404957282Z" level=info msg="RemoveContainer for \"0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c\"" Sep 6 01:20:51.407648 env[1301]: time="2025-09-06T01:20:51.407590196Z" level=info msg="RemoveContainer for \"0b4dc248bb12f273b21fa36fb0647235d0f580f5d22af45c47bb9e23470b504c\" returns successfully" Sep 6 01:20:51.467088 kubelet[2130]: E0906 01:20:51.467031 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="981648f8-7757-4d8c-bf94-edad33e0ba73" containerName="mount-cgroup" Sep 6 01:20:51.467088 kubelet[2130]: E0906 01:20:51.467070 2130 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="981648f8-7757-4d8c-bf94-edad33e0ba73" containerName="apply-sysctl-overwrites" Sep 6 01:20:51.467088 kubelet[2130]: I0906 01:20:51.467101 2130 memory_manager.go:354] "RemoveStaleState removing state" podUID="981648f8-7757-4d8c-bf94-edad33e0ba73" containerName="apply-sysctl-overwrites" Sep 6 01:20:51.510159 kubelet[2130]: I0906 01:20:51.510110 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-bpf-maps\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510159 kubelet[2130]: I0906 01:20:51.510167 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-lib-modules\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510825 kubelet[2130]: I0906 01:20:51.510215 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-cni-path\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510825 kubelet[2130]: I0906 01:20:51.510243 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-host-proc-sys-kernel\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510825 kubelet[2130]: I0906 01:20:51.510270 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/483deedc-15d4-4519-96a1-ba5aebf80f18-cilium-ipsec-secrets\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " 
pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510825 kubelet[2130]: I0906 01:20:51.510296 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-cilium-run\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510825 kubelet[2130]: I0906 01:20:51.510322 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-cilium-cgroup\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.510825 kubelet[2130]: I0906 01:20:51.510349 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-etc-cni-netd\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511561 kubelet[2130]: I0906 01:20:51.510375 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-xtables-lock\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511561 kubelet[2130]: I0906 01:20:51.510401 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/483deedc-15d4-4519-96a1-ba5aebf80f18-clustermesh-secrets\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511561 kubelet[2130]: I0906 01:20:51.510432 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/483deedc-15d4-4519-96a1-ba5aebf80f18-cilium-config-path\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511561 kubelet[2130]: I0906 01:20:51.510460 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjnmd\" (UniqueName: \"kubernetes.io/projected/483deedc-15d4-4519-96a1-ba5aebf80f18-kube-api-access-gjnmd\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511561 kubelet[2130]: I0906 01:20:51.510484 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/483deedc-15d4-4519-96a1-ba5aebf80f18-hubble-tls\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511561 kubelet[2130]: I0906 01:20:51.510538 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-hostproc\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.511908 kubelet[2130]: I0906 01:20:51.510571 2130 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/483deedc-15d4-4519-96a1-ba5aebf80f18-host-proc-sys-net\") pod \"cilium-vwzzr\" (UID: \"483deedc-15d4-4519-96a1-ba5aebf80f18\") " pod="kube-system/cilium-vwzzr" Sep 6 01:20:51.525265 sshd[4094]: Accepted publickey for core from 139.178.89.65 port 38860 ssh2: RSA SHA256:Sw8a149JbJBnpb9RHpWnXqtnC5gRSoBTmkCiSXsMrm4 Sep 6 01:20:51.527226 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:51.534860 systemd[1]: Started session-27.scope. Sep 6 01:20:51.536176 systemd-logind[1289]: New session 27 of user core. Sep 6 01:20:51.778057 env[1301]: time="2025-09-06T01:20:51.777913373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwzzr,Uid:483deedc-15d4-4519-96a1-ba5aebf80f18,Namespace:kube-system,Attempt:0,}" Sep 6 01:20:51.797801 env[1301]: time="2025-09-06T01:20:51.797576517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:20:51.797801 env[1301]: time="2025-09-06T01:20:51.797624423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:20:51.797801 env[1301]: time="2025-09-06T01:20:51.797641426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:20:51.798366 env[1301]: time="2025-09-06T01:20:51.798250406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa pid=4111 runtime=io.containerd.runc.v2 Sep 6 01:20:51.809164 kubelet[2130]: I0906 01:20:51.808568 2130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981648f8-7757-4d8c-bf94-edad33e0ba73" path="/var/lib/kubelet/pods/981648f8-7757-4d8c-bf94-edad33e0ba73/volumes" Sep 6 01:20:51.864123 env[1301]: time="2025-09-06T01:20:51.864054727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwzzr,Uid:483deedc-15d4-4519-96a1-ba5aebf80f18,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\"" Sep 6 01:20:51.871321 env[1301]: time="2025-09-06T01:20:51.871276857Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:20:51.885538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846691811.mount: Deactivated successfully. 
Sep 6 01:20:51.889910 env[1301]: time="2025-09-06T01:20:51.889857654Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b873c55683b1a6dc82ebe6b4194e5542eeeb691b868565d65efc04a18f7d301\"" Sep 6 01:20:51.890893 env[1301]: time="2025-09-06T01:20:51.890859017Z" level=info msg="StartContainer for \"5b873c55683b1a6dc82ebe6b4194e5542eeeb691b868565d65efc04a18f7d301\"" Sep 6 01:20:51.960510 env[1301]: time="2025-09-06T01:20:51.960462271Z" level=info msg="StartContainer for \"5b873c55683b1a6dc82ebe6b4194e5542eeeb691b868565d65efc04a18f7d301\" returns successfully" Sep 6 01:20:51.970408 kubelet[2130]: E0906 01:20:51.970239 2130 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:20:52.004075 env[1301]: time="2025-09-06T01:20:51.995354206Z" level=info msg="shim disconnected" id=5b873c55683b1a6dc82ebe6b4194e5542eeeb691b868565d65efc04a18f7d301 Sep 6 01:20:52.004075 env[1301]: time="2025-09-06T01:20:51.995418222Z" level=warning msg="cleaning up after shim disconnected" id=5b873c55683b1a6dc82ebe6b4194e5542eeeb691b868565d65efc04a18f7d301 namespace=k8s.io Sep 6 01:20:52.004075 env[1301]: time="2025-09-06T01:20:51.995446425Z" level=info msg="cleaning up dead shim" Sep 6 01:20:52.007355 env[1301]: time="2025-09-06T01:20:52.007300932Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4195 runtime=io.containerd.runc.v2\n" Sep 6 01:20:52.386650 env[1301]: time="2025-09-06T01:20:52.386589814Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:20:52.399789 env[1301]: time="2025-09-06T01:20:52.399701158Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9b0a3a03a5d389df2b71a4ca277a9617b6bd8b308fa4b4f1db4e5a0ad24fb913\"" Sep 6 01:20:52.401398 env[1301]: time="2025-09-06T01:20:52.401364072Z" level=info msg="StartContainer for \"9b0a3a03a5d389df2b71a4ca277a9617b6bd8b308fa4b4f1db4e5a0ad24fb913\"" Sep 6 01:20:52.475775 env[1301]: time="2025-09-06T01:20:52.475521792Z" level=info msg="StartContainer for \"9b0a3a03a5d389df2b71a4ca277a9617b6bd8b308fa4b4f1db4e5a0ad24fb913\" returns successfully" Sep 6 01:20:52.505453 env[1301]: time="2025-09-06T01:20:52.505396476Z" level=info msg="shim disconnected" id=9b0a3a03a5d389df2b71a4ca277a9617b6bd8b308fa4b4f1db4e5a0ad24fb913 Sep 6 01:20:52.505775 env[1301]: time="2025-09-06T01:20:52.505744017Z" level=warning msg="cleaning up after shim disconnected" id=9b0a3a03a5d389df2b71a4ca277a9617b6bd8b308fa4b4f1db4e5a0ad24fb913 namespace=k8s.io Sep 6 01:20:52.505914 env[1301]: time="2025-09-06T01:20:52.505886748Z" level=info msg="cleaning up dead shim" Sep 6 01:20:52.542501 env[1301]: time="2025-09-06T01:20:52.542443320Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4262 runtime=io.containerd.runc.v2\n" Sep 6 01:20:53.394590 env[1301]: time="2025-09-06T01:20:53.394508324Z" level=info msg="CreateContainer within sandbox 
\"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:20:53.429471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281914497.mount: Deactivated successfully. Sep 6 01:20:53.443562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886258315.mount: Deactivated successfully. Sep 6 01:20:53.447747 env[1301]: time="2025-09-06T01:20:53.447686230Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"015cd2c70687d36f168919a0b6a35d99b914b63368459390ed5bc06dd0ed996d\"" Sep 6 01:20:53.450413 env[1301]: time="2025-09-06T01:20:53.448625005Z" level=info msg="StartContainer for \"015cd2c70687d36f168919a0b6a35d99b914b63368459390ed5bc06dd0ed996d\"" Sep 6 01:20:53.517478 env[1301]: time="2025-09-06T01:20:53.517424884Z" level=info msg="StartContainer for \"015cd2c70687d36f168919a0b6a35d99b914b63368459390ed5bc06dd0ed996d\" returns successfully" Sep 6 01:20:53.547861 env[1301]: time="2025-09-06T01:20:53.547804755Z" level=info msg="shim disconnected" id=015cd2c70687d36f168919a0b6a35d99b914b63368459390ed5bc06dd0ed996d Sep 6 01:20:53.548245 env[1301]: time="2025-09-06T01:20:53.548194817Z" level=warning msg="cleaning up after shim disconnected" id=015cd2c70687d36f168919a0b6a35d99b914b63368459390ed5bc06dd0ed996d namespace=k8s.io Sep 6 01:20:53.548375 env[1301]: time="2025-09-06T01:20:53.548348550Z" level=info msg="cleaning up dead shim" Sep 6 01:20:53.558986 env[1301]: time="2025-09-06T01:20:53.558946072Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4320 runtime=io.containerd.runc.v2\n" Sep 6 01:20:54.401672 env[1301]: time="2025-09-06T01:20:54.401470632Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 01:20:54.417172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411158114.mount: Deactivated successfully. Sep 6 01:20:54.430479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052338963.mount: Deactivated successfully. 
Sep 6 01:20:54.432041 env[1301]: time="2025-09-06T01:20:54.431567422Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b1101eae130a5a3eca3e0094530a45349b9ffe6ebd66c932f6f9e52b63726a2\"" Sep 6 01:20:54.438137 env[1301]: time="2025-09-06T01:20:54.438083836Z" level=info msg="StartContainer for \"6b1101eae130a5a3eca3e0094530a45349b9ffe6ebd66c932f6f9e52b63726a2\"" Sep 6 01:20:54.510146 env[1301]: time="2025-09-06T01:20:54.510093022Z" level=info msg="StartContainer for \"6b1101eae130a5a3eca3e0094530a45349b9ffe6ebd66c932f6f9e52b63726a2\" returns successfully" Sep 6 01:20:54.536458 env[1301]: time="2025-09-06T01:20:54.536394326Z" level=info msg="shim disconnected" id=6b1101eae130a5a3eca3e0094530a45349b9ffe6ebd66c932f6f9e52b63726a2 Sep 6 01:20:54.536458 env[1301]: time="2025-09-06T01:20:54.536456805Z" level=warning msg="cleaning up after shim disconnected" id=6b1101eae130a5a3eca3e0094530a45349b9ffe6ebd66c932f6f9e52b63726a2 namespace=k8s.io Sep 6 01:20:54.536793 env[1301]: time="2025-09-06T01:20:54.536473569Z" level=info msg="cleaning up dead shim" Sep 6 01:20:54.547694 env[1301]: time="2025-09-06T01:20:54.547624834Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:20:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4375 runtime=io.containerd.runc.v2\n" Sep 6 01:20:54.621537 kubelet[2130]: I0906 01:20:54.619699 2130 setters.go:600] "Node became not ready" node="srv-rd74e.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T01:20:54Z","lastTransitionTime":"2025-09-06T01:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 01:20:55.403901 env[1301]: time="2025-09-06T01:20:55.403189984Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:20:55.419217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262683338.mount: Deactivated successfully. 
Sep 6 01:20:55.431942 env[1301]: time="2025-09-06T01:20:55.431035959Z" level=info msg="CreateContainer within sandbox \"b7f07bad6a4582e0c3329eddf93f92ec7699f2becfb8f9d5f70ab6f3b7cc9aaa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d\"" Sep 6 01:20:55.436894 env[1301]: time="2025-09-06T01:20:55.436859964Z" level=info msg="StartContainer for \"4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d\"" Sep 6 01:20:55.512883 env[1301]: time="2025-09-06T01:20:55.512757933Z" level=info msg="StartContainer for \"4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d\" returns successfully" Sep 6 01:20:56.226183 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 6 01:20:56.434308 kubelet[2130]: I0906 01:20:56.434164 2130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vwzzr" podStartSLOduration=5.434124941 podStartE2EDuration="5.434124941s" podCreationTimestamp="2025-09-06 01:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:20:56.428158625 +0000 UTC m=+164.867623588" watchObservedRunningTime="2025-09-06 01:20:56.434124941 +0000 UTC m=+164.873589892" Sep 6 01:20:58.300337 systemd[1]: run-containerd-runc-k8s.io-4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d-runc.diIMcw.mount: Deactivated successfully. Sep 6 01:20:59.844722 systemd-networkd[1084]: lxc_health: Link UP Sep 6 01:20:59.860159 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:20:59.860536 systemd-networkd[1084]: lxc_health: Gained carrier Sep 6 01:21:00.541984 systemd[1]: run-containerd-runc-k8s.io-4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d-runc.8Q3zj3.mount: Deactivated successfully. Sep 6 01:21:01.011317 systemd-networkd[1084]: lxc_health: Gained IPv6LL Sep 6 01:21:02.869612 systemd[1]: run-containerd-runc-k8s.io-4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d-runc.4px2VR.mount: Deactivated successfully. Sep 6 01:21:05.101804 systemd[1]: run-containerd-runc-k8s.io-4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d-runc.sly4BX.mount: Deactivated successfully. Sep 6 01:21:07.298978 systemd[1]: run-containerd-runc-k8s.io-4f108770fa3724ddb98052ae112316e2011898b9eeb6aa8be92de0b13860e16d-runc.FzAGu3.mount: Deactivated successfully. Sep 6 01:21:07.521764 sshd[4094]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:07.525851 systemd[1]: sshd@26-10.230.51.142:22-139.178.89.65:38860.service: Deactivated successfully. Sep 6 01:21:07.527000 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 01:21:07.528037 systemd-logind[1289]: Session 27 logged out. Waiting for processes to exit. Sep 6 01:21:07.529539 systemd-logind[1289]: Removed session 27.