Aug 13 04:16:01.980067 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 04:16:01.980114 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 04:16:01.980134 kernel: BIOS-provided physical RAM map: Aug 13 04:16:01.980145 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 04:16:01.980155 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 04:16:01.980165 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 04:16:01.980176 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Aug 13 04:16:01.980187 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Aug 13 04:16:01.980197 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 04:16:01.980207 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 04:16:01.980221 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 04:16:01.980232 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 04:16:01.980242 kernel: NX (Execute Disable) protection: active Aug 13 04:16:01.980252 kernel: SMBIOS 2.8 present. Aug 13 04:16:01.980283 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Aug 13 04:16:01.980295 kernel: Hypervisor detected: KVM Aug 13 04:16:01.980319 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 04:16:01.980330 kernel: kvm-clock: cpu 0, msr 6a19e001, primary cpu clock Aug 13 04:16:01.980341 kernel: kvm-clock: using sched offset of 4839986948 cycles Aug 13 04:16:01.980353 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 04:16:01.980371 kernel: tsc: Detected 2499.998 MHz processor Aug 13 04:16:01.980383 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 04:16:01.980394 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 04:16:01.980405 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Aug 13 04:16:01.980416 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 04:16:01.983495 kernel: Using GB pages for direct mapping Aug 13 04:16:01.983512 kernel: ACPI: Early table checksum verification disabled Aug 13 04:16:01.983524 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Aug 13 04:16:01.983536 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 04:16:01.983547 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 04:16:01.983559 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 04:16:01.983570 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Aug 13 04:16:01.983581 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 04:16:01.983592 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 04:16:01.983610 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Aug 13 04:16:01.983622 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 04:16:01.983633 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Aug 13 04:16:01.983644 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Aug 13 04:16:01.983655 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Aug 13 04:16:01.983666 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Aug 13 04:16:01.983684 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Aug 13 04:16:01.983700 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Aug 13 04:16:01.983711 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Aug 13 04:16:01.983723 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 04:16:01.983735 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 04:16:01.983747 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Aug 13 04:16:01.983759 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Aug 13 04:16:01.983771 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Aug 13 04:16:01.983787 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Aug 13 04:16:01.983799 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Aug 13 04:16:01.983811 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Aug 13 04:16:01.983822 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Aug 13 04:16:01.983834 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Aug 13 04:16:01.983846 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Aug 13 04:16:01.983857 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Aug 13 04:16:01.983869 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Aug 13 04:16:01.983881 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Aug 13 04:16:01.983892 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Aug 13 04:16:01.983908 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Aug 13 04:16:01.983920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 04:16:01.983932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 04:16:01.983944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Aug 13 04:16:01.983956 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Aug 13 04:16:01.983968 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Aug 13 04:16:01.983980 kernel: Zone ranges: Aug 13 04:16:01.983992 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 04:16:01.984017 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Aug 13 04:16:01.984035 kernel: Normal empty Aug 13 04:16:01.984047 kernel: Movable zone start for each node Aug 13 04:16:01.984059 kernel: Early memory node ranges Aug 13 04:16:01.984070 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 04:16:01.984082 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Aug 13 04:16:01.984094 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Aug 13 04:16:01.984105 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 04:16:01.984117 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 04:16:01.984129 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Aug 13 04:16:01.984145 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 04:16:01.984157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 04:16:01.984169 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 04:16:01.984180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 04:16:01.984192 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 04:16:01.984204 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 04:16:01.984215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 04:16:01.984227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 04:16:01.984239 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 04:16:01.984255 kernel: TSC deadline timer available Aug 13 04:16:01.984267 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Aug 13 04:16:01.984278 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 04:16:01.984290 kernel: Booting paravirtualized kernel on KVM Aug 13 04:16:01.984302 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 04:16:01.984314 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Aug 13 04:16:01.984326 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Aug 13 04:16:01.984338 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Aug 13 04:16:01.984349 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Aug 13 04:16:01.984365 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Aug 13 04:16:01.984377 kernel: kvm-guest: PV spinlocks enabled Aug 13 04:16:01.984388 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 04:16:01.984401 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Aug 13 04:16:01.984412 kernel: Policy zone: DMA32 Aug 13 04:16:01.986474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 04:16:01.986491 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 04:16:01.986515 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 04:16:01.986532 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 04:16:01.986544 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 04:16:01.986556 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 192524K reserved, 0K cma-reserved) Aug 13 04:16:01.986567 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Aug 13 04:16:01.986578 kernel: Kernel/User page tables isolation: enabled Aug 13 04:16:01.986589 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 04:16:01.986601 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 04:16:01.986624 kernel: rcu: Hierarchical RCU implementation. Aug 13 04:16:01.986637 kernel: rcu: RCU event tracing is enabled. Aug 13 04:16:01.986653 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Aug 13 04:16:01.986665 kernel: Rude variant of Tasks RCU enabled. Aug 13 04:16:01.986689 kernel: Tracing variant of Tasks RCU enabled. Aug 13 04:16:01.986701 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 04:16:01.986713 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Aug 13 04:16:01.986725 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Aug 13 04:16:01.986737 kernel: random: crng init done Aug 13 04:16:01.986763 kernel: Console: colour VGA+ 80x25 Aug 13 04:16:01.986776 kernel: printk: console [tty0] enabled Aug 13 04:16:01.986788 kernel: printk: console [ttyS0] enabled Aug 13 04:16:01.986801 kernel: ACPI: Core revision 20210730 Aug 13 04:16:01.986814 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 04:16:01.986830 kernel: x2apic enabled Aug 13 04:16:01.986843 kernel: Switched APIC routing to physical x2apic. Aug 13 04:16:01.986855 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Aug 13 04:16:01.986868 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Aug 13 04:16:01.986881 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 04:16:01.986898 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 04:16:01.986911 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 04:16:01.986923 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 04:16:01.986935 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 04:16:01.986948 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 04:16:01.986960 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 04:16:01.986973 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 04:16:01.986985 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 04:16:01.986998 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 04:16:01.987023 kernel: MMIO Stale Data: Unknown: No mitigations Aug 13 04:16:01.987035 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 04:16:01.987053 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 04:16:01.987065 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 04:16:01.987078 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 04:16:01.987090 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 04:16:01.987102 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 04:16:01.987115 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 04:16:01.987127 kernel: Freeing SMP alternatives memory: 32K Aug 13 04:16:01.987139 kernel: pid_max: default: 32768 minimum: 301 Aug 13 04:16:01.987151 kernel: LSM: Security Framework initializing Aug 13 04:16:01.987164 kernel: SELinux: Initializing. Aug 13 04:16:01.987176 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 04:16:01.987193 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 04:16:01.987206 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Aug 13 04:16:01.987218 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Aug 13 04:16:01.987231 kernel: signal: max sigframe size: 1776 Aug 13 04:16:01.987243 kernel: rcu: Hierarchical SRCU implementation. Aug 13 04:16:01.987256 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 04:16:01.987281 kernel: smp: Bringing up secondary CPUs ... 
Aug 13 04:16:01.987293 kernel: x86: Booting SMP configuration: Aug 13 04:16:01.987305 kernel: .... node #0, CPUs: #1 Aug 13 04:16:01.987322 kernel: kvm-clock: cpu 1, msr 6a19e041, secondary cpu clock Aug 13 04:16:01.987347 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 04:16:01.987359 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Aug 13 04:16:01.987372 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 04:16:01.987384 kernel: smpboot: Max logical packages: 16 Aug 13 04:16:01.987397 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Aug 13 04:16:01.987409 kernel: devtmpfs: initialized Aug 13 04:16:01.987422 kernel: x86/mm: Memory block size: 128MB Aug 13 04:16:01.987434 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 04:16:01.987460 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Aug 13 04:16:01.987479 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 04:16:01.987491 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 04:16:01.987504 kernel: audit: initializing netlink subsys (disabled) Aug 13 04:16:01.987516 kernel: audit: type=2000 audit(1755058561.089:1): state=initialized audit_enabled=0 res=1 Aug 13 04:16:01.987529 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 04:16:01.987541 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 04:16:01.987554 kernel: cpuidle: using governor menu Aug 13 04:16:01.987566 kernel: ACPI: bus type PCI registered Aug 13 04:16:01.987579 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 04:16:01.987596 kernel: dca service started, version 1.12.1 Aug 13 04:16:01.987608 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 04:16:01.987621 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Aug 13 04:16:01.987634 kernel: PCI: Using configuration type 1 for base access Aug 13 04:16:01.987646 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 04:16:01.987659 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 04:16:01.987671 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 04:16:01.987684 kernel: ACPI: Added _OSI(Module Device) Aug 13 04:16:01.987700 kernel: ACPI: Added _OSI(Processor Device) Aug 13 04:16:01.987713 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 04:16:01.987726 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 04:16:01.987738 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 04:16:01.987751 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 04:16:01.987764 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 04:16:01.987776 kernel: ACPI: Interpreter enabled Aug 13 04:16:01.987789 kernel: ACPI: PM: (supports S0 S5) Aug 13 04:16:01.987801 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 04:16:01.987814 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 04:16:01.987830 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 04:16:01.987843 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 04:16:01.988118 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 04:16:01.988282 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 04:16:01.991512 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 04:16:01.991545 kernel: PCI host bridge to bus 0000:00 Aug 13 04:16:01.991705 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 04:16:01.991880 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 04:16:01.992039 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 04:16:01.992182 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 04:16:01.992323 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 04:16:01.992493 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Aug 13 04:16:01.992628 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 04:16:01.992812 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 04:16:01.993017 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Aug 13 04:16:01.993189 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Aug 13 04:16:01.993359 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Aug 13 04:16:01.993538 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Aug 13 04:16:01.993707 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 04:16:01.993914 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:01.994117 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Aug 13 04:16:01.994295 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:01.994472 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Aug 13 04:16:01.994641 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:01.994809 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Aug 13 04:16:01.994984 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:01.995166 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Aug 13 04:16:01.995332 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:01.999552 kernel: pci 
0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Aug 13 04:16:01.999737 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:01.999920 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Aug 13 04:16:02.000123 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:02.000294 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Aug 13 04:16:02.000498 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Aug 13 04:16:02.000662 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Aug 13 04:16:02.000829 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 04:16:02.000989 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 13 04:16:02.001165 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Aug 13 04:16:02.001323 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Aug 13 04:16:02.001513 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Aug 13 04:16:02.001706 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Aug 13 04:16:02.001863 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 04:16:02.002033 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Aug 13 04:16:02.002191 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Aug 13 04:16:02.002358 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 04:16:02.002531 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 04:16:02.002713 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 04:16:02.002871 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Aug 13 04:16:02.003042 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Aug 13 04:16:02.003209 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 04:16:02.003364 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 04:16:02.003551 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Aug 13 04:16:02.003725 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Aug 13 04:16:02.003899 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 13 04:16:02.004082 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 13 04:16:02.004243 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 04:16:02.004433 kernel: pci_bus 0000:02: extended config space not accessible Aug 13 04:16:02.004622 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Aug 13 04:16:02.004806 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Aug 13 04:16:02.004972 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 13 04:16:02.005157 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 13 04:16:02.005356 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Aug 13 04:16:02.012629 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Aug 13 04:16:02.012840 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 13 04:16:02.013028 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 13 04:16:02.013201 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 04:16:02.013382 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Aug 13 04:16:02.013572 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Aug 13 04:16:02.013740 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 13 04:16:02.013902 kernel: pci 
0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 13 04:16:02.014075 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 04:16:02.014235 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 13 04:16:02.014393 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 13 04:16:02.014571 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 04:16:02.014733 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 13 04:16:02.014888 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 13 04:16:02.015058 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 04:16:02.015219 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 13 04:16:02.015376 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 13 04:16:02.015547 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 04:16:02.015707 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 13 04:16:02.015870 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 13 04:16:02.016058 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 04:16:02.016217 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 13 04:16:02.016375 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 13 04:16:02.016568 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 04:16:02.016588 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 04:16:02.016602 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 04:16:02.016615 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 04:16:02.016635 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 04:16:02.016648 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 04:16:02.016660 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 04:16:02.016673 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 04:16:02.016686 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 04:16:02.016698 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 04:16:02.016711 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 04:16:02.016724 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 04:16:02.016736 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 04:16:02.016754 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 04:16:02.016766 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 04:16:02.016779 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 04:16:02.016791 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 04:16:02.016804 kernel: iommu: Default domain type: Translated Aug 13 04:16:02.016817 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 04:16:02.017008 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 04:16:02.017172 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 04:16:02.017336 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 04:16:02.017355 kernel: vgaarb: loaded Aug 13 04:16:02.017368 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 04:16:02.017382 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 04:16:02.017395 kernel: PTP clock support registered Aug 13 04:16:02.017407 kernel: PCI: Using ACPI for IRQ routing Aug 13 04:16:02.017431 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 04:16:02.017445 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 04:16:02.017458 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Aug 13 04:16:02.017476 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 04:16:02.017489 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 04:16:02.017503 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 04:16:02.017515 kernel: pnp: PnP ACPI init Aug 13 04:16:02.017702 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 04:16:02.017723 kernel: pnp: PnP ACPI: found 5 devices Aug 13 04:16:02.017737 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 04:16:02.017749 kernel: NET: Registered PF_INET protocol family Aug 13 04:16:02.017768 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 04:16:02.017781 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 04:16:02.017794 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 04:16:02.017807 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 04:16:02.017819 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 04:16:02.017832 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 04:16:02.017845 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 04:16:02.017857 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 04:16:02.017870 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 04:16:02.017887 kernel: NET: Registered PF_XDP protocol family Aug 13 04:16:02.018058 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Aug 13 04:16:02.018218 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Aug 13 04:16:02.018375 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Aug 13 04:16:02.018547 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Aug 13 04:16:02.018707 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 04:16:02.018872 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 04:16:02.019044 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 04:16:02.019203 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 04:16:02.019358 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Aug 13 04:16:02.023649 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Aug 13 04:16:02.023837 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Aug 13 04:16:02.024011 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Aug 13 04:16:02.024182 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Aug 13 04:16:02.024342 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Aug 13 04:16:02.024522 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Aug 13 04:16:02.024680 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Aug 13 
04:16:02.024847 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Aug 13 04:16:02.025020 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Aug 13 04:16:02.025178 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Aug 13 04:16:02.025334 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Aug 13 04:16:02.025524 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Aug 13 04:16:02.025684 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 04:16:02.025846 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Aug 13 04:16:02.026036 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Aug 13 04:16:02.026193 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Aug 13 04:16:02.026348 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 04:16:02.026526 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Aug 13 04:16:02.026695 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Aug 13 04:16:02.026851 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Aug 13 04:16:02.027019 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 04:16:02.027184 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Aug 13 04:16:02.027340 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Aug 13 04:16:02.027511 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Aug 13 04:16:02.027667 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 04:16:02.027822 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Aug 13 04:16:02.027980 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Aug 13 04:16:02.028151 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Aug 13 04:16:02.028314 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 04:16:02.028485 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Aug 13 04:16:02.028644 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Aug 13 04:16:02.028802 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Aug 13 04:16:02.028959 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 04:16:02.029131 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Aug 13 04:16:02.029296 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Aug 13 04:16:02.029480 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Aug 13 04:16:02.029639 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 04:16:02.029804 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Aug 13 04:16:02.029971 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Aug 13 04:16:02.030144 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Aug 13 04:16:02.030311 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 04:16:02.041249 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 04:16:02.041443 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 04:16:02.041592 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 04:16:02.041737 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 04:16:02.041881 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 04:16:02.042043 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Aug 13 04:16:02.042211 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] Aug 13 04:16:02.042373 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Aug 13 04:16:02.042542 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Aug 13 04:16:02.042714 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Aug 13 04:16:02.042877 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Aug 13 04:16:02.043043 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Aug 13 04:16:02.043195 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Aug 13 04:16:02.043355 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Aug 13 04:16:02.043531 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Aug 13 04:16:02.043683 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Aug 13 04:16:02.043874 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Aug 13 04:16:02.044059 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Aug 13 04:16:02.044225 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Aug 13 04:16:02.044399 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Aug 13 04:16:02.044591 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Aug 13 04:16:02.044753 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Aug 13 04:16:02.044935 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Aug 13 04:16:02.045119 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Aug 13 04:16:02.045284 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Aug 13 04:16:02.045478 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Aug 13 04:16:02.045637 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Aug 13 04:16:02.045813 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Aug 13 04:16:02.045995 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Aug 13 04:16:02.046166 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Aug 13 04:16:02.046328 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Aug 13 04:16:02.046349 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 04:16:02.046364 kernel: PCI: CLS 0 bytes, default 64 Aug 13 04:16:02.046377 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 04:16:02.046391 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Aug 13 04:16:02.046411 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 04:16:02.046442 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Aug 13 04:16:02.046457 kernel: Initialise system trusted keyrings Aug 13 04:16:02.046471 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 04:16:02.046484 kernel: Key type asymmetric registered Aug 13 04:16:02.046497 kernel: Asymmetric key parser 'x509' registered Aug 13 04:16:02.046510 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 04:16:02.046524 kernel: io scheduler mq-deadline registered Aug 13 04:16:02.046537 kernel: io scheduler kyber registered Aug 13 04:16:02.046556 kernel: io scheduler bfq registered Aug 13 04:16:02.046731 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Aug 13 04:16:02.046907 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Aug 13 04:16:02.047094 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.047269 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Aug 13 04:16:02.054505 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Aug 13 04:16:02.054703 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.054884 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Aug 13 04:16:02.055065 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Aug 13 04:16:02.055224 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.055393 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Aug 13 04:16:02.055570 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Aug 13 04:16:02.055729 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.055899 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Aug 13 04:16:02.056069 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Aug 13 04:16:02.056226 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.056398 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Aug 13 04:16:02.056572 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Aug 13 04:16:02.056736 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.056906 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Aug 13 04:16:02.057084 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Aug 13 04:16:02.057241 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.057400 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Aug 13 04:16:02.057569 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Aug 13 04:16:02.057737 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 04:16:02.057764 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 04:16:02.057787 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 04:16:02.057801 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 04:16:02.057814 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 04:16:02.057828 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 04:16:02.057848 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 04:16:02.057861 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 04:16:02.057875 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 04:16:02.058064 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 04:16:02.058086 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 04:16:02.058232 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 04:16:02.058391 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T04:16:01 UTC (1755058561) Aug 13 04:16:02.058560 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 04:16:02.058580 kernel: intel_pstate: CPU model not supported Aug 13 04:16:02.058594 kernel: 
NET: Registered PF_INET6 protocol family Aug 13 04:16:02.058613 kernel: Segment Routing with IPv6 Aug 13 04:16:02.058634 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 04:16:02.058648 kernel: NET: Registered PF_PACKET protocol family Aug 13 04:16:02.058661 kernel: Key type dns_resolver registered Aug 13 04:16:02.058675 kernel: IPI shorthand broadcast: enabled Aug 13 04:16:02.058688 kernel: sched_clock: Marking stable (1003613121, 228093480)->(1523982648, -292276047) Aug 13 04:16:02.058702 kernel: registered taskstats version 1 Aug 13 04:16:02.058715 kernel: Loading compiled-in X.509 certificates Aug 13 04:16:02.058728 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 04:16:02.058746 kernel: Key type .fscrypt registered Aug 13 04:16:02.058759 kernel: Key type fscrypt-provisioning registered Aug 13 04:16:02.058772 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 04:16:02.058786 kernel: ima: Allocated hash algorithm: sha1 Aug 13 04:16:02.058800 kernel: ima: No architecture policies found Aug 13 04:16:02.058813 kernel: clk: Disabling unused clocks Aug 13 04:16:02.058826 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 04:16:02.058839 kernel: Write protecting the kernel read-only data: 28672k Aug 13 04:16:02.058852 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 04:16:02.058870 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 04:16:02.058883 kernel: Run /init as init process Aug 13 04:16:02.058901 kernel: with arguments: Aug 13 04:16:02.058914 kernel: /init Aug 13 04:16:02.058927 kernel: with environment: Aug 13 04:16:02.058940 kernel: HOME=/ Aug 13 04:16:02.058953 kernel: TERM=linux Aug 13 04:16:02.058966 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 04:16:02.058991 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 04:16:02.059027 systemd[1]: Detected virtualization kvm. Aug 13 04:16:02.059042 systemd[1]: Detected architecture x86-64. Aug 13 04:16:02.059055 systemd[1]: Running in initrd. Aug 13 04:16:02.059069 systemd[1]: No hostname configured, using default hostname. Aug 13 04:16:02.059083 systemd[1]: Hostname set to . Aug 13 04:16:02.059097 systemd[1]: Initializing machine ID from VM UUID. Aug 13 04:16:02.059111 systemd[1]: Queued start job for default target initrd.target. Aug 13 04:16:02.059129 systemd[1]: Started systemd-ask-password-console.path. Aug 13 04:16:02.059144 systemd[1]: Reached target cryptsetup.target. Aug 13 04:16:02.059158 systemd[1]: Reached target paths.target. Aug 13 04:16:02.059171 systemd[1]: Reached target slices.target. Aug 13 04:16:02.059185 systemd[1]: Reached target swap.target. Aug 13 04:16:02.059199 systemd[1]: Reached target timers.target. Aug 13 04:16:02.059214 systemd[1]: Listening on iscsid.socket. Aug 13 04:16:02.059228 systemd[1]: Listening on iscsiuio.socket. Aug 13 04:16:02.059246 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 04:16:02.059260 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 04:16:02.059274 systemd[1]: Listening on systemd-journald.socket. Aug 13 04:16:02.059288 systemd[1]: Listening on systemd-networkd.socket. 
Aug 13 04:16:02.059302 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 04:16:02.059321 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 04:16:02.059335 systemd[1]: Reached target sockets.target. Aug 13 04:16:02.059349 systemd[1]: Starting kmod-static-nodes.service... Aug 13 04:16:02.059364 systemd[1]: Finished network-cleanup.service. Aug 13 04:16:02.059382 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 04:16:02.059396 systemd[1]: Starting systemd-journald.service... Aug 13 04:16:02.059411 systemd[1]: Starting systemd-modules-load.service... Aug 13 04:16:02.059442 systemd[1]: Starting systemd-resolved.service... Aug 13 04:16:02.059458 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 04:16:02.059472 systemd[1]: Finished kmod-static-nodes.service. Aug 13 04:16:02.059486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 04:16:02.059512 systemd-journald[200]: Journal started Aug 13 04:16:02.059596 systemd-journald[200]: Runtime Journal (/run/log/journal/74cfd39f6f054f99a61ddecc0952e2c2) is 4.7M, max 38.1M, 33.3M free. Aug 13 04:16:01.982163 systemd-modules-load[201]: Inserted module 'overlay' Aug 13 04:16:02.088286 kernel: Bridge firewalling registered Aug 13 04:16:02.088322 systemd[1]: Started systemd-resolved.service. Aug 13 04:16:02.088346 kernel: audit: type=1130 audit(1755058562.080:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.040455 systemd-resolved[202]: Positive Trust Anchors: Aug 13 04:16:02.104528 systemd[1]: Started systemd-journald.service. Aug 13 04:16:02.104565 kernel: audit: type=1130 audit(1755058562.088:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.104587 kernel: SCSI subsystem initialized Aug 13 04:16:02.104605 kernel: audit: type=1130 audit(1755058562.096:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.104633 kernel: audit: type=1130 audit(1755058562.097:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.104651 kernel: audit: type=1130 audit(1755058562.098:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:02.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.040477 systemd-resolved[202]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 04:16:02.124106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 04:16:02.124144 kernel: device-mapper: uevent: version 1.0.3 Aug 13 04:16:02.124163 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 04:16:02.040525 systemd-resolved[202]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 04:16:02.044592 systemd-resolved[202]: Defaulting to hostname 'linux'. Aug 13 04:16:02.065950 systemd-modules-load[201]: Inserted module 'br_netfilter' Aug 13 04:16:02.097491 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 04:16:02.098327 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 04:16:02.099097 systemd[1]: Reached target nss-lookup.target. Aug 13 04:16:02.122156 systemd-modules-load[201]: Inserted module 'dm_multipath' Aug 13 04:16:02.123641 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 04:16:02.148790 kernel: audit: type=1130 audit(1755058562.143:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.128197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 04:16:02.154799 kernel: audit: type=1130 audit(1755058562.149:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.139976 systemd[1]: Finished systemd-modules-load.service. Aug 13 04:16:02.163656 kernel: audit: type=1130 audit(1755058562.155:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:02.143902 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 04:16:02.149681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 04:16:02.156853 systemd[1]: Starting dracut-cmdline.service... Aug 13 04:16:02.163152 systemd[1]: Starting systemd-sysctl.service... Aug 13 04:16:02.173965 dracut-cmdline[221]: dracut-dracut-053 Aug 13 04:16:02.176517 systemd[1]: Finished systemd-sysctl.service. Aug 13 04:16:02.198038 kernel: audit: type=1130 audit(1755058562.191:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.198151 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 04:16:02.276491 kernel: Loading iSCSI transport class v2.0-870. Aug 13 04:16:02.298452 kernel: iscsi: registered transport (tcp) Aug 13 04:16:02.327577 kernel: iscsi: registered transport (qla4xxx) Aug 13 04:16:02.327628 kernel: QLogic iSCSI HBA Driver Aug 13 04:16:02.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.375127 systemd[1]: Finished dracut-cmdline.service. Aug 13 04:16:02.377041 systemd[1]: Starting dracut-pre-udev.service... Aug 13 04:16:02.436482 kernel: raid6: sse2x4 gen() 13385 MB/s Aug 13 04:16:02.454494 kernel: raid6: sse2x4 xor() 7761 MB/s Aug 13 04:16:02.472461 kernel: raid6: sse2x2 gen() 9052 MB/s Aug 13 04:16:02.490511 kernel: raid6: sse2x2 xor() 7919 MB/s Aug 13 04:16:02.508518 kernel: raid6: sse2x1 gen() 9678 MB/s Aug 13 04:16:02.527120 kernel: raid6: sse2x1 xor() 7075 MB/s Aug 13 04:16:02.527168 kernel: raid6: using algorithm sse2x4 gen() 13385 MB/s Aug 13 04:16:02.527187 kernel: raid6: .... xor() 7761 MB/s, rmw enabled Aug 13 04:16:02.528436 kernel: raid6: using ssse3x2 recovery algorithm Aug 13 04:16:02.545454 kernel: xor: automatically using best checksumming function avx Aug 13 04:16:02.662475 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 04:16:02.676767 systemd[1]: Finished dracut-pre-udev.service. Aug 13 04:16:02.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.677000 audit: BPF prog-id=7 op=LOAD Aug 13 04:16:02.677000 audit: BPF prog-id=8 op=LOAD Aug 13 04:16:02.678746 systemd[1]: Starting systemd-udevd.service... Aug 13 04:16:02.696890 systemd-udevd[401]: Using default interface naming scheme 'v252'. Aug 13 04:16:02.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:02.705822 systemd[1]: Started systemd-udevd.service. Aug 13 04:16:02.708453 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 04:16:02.728177 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Aug 13 04:16:02.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.769262 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 04:16:02.771025 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 04:16:02.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:02.864168 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 04:16:02.975657 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 04:16:03.049917 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 04:16:03.049951 kernel: ACPI: bus type USB registered Aug 13 04:16:03.049971 kernel: usbcore: registered new interface driver usbfs Aug 13 04:16:03.050004 kernel: usbcore: registered new interface driver hub Aug 13 04:16:03.050040 kernel: AVX version of gcm_enc/dec engaged. Aug 13 04:16:03.050059 kernel: AES CTR mode by8 optimization enabled Aug 13 04:16:03.050076 kernel: usbcore: registered new device driver usb Aug 13 04:16:03.050092 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 04:16:03.050115 kernel: GPT:17805311 != 125829119 Aug 13 04:16:03.050133 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 04:16:03.050149 kernel: GPT:17805311 != 125829119 Aug 13 04:16:03.050165 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 04:16:03.050181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 04:16:03.050208 kernel: libata version 3.00 loaded. Aug 13 04:16:03.050227 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 13 04:16:03.050444 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Aug 13 04:16:03.050628 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Aug 13 04:16:03.050805 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Aug 13 04:16:03.050999 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Aug 13 04:16:03.051176 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Aug 13 04:16:03.051349 kernel: hub 1-0:1.0: USB hub found Aug 13 04:16:03.051610 kernel: hub 1-0:1.0: 4 ports detected Aug 13 04:16:03.051805 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Aug 13 04:16:03.052096 kernel: hub 2-0:1.0: USB hub found Aug 13 04:16:03.052303 kernel: hub 2-0:1.0: 4 ports detected Aug 13 04:16:03.082298 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 04:16:03.115774 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (447) Aug 13 04:16:03.115812 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 04:16:03.115832 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 04:16:03.116059 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 04:16:03.116233 kernel: scsi host0: ahci Aug 13 04:16:03.116486 kernel: scsi host1: ahci Aug 13 04:16:03.116675 kernel: scsi host2: ahci Aug 13 04:16:03.116860 kernel: scsi host3: ahci Aug 13 04:16:03.117064 kernel: scsi host4: ahci Aug 13 04:16:03.117245 kernel: scsi host5: ahci Aug 13 04:16:03.117464 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Aug 13 04:16:03.117485 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Aug 13 04:16:03.117501 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Aug 13 04:16:03.117516 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Aug 13 04:16:03.117532 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Aug 13 04:16:03.117554 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Aug 13 04:16:03.102929 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 04:16:03.210473 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 04:16:03.211331 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 04:16:03.221320 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 04:16:03.226829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 04:16:03.229009 systemd[1]: Starting disk-uuid.service... Aug 13 04:16:03.236828 disk-uuid[534]: Primary Header is updated. Aug 13 04:16:03.236828 disk-uuid[534]: Secondary Entries is updated. Aug 13 04:16:03.236828 disk-uuid[534]: Secondary Header is updated. Aug 13 04:16:03.242465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 04:16:03.249378 kernel: GPT:disk_guids don't match. Aug 13 04:16:03.249415 kernel: GPT: Use GNU Parted to correct GPT errors. 
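The device units systemd reports just above (dev-disk-by\x2dlabel-ROOT.device and friends) are ordinary udev paths run through systemd's unit-name escaping, where "/" becomes "-" and bytes outside the allowed set are written as \xNN; that is why every literal "-" in by-label or a PARTUUID shows up as \x2d. A small sketch of that path-escaping rule follows; it is illustrative rather than systemd's own code (systemd-escape --path does this for real and also handles non-ASCII bytes).

    # Sketch of the path-to-unit-name escaping documented in systemd.unit(5) and
    # systemd-escape(1): strip slashes, map "/" to "-", keep alphanumerics plus
    # ":", "_" and a non-leading ".", hex-escape everything else. ASCII-only
    # sketch; the real implementation escapes raw UTF-8 bytes.
    ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                  "0123456789:_.")

    def escape_path(path):
        trimmed = path.strip("/")
        if not trimmed:
            return "-"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch in ALLOWED and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    # Matches the units logged above:
    print(escape_path("/dev/disk/by-label/ROOT") + ".device")
    # -> dev-disk-by\x2dlabel-ROOT.device
    print(escape_path("/dev/disk/by-partlabel/USR-A") + ".device")
    # -> dev-disk-by\x2dpartlabel-USR\x2dA.device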
Aug 13 04:16:03.249448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 04:16:03.287676 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Aug 13 04:16:03.427559 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 04:16:03.427665 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 04:16:03.430722 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 04:16:03.433690 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 04:16:03.433732 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 04:16:03.433779 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 04:16:03.436234 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 04:16:03.450754 kernel: usbcore: registered new interface driver usbhid Aug 13 04:16:03.450809 kernel: usbhid: USB HID core driver Aug 13 04:16:03.461000 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Aug 13 04:16:03.461041 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Aug 13 04:16:04.256810 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 04:16:04.256920 disk-uuid[535]: The operation has completed successfully. Aug 13 04:16:04.317694 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 04:16:04.317849 systemd[1]: Finished disk-uuid.service. Aug 13 04:16:04.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.323600 systemd[1]: Starting verity-setup.service... Aug 13 04:16:04.343465 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Aug 13 04:16:04.396847 systemd[1]: Found device dev-mapper-usr.device. Aug 13 04:16:04.400128 systemd[1]: Mounting sysusr-usr.mount... Aug 13 04:16:04.401990 systemd[1]: Finished verity-setup.service. Aug 13 04:16:04.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.498467 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 04:16:04.499053 systemd[1]: Mounted sysusr-usr.mount. Aug 13 04:16:04.500617 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 04:16:04.502528 systemd[1]: Starting ignition-setup.service... Aug 13 04:16:04.504065 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 04:16:04.522467 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 04:16:04.522516 kernel: BTRFS info (device vda6): using free space tree Aug 13 04:16:04.522535 kernel: BTRFS info (device vda6): has skinny extents Aug 13 04:16:04.538935 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 04:16:04.546542 systemd[1]: Finished ignition-setup.service. Aug 13 04:16:04.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:04.548317 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 04:16:04.666577 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 04:16:04.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.668000 audit: BPF prog-id=9 op=LOAD Aug 13 04:16:04.669581 systemd[1]: Starting systemd-networkd.service... Aug 13 04:16:04.710241 systemd-networkd[711]: lo: Link UP Aug 13 04:16:04.710268 systemd-networkd[711]: lo: Gained carrier Aug 13 04:16:04.711272 systemd-networkd[711]: Enumeration completed Aug 13 04:16:04.711666 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 04:16:04.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.713517 systemd-networkd[711]: eth0: Link UP Aug 13 04:16:04.713524 systemd-networkd[711]: eth0: Gained carrier Aug 13 04:16:04.714216 systemd[1]: Started systemd-networkd.service. Aug 13 04:16:04.715117 systemd[1]: Reached target network.target. Aug 13 04:16:04.732962 systemd[1]: Starting iscsiuio.service... Aug 13 04:16:04.733811 ignition[625]: Ignition 2.14.0 Aug 13 04:16:04.733834 ignition[625]: Stage: fetch-offline Aug 13 04:16:04.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.740256 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 04:16:04.733946 ignition[625]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:04.742323 systemd[1]: Starting ignition-fetch.service... Aug 13 04:16:04.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.734027 ignition[625]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:04.743811 systemd[1]: Started iscsiuio.service. Aug 13 04:16:04.735319 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:04.747079 systemd[1]: Starting iscsid.service... Aug 13 04:16:04.735511 ignition[625]: parsed url from cmdline: "" Aug 13 04:16:04.735519 ignition[625]: no config URL provided Aug 13 04:16:04.735530 ignition[625]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 04:16:04.735547 ignition[625]: no config at "/usr/lib/ignition/user.ign" Aug 13 04:16:04.755002 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 04:16:04.755002 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 04:16:04.755002 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 13 04:16:04.755002 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 04:16:04.755002 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 04:16:04.755002 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 04:16:04.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.735560 ignition[625]: failed to fetch config: resource requires networking Aug 13 04:16:04.755995 systemd[1]: Started iscsid.service. Aug 13 04:16:04.735886 ignition[625]: Ignition finished successfully Aug 13 04:16:04.761894 systemd[1]: Starting dracut-initqueue.service... Aug 13 04:16:04.763133 ignition[716]: Ignition 2.14.0 Aug 13 04:16:04.763145 ignition[716]: Stage: fetch Aug 13 04:16:04.763365 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:04.763401 ignition[716]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:04.771583 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:04.771749 ignition[716]: parsed url from cmdline: "" Aug 13 04:16:04.776805 systemd[1]: Finished dracut-initqueue.service. Aug 13 04:16:04.771760 ignition[716]: no config URL provided Aug 13 04:16:04.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.771771 ignition[716]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 04:16:04.779817 systemd[1]: Reached target remote-fs-pre.target. Aug 13 04:16:04.771788 ignition[716]: no config at "/usr/lib/ignition/user.ign" Aug 13 04:16:04.781370 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 04:16:04.778011 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Aug 13 04:16:04.783570 systemd[1]: Reached target remote-fs.target. Aug 13 04:16:04.778057 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Aug 13 04:16:04.784629 systemd-networkd[711]: eth0: DHCPv4 address 10.244.14.178/30, gateway 10.244.14.177 acquired from 10.244.14.177 Aug 13 04:16:04.778108 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Aug 13 04:16:04.788109 systemd[1]: Starting dracut-pre-mount.service... Aug 13 04:16:04.782775 ignition[716]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 04:16:04.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:04.800657 systemd[1]: Finished dracut-pre-mount.service. 
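iscsid's warning above also spells out the remedy: the initramfs simply has no /etc/iscsi/initiatorname.iscsi, which only matters if software iSCSI is actually in use. Purely as an illustration (not something this boot performs), here is a tiny sketch of producing that file in the InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] form the message describes; the domain, date and identifier below are made-up placeholders.

    # Illustrative only: build /etc/iscsi/initiatorname.iscsi in the format that
    # iscsid's warning describes. Domain, date and identifier are placeholders.
    import socket

    def initiator_name(domain="example.com", year_month="2001-04", identifier=None):
        reversed_domain = ".".join(reversed(domain.split(".")))
        ident = identifier or socket.gethostname()
        return "iqn.%s.%s:%s" % (year_month, reversed_domain, ident)

    def write_initiatorname(path="/etc/iscsi/initiatorname.iscsi"):
        with open(path, "w") as f:
            f.write("InitiatorName=%s\n" % initiator_name())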
Aug 13 04:16:04.983645 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Aug 13 04:16:05.001258 ignition[716]: GET result: OK Aug 13 04:16:05.001913 ignition[716]: parsing config with SHA512: 55007f43b203a2b193e2042302dfc1d4dfc4d07ecc80be35ced2f9ecfb0fa2499c41de43d15da9699dc28fd2769ad0e3388522b8c3d3b2c35eadd1d71792cab4 Aug 13 04:16:05.012538 unknown[716]: fetched base config from "system" Aug 13 04:16:05.012565 unknown[716]: fetched base config from "system" Aug 13 04:16:05.013303 ignition[716]: fetch: fetch complete Aug 13 04:16:05.012594 unknown[716]: fetched user config from "openstack" Aug 13 04:16:05.013311 ignition[716]: fetch: fetch passed Aug 13 04:16:05.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.014997 systemd[1]: Finished ignition-fetch.service. Aug 13 04:16:05.013374 ignition[716]: Ignition finished successfully Aug 13 04:16:05.017493 systemd[1]: Starting ignition-kargs.service... Aug 13 04:16:05.031202 ignition[737]: Ignition 2.14.0 Aug 13 04:16:05.032209 ignition[737]: Stage: kargs Aug 13 04:16:05.033121 ignition[737]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:05.034124 ignition[737]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:05.035495 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:05.038092 ignition[737]: kargs: kargs passed Aug 13 04:16:05.038856 ignition[737]: Ignition finished successfully Aug 13 04:16:05.040435 systemd[1]: Finished ignition-kargs.service. Aug 13 04:16:05.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.042301 systemd[1]: Starting ignition-disks.service... Aug 13 04:16:05.052495 ignition[742]: Ignition 2.14.0 Aug 13 04:16:05.053543 ignition[742]: Stage: disks Aug 13 04:16:05.054388 ignition[742]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:05.055389 ignition[742]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:05.056722 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:05.059391 ignition[742]: disks: disks passed Aug 13 04:16:05.060155 ignition[742]: Ignition finished successfully Aug 13 04:16:05.061891 systemd[1]: Finished ignition-disks.service. Aug 13 04:16:05.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.062781 systemd[1]: Reached target initrd-root-device.target. Aug 13 04:16:05.063881 systemd[1]: Reached target local-fs-pre.target. Aug 13 04:16:05.065142 systemd[1]: Reached target local-fs.target. Aug 13 04:16:05.066390 systemd[1]: Reached target sysinit.target. Aug 13 04:16:05.067585 systemd[1]: Reached target basic.target. Aug 13 04:16:05.069981 systemd[1]: Starting systemd-fsck-root.service... 
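The fetch stage above shows why Ignition retries: attempt #1 against the OpenStack metadata service failed with "network is unreachable" because it ran just before eth0 had its DHCPv4 address, and attempt #2 succeeded once the lease was in place. Below is a minimal sketch of such a retry loop against the same endpoint; the timeout, delay and attempt count are arbitrary illustrative values, not Ignition's real backoff policy.

    # Minimal retry sketch for the metadata fetch logged above. The attempt
    # count, delay and timeout are arbitrary; Ignition's real behaviour differs.
    import time
    import urllib.error
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url=USER_DATA_URL, attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print("GET %s: attempt #%d failed: %s" % (url, attempt, err))
                time.sleep(delay)
        raise RuntimeError("no user_data after %d attempts" % attempts)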
Aug 13 04:16:05.089547 systemd-fsck[749]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks Aug 13 04:16:05.095048 systemd[1]: Finished systemd-fsck-root.service. Aug 13 04:16:05.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.096900 systemd[1]: Mounting sysroot.mount... Aug 13 04:16:05.109455 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 04:16:05.110166 systemd[1]: Mounted sysroot.mount. Aug 13 04:16:05.111000 systemd[1]: Reached target initrd-root-fs.target. Aug 13 04:16:05.113647 systemd[1]: Mounting sysroot-usr.mount... Aug 13 04:16:05.114844 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 04:16:05.115740 systemd[1]: Starting flatcar-openstack-hostname.service... Aug 13 04:16:05.116493 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 04:16:05.116550 systemd[1]: Reached target ignition-diskful.target. Aug 13 04:16:05.121493 systemd[1]: Mounted sysroot-usr.mount. Aug 13 04:16:05.124461 systemd[1]: Starting initrd-setup-root.service... Aug 13 04:16:05.132456 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 04:16:05.149900 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Aug 13 04:16:05.158064 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 04:16:05.167634 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 04:16:05.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.223310 systemd[1]: Finished initrd-setup-root.service. Aug 13 04:16:05.225380 systemd[1]: Starting ignition-mount.service... Aug 13 04:16:05.228798 systemd[1]: Starting sysroot-boot.service... Aug 13 04:16:05.240663 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 04:16:05.256284 ignition[804]: INFO : Ignition 2.14.0 Aug 13 04:16:05.256284 ignition[804]: INFO : Stage: mount Aug 13 04:16:05.257994 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:05.257994 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:05.257994 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:05.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.262326 ignition[804]: INFO : mount: mount passed Aug 13 04:16:05.262326 ignition[804]: INFO : Ignition finished successfully Aug 13 04:16:05.260703 systemd[1]: Finished ignition-mount.service. Aug 13 04:16:05.274018 coreos-metadata[755]: Aug 13 04:16:05.273 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 13 04:16:05.279235 systemd[1]: Finished sysroot-boot.service. 
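The coreos-metadata helper started above for flatcar-openstack-hostname.service has one job: fetch the instance hostname from the metadata service and, as the next lines show, write it to /sysroot/etc/hostname so the real root carries it after switch-root. A compact, illustrative sketch of that step:

    # Sketch of the hostname step logged around here: fetch the instance
    # hostname and write it under /sysroot for the real root to pick up.
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def write_sysroot_hostname(sysroot="/sysroot"):
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(sysroot + "/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname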
Aug 13 04:16:05.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.303434 coreos-metadata[755]: Aug 13 04:16:05.303 INFO Fetch successful Aug 13 04:16:05.318411 coreos-metadata[755]: Aug 13 04:16:05.304 INFO wrote hostname srv-h1d3j.gb1.brightbox.com to /sysroot/etc/hostname Aug 13 04:16:05.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:05.311091 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Aug 13 04:16:05.311292 systemd[1]: Finished flatcar-openstack-hostname.service. Aug 13 04:16:05.421982 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 04:16:05.434469 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (812) Aug 13 04:16:05.438966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 04:16:05.439005 kernel: BTRFS info (device vda6): using free space tree Aug 13 04:16:05.439044 kernel: BTRFS info (device vda6): has skinny extents Aug 13 04:16:05.445963 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 04:16:05.447735 systemd[1]: Starting ignition-files.service... Aug 13 04:16:05.469104 ignition[832]: INFO : Ignition 2.14.0 Aug 13 04:16:05.469104 ignition[832]: INFO : Stage: files Aug 13 04:16:05.470971 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:05.470971 ignition[832]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:05.470971 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:05.476762 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Aug 13 04:16:05.477738 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 04:16:05.477738 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 04:16:05.481956 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 04:16:05.483004 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 04:16:05.483923 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 04:16:05.483117 unknown[832]: wrote ssh authorized keys file for user: core Aug 13 04:16:05.486855 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 04:16:05.486855 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 04:16:05.486855 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 04:16:05.486855 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 04:16:05.684188 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 04:16:05.921839 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 04:16:05.923498 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 04:16:05.923498 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 04:16:05.988102 systemd-networkd[711]: eth0: Gained IPv6LL Aug 13 04:16:06.195893 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Aug 13 04:16:06.686048 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 04:16:06.687855 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Aug 13 04:16:06.688965 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 04:16:06.688965 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 04:16:06.691143 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 04:16:06.978109 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Aug 13 04:16:07.499074 systemd-networkd[711]: eth0: Ignoring DHCPv6 
address 2a02:1348:17d:3ac:24:19ff:fef4:eb2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3ac:24:19ff:fef4:eb2/64 assigned by NDisc. Aug 13 04:16:07.499088 systemd-networkd[711]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 04:16:09.067049 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 04:16:09.067049 ignition[832]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Aug 13 04:16:09.067049 ignition[832]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Aug 13 04:16:09.067049 ignition[832]: INFO : files: op(e): [started] processing unit "containerd.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(e): [finished] processing unit "containerd.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Aug 13 04:16:09.071948 ignition[832]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 04:16:09.095489 kernel: kauditd_printk_skb: 28 callbacks suppressed Aug 13 04:16:09.095541 kernel: audit: type=1130 audit(1755058569.083:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.080348 systemd[1]: Finished ignition-files.service. 
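Stripped of the operation numbering, the files stage above is sequenced filesystem work under /sysroot: plain files (install.sh, the YAML manifests, /etc/flatcar/update.conf), a symlink plus sysext image for Kubernetes, a drop-in for containerd.service, a full unit for prepare-helm.service, and preset entries that leave two units enabled on first boot. The sketch below shows the three main kinds of writes; the file and drop-in contents and the preset file name are placeholders, since the log records only the paths, not the payloads.

    # Illustrative sketch of the kinds of writes Ignition's "files" stage logs
    # above: a plain file, a systemd drop-in, and preset entries that enable
    # units. Contents and the preset file name are placeholders, not from the log.
    import os

    SYSROOT = "/sysroot"

    def write_file(path, contents, mode=0o644):
        full = SYSROOT + path
        os.makedirs(os.path.dirname(full), exist_ok=True)
        with open(full, "w") as f:
            f.write(contents)
        os.chmod(full, mode)

    if __name__ == "__main__":
        # Plain file, e.g. /etc/flatcar/update.conf (contents are a placeholder).
        write_file("/etc/flatcar/update.conf", "REBOOT_STRATEGY=off\n")

        # Drop-in for containerd.service (payload not shown in the log).
        write_file("/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf",
                   "[Service]\n# cgroupfs-related overrides would go here\n")

        # Preset entries so the listed units come up enabled on first boot.
        write_file("/etc/systemd/system-preset/20-ignition.preset",
                   "enable prepare-helm.service\n"
                   "enable coreos-metadata-sshkeys@.service\n")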
Aug 13 04:16:09.096555 ignition[832]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 04:16:09.096555 ignition[832]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 04:16:09.096555 ignition[832]: INFO : files: files passed Aug 13 04:16:09.096555 ignition[832]: INFO : Ignition finished successfully Aug 13 04:16:09.085525 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 04:16:09.113625 kernel: audit: type=1130 audit(1755058569.101:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.113688 kernel: audit: type=1131 audit(1755058569.101:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.094624 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 04:16:09.095842 systemd[1]: Starting ignition-quench.service... Aug 13 04:16:09.116069 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 04:16:09.122833 kernel: audit: type=1130 audit(1755058569.116:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.100534 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 04:16:09.100678 systemd[1]: Finished ignition-quench.service. Aug 13 04:16:09.115386 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 04:16:09.117017 systemd[1]: Reached target ignition-complete.target. Aug 13 04:16:09.124471 systemd[1]: Starting initrd-parse-etc.service... Aug 13 04:16:09.143543 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 04:16:09.143690 systemd[1]: Finished initrd-parse-etc.service. Aug 13 04:16:09.169699 kernel: audit: type=1130 audit(1755058569.158:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.169733 kernel: audit: type=1131 audit(1755058569.158:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 04:16:09.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.159512 systemd[1]: Reached target initrd-fs.target. Aug 13 04:16:09.170288 systemd[1]: Reached target initrd.target. Aug 13 04:16:09.171577 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 04:16:09.173375 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 04:16:09.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.191516 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 04:16:09.197983 kernel: audit: type=1130 audit(1755058569.191:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.198411 systemd[1]: Starting initrd-cleanup.service... Aug 13 04:16:09.211005 systemd[1]: Stopped target nss-lookup.target. Aug 13 04:16:09.212528 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 04:16:09.214065 systemd[1]: Stopped target timers.target. Aug 13 04:16:09.215489 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 04:16:09.216440 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 04:16:09.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.218038 systemd[1]: Stopped target initrd.target. Aug 13 04:16:09.223573 kernel: audit: type=1131 audit(1755058569.217:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.224234 systemd[1]: Stopped target basic.target. Aug 13 04:16:09.225057 systemd[1]: Stopped target ignition-complete.target. Aug 13 04:16:09.226286 systemd[1]: Stopped target ignition-diskful.target. Aug 13 04:16:09.227580 systemd[1]: Stopped target initrd-root-device.target. Aug 13 04:16:09.228841 systemd[1]: Stopped target remote-fs.target. Aug 13 04:16:09.230067 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 04:16:09.231273 systemd[1]: Stopped target sysinit.target. Aug 13 04:16:09.232521 systemd[1]: Stopped target local-fs.target. Aug 13 04:16:09.233709 systemd[1]: Stopped target local-fs-pre.target. Aug 13 04:16:09.234999 systemd[1]: Stopped target swap.target. Aug 13 04:16:09.236116 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 04:16:09.242589 kernel: audit: type=1131 audit(1755058569.236:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.236343 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 04:16:09.237523 systemd[1]: Stopped target cryptsetup.target. 
Aug 13 04:16:09.249805 kernel: audit: type=1131 audit(1755058569.244:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.243309 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 04:16:09.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.243551 systemd[1]: Stopped dracut-initqueue.service. Aug 13 04:16:09.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.244698 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 04:16:09.244936 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 04:16:09.250748 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 04:16:09.250983 systemd[1]: Stopped ignition-files.service. Aug 13 04:16:09.253464 systemd[1]: Stopping ignition-mount.service... Aug 13 04:16:09.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.271026 ignition[870]: INFO : Ignition 2.14.0 Aug 13 04:16:09.271026 ignition[870]: INFO : Stage: umount Aug 13 04:16:09.271026 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 04:16:09.271026 ignition[870]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Aug 13 04:16:09.271026 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 04:16:09.271026 ignition[870]: INFO : umount: umount passed Aug 13 04:16:09.271026 ignition[870]: INFO : Ignition finished successfully Aug 13 04:16:09.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.257289 systemd[1]: Stopping iscsid.service... Aug 13 04:16:09.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:09.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.285809 iscsid[718]: iscsid shutting down. Aug 13 04:16:09.257887 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 04:16:09.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.258075 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 04:16:09.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.264928 systemd[1]: Stopping sysroot-boot.service... Aug 13 04:16:09.266180 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 04:16:09.266385 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 04:16:09.271623 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 04:16:09.271791 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 04:16:09.278163 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 04:16:09.278313 systemd[1]: Stopped iscsid.service. Aug 13 04:16:09.280454 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 04:16:09.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.280584 systemd[1]: Stopped ignition-mount.service. Aug 13 04:16:09.283102 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 04:16:09.283215 systemd[1]: Stopped ignition-disks.service. Aug 13 04:16:09.284202 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 04:16:09.284265 systemd[1]: Stopped ignition-kargs.service. Aug 13 04:16:09.284984 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 04:16:09.285048 systemd[1]: Stopped ignition-fetch.service. Aug 13 04:16:09.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.288165 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 04:16:09.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.288233 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 04:16:09.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.289195 systemd[1]: Stopped target paths.target. Aug 13 04:16:09.289791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 04:16:09.293517 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 04:16:09.294264 systemd[1]: Stopped target slices.target. 
Aug 13 04:16:09.294886 systemd[1]: Stopped target sockets.target. Aug 13 04:16:09.296265 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 04:16:09.296336 systemd[1]: Closed iscsid.socket. Aug 13 04:16:09.297458 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 04:16:09.297526 systemd[1]: Stopped ignition-setup.service. Aug 13 04:16:09.299968 systemd[1]: Stopping iscsiuio.service... Aug 13 04:16:09.304558 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 04:16:09.305304 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 04:16:09.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.305452 systemd[1]: Stopped iscsiuio.service. Aug 13 04:16:09.306936 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 04:16:09.307060 systemd[1]: Finished initrd-cleanup.service. Aug 13 04:16:09.308148 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 04:16:09.308273 systemd[1]: Stopped sysroot-boot.service. Aug 13 04:16:09.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.310376 systemd[1]: Stopped target network.target. Aug 13 04:16:09.316901 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 04:16:09.316970 systemd[1]: Closed iscsiuio.socket. Aug 13 04:16:09.318051 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 04:16:09.318129 systemd[1]: Stopped initrd-setup-root.service. Aug 13 04:16:09.319632 systemd[1]: Stopping systemd-networkd.service... Aug 13 04:16:09.321738 systemd[1]: Stopping systemd-resolved.service... Aug 13 04:16:09.324472 systemd-networkd[711]: eth0: DHCPv6 lease lost Aug 13 04:16:09.335000 audit: BPF prog-id=9 op=UNLOAD Aug 13 04:16:09.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.326036 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 04:16:09.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.326195 systemd[1]: Stopped systemd-networkd.service. Aug 13 04:16:09.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.328325 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 04:16:09.328395 systemd[1]: Closed systemd-networkd.socket. Aug 13 04:16:09.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.330892 systemd[1]: Stopping network-cleanup.service... Aug 13 04:16:09.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.335446 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Aug 13 04:16:09.335552 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 04:16:09.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.367000 audit: BPF prog-id=6 op=UNLOAD Aug 13 04:16:09.336921 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 04:16:09.336993 systemd[1]: Stopped systemd-sysctl.service. Aug 13 04:16:09.338525 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 04:16:09.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.338595 systemd[1]: Stopped systemd-modules-load.service. Aug 13 04:16:09.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.339709 systemd[1]: Stopping systemd-udevd.service... Aug 13 04:16:09.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.347100 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 04:16:09.347893 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 04:16:09.348044 systemd[1]: Stopped systemd-resolved.service. Aug 13 04:16:09.364114 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 04:16:09.365222 systemd[1]: Stopped systemd-udevd.service. Aug 13 04:16:09.366682 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 04:16:09.366826 systemd[1]: Stopped network-cleanup.service. Aug 13 04:16:09.368490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 04:16:09.368560 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 04:16:09.369344 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 04:16:09.369395 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 04:16:09.370660 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 04:16:09.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.370725 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 04:16:09.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:09.371889 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 04:16:09.371950 systemd[1]: Stopped dracut-cmdline.service. Aug 13 04:16:09.373260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 04:16:09.373319 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 04:16:09.375522 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Aug 13 04:16:09.385629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 04:16:09.385714 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 04:16:09.387681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 04:16:09.387828 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 04:16:09.388696 systemd[1]: Reached target initrd-switch-root.target. Aug 13 04:16:09.390755 systemd[1]: Starting initrd-switch-root.service... Aug 13 04:16:09.401380 systemd[1]: Switching root. Aug 13 04:16:09.403000 audit: BPF prog-id=8 op=UNLOAD Aug 13 04:16:09.403000 audit: BPF prog-id=7 op=UNLOAD Aug 13 04:16:09.407000 audit: BPF prog-id=5 op=UNLOAD Aug 13 04:16:09.407000 audit: BPF prog-id=4 op=UNLOAD Aug 13 04:16:09.407000 audit: BPF prog-id=3 op=UNLOAD Aug 13 04:16:09.426764 systemd-journald[200]: Journal stopped Aug 13 04:16:13.568224 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Aug 13 04:16:13.568391 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 04:16:13.568455 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 04:16:13.568484 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 04:16:13.568518 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 04:16:13.568548 kernel: SELinux: policy capability open_perms=1 Aug 13 04:16:13.568585 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 04:16:13.568613 kernel: SELinux: policy capability always_check_network=0 Aug 13 04:16:13.568643 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 04:16:13.568669 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 04:16:13.568696 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 04:16:13.568722 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 04:16:13.568760 systemd[1]: Successfully loaded SELinux policy in 60.304ms. Aug 13 04:16:13.568829 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.305ms. Aug 13 04:16:13.568869 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 04:16:13.568893 systemd[1]: Detected virtualization kvm. Aug 13 04:16:13.568929 systemd[1]: Detected architecture x86-64. Aug 13 04:16:13.568951 systemd[1]: Detected first boot. Aug 13 04:16:13.568978 systemd[1]: Hostname set to <srv-h1d3j.gb1.brightbox.com>. Aug 13 04:16:13.569000 systemd[1]: Initializing machine ID from VM UUID. Aug 13 04:16:13.569026 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 04:16:13.569054 systemd[1]: Populated /etc with preset unit settings. Aug 13 04:16:13.569108 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 04:16:13.569146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 04:16:13.569184 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
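On the first boot of the new root, systemd notes above that it initializes the machine ID from the VM UUID instead of generating a random one. The sketch below illustrates the underlying idea on KVM, reading the DMI product UUID and normalizing it to machine-id form; systemd's real logic (machine-id(5), systemd-machine-id-setup(1)) consults more sources and has additional rules, so treat this strictly as an illustration.

    # Illustration of deriving a machine-id-style value from the VM UUID that
    # KVM exposes via DMI. systemd's actual behaviour covers more cases; see
    # machine-id(5) and systemd-machine-id-setup(1).
    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        mid = uuid.replace("-", "").lower()
        if len(mid) != 32 or any(c not in "0123456789abcdef" for c in mid):
            raise ValueError("unexpected DMI product UUID: %r" % uuid)
        return mid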
Aug 13 04:16:13.573201 systemd[1]: Queued start job for default target multi-user.target. Aug 13 04:16:13.573232 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 04:16:13.573256 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 04:16:13.573296 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 04:16:13.573320 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Aug 13 04:16:13.573352 systemd[1]: Created slice system-getty.slice. Aug 13 04:16:13.573374 systemd[1]: Created slice system-modprobe.slice. Aug 13 04:16:13.573402 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 04:16:13.573454 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 04:16:13.573487 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 04:16:13.573524 systemd[1]: Created slice user.slice. Aug 13 04:16:13.573554 systemd[1]: Started systemd-ask-password-console.path. Aug 13 04:16:13.573582 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 04:16:13.573617 systemd[1]: Set up automount boot.automount. Aug 13 04:16:13.573639 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 04:16:13.573667 systemd[1]: Reached target integritysetup.target. Aug 13 04:16:13.573696 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 04:16:13.573727 systemd[1]: Reached target remote-fs.target. Aug 13 04:16:13.573755 systemd[1]: Reached target slices.target. Aug 13 04:16:13.573788 systemd[1]: Reached target swap.target. Aug 13 04:16:13.573826 systemd[1]: Reached target torcx.target. Aug 13 04:16:13.573856 systemd[1]: Reached target veritysetup.target. Aug 13 04:16:13.573878 systemd[1]: Listening on systemd-coredump.socket. Aug 13 04:16:13.573905 systemd[1]: Listening on systemd-initctl.socket. Aug 13 04:16:13.573933 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 04:16:13.573955 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 04:16:13.573982 systemd[1]: Listening on systemd-journald.socket. Aug 13 04:16:13.574003 systemd[1]: Listening on systemd-networkd.socket. Aug 13 04:16:13.574030 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 04:16:13.574065 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 04:16:13.574093 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 04:16:13.574124 systemd[1]: Mounting dev-hugepages.mount... Aug 13 04:16:13.574153 systemd[1]: Mounting dev-mqueue.mount... Aug 13 04:16:13.574175 systemd[1]: Mounting media.mount... Aug 13 04:16:13.574207 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 04:16:13.574234 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 04:16:13.574256 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 04:16:13.574283 systemd[1]: Mounting tmp.mount... Aug 13 04:16:13.574322 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 04:16:13.574352 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 04:16:13.574374 systemd[1]: Starting kmod-static-nodes.service... Aug 13 04:16:13.574394 systemd[1]: Starting modprobe@configfs.service... Aug 13 04:16:13.574416 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 04:16:13.574451 systemd[1]: Starting modprobe@drm.service... Aug 13 04:16:13.574472 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 04:16:13.574493 systemd[1]: Starting modprobe@fuse.service... Aug 13 04:16:13.574526 systemd[1]: Starting modprobe@loop.service... 
Aug 13 04:16:13.574562 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 04:16:13.574586 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 04:16:13.574614 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 04:16:13.574642 systemd[1]: Starting systemd-journald.service... Aug 13 04:16:13.574670 systemd[1]: Starting systemd-modules-load.service... Aug 13 04:16:13.574692 systemd[1]: Starting systemd-network-generator.service... Aug 13 04:16:13.574714 systemd[1]: Starting systemd-remount-fs.service... Aug 13 04:16:13.574741 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 04:16:13.574777 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 04:16:13.574816 systemd[1]: Mounted dev-hugepages.mount. Aug 13 04:16:13.574838 systemd[1]: Mounted dev-mqueue.mount. Aug 13 04:16:13.574866 systemd[1]: Mounted media.mount. Aug 13 04:16:13.574893 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 04:16:13.574923 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 04:16:13.574945 systemd[1]: Mounted tmp.mount. Aug 13 04:16:13.574975 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 04:16:13.575003 systemd[1]: Finished kmod-static-nodes.service. Aug 13 04:16:13.575038 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 04:16:13.575061 systemd[1]: Finished modprobe@configfs.service. Aug 13 04:16:13.575081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 04:16:13.575112 systemd-journald[1020]: Journal started Aug 13 04:16:13.575210 systemd-journald[1020]: Runtime Journal (/run/log/journal/74cfd39f6f054f99a61ddecc0952e2c2) is 4.7M, max 38.1M, 33.3M free. Aug 13 04:16:13.342000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 04:16:13.562000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 04:16:13.562000 audit[1020]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdd0022ac0 a2=4000 a3=7ffdd0022b5c items=0 ppid=1 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 04:16:13.562000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 04:16:13.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:13.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.578639 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 04:16:13.583722 systemd[1]: Started systemd-journald.service. Aug 13 04:16:13.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.585494 kernel: fuse: init (API version 7.34) Aug 13 04:16:13.583976 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 04:16:13.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.585812 systemd[1]: Finished modprobe@drm.service. Aug 13 04:16:13.587049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 04:16:13.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.587770 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 04:16:13.588972 systemd[1]: Finished systemd-modules-load.service. Aug 13 04:16:13.590054 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 04:16:13.590543 systemd[1]: Finished modprobe@fuse.service. Aug 13 04:16:13.594559 systemd[1]: Finished systemd-network-generator.service. 
Aug 13 04:16:13.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.595753 systemd[1]: Finished systemd-remount-fs.service. Aug 13 04:16:13.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.596969 systemd[1]: Reached target network-pre.target. Aug 13 04:16:13.598469 kernel: loop: module loaded Aug 13 04:16:13.601804 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 04:16:13.609281 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 04:16:13.613681 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 04:16:13.619721 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 04:16:13.622098 systemd[1]: Starting systemd-journal-flush.service... Aug 13 04:16:13.622876 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 04:16:13.624643 systemd[1]: Starting systemd-random-seed.service... Aug 13 04:16:13.627451 systemd[1]: Starting systemd-sysctl.service... Aug 13 04:16:13.639334 systemd[1]: Starting systemd-sysusers.service... Aug 13 04:16:13.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.646851 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 04:16:13.649688 systemd[1]: Finished modprobe@loop.service. Aug 13 04:16:13.650722 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 04:16:13.656826 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 04:16:13.657116 systemd-journald[1020]: Time spent on flushing to /var/log/journal/74cfd39f6f054f99a61ddecc0952e2c2 is 99.946ms for 1235 entries. Aug 13 04:16:13.657116 systemd-journald[1020]: System Journal (/var/log/journal/74cfd39f6f054f99a61ddecc0952e2c2) is 8.0M, max 584.8M, 576.8M free. Aug 13 04:16:13.775658 systemd-journald[1020]: Received client request to flush runtime journal. Aug 13 04:16:13.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:13.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.660695 systemd[1]: Finished systemd-random-seed.service. Aug 13 04:16:13.662888 systemd[1]: Reached target first-boot-complete.target. Aug 13 04:16:13.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.663973 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 04:16:13.685654 systemd[1]: Finished systemd-sysctl.service. Aug 13 04:16:13.710172 systemd[1]: Finished systemd-sysusers.service. Aug 13 04:16:13.713246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 04:16:13.773145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 04:16:13.776717 systemd[1]: Finished systemd-journal-flush.service. Aug 13 04:16:13.792991 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 04:16:13.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:13.795612 systemd[1]: Starting systemd-udev-settle.service... Aug 13 04:16:13.807280 udevadm[1065]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 04:16:14.338657 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 04:16:14.346606 kernel: kauditd_printk_skb: 77 callbacks suppressed Aug 13 04:16:14.346693 kernel: audit: type=1130 audit(1755058574.339:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.341342 systemd[1]: Starting systemd-udevd.service... Aug 13 04:16:14.372732 systemd-udevd[1067]: Using default interface naming scheme 'v252'. Aug 13 04:16:14.404255 systemd[1]: Started systemd-udevd.service. Aug 13 04:16:14.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.410351 systemd[1]: Starting systemd-networkd.service... Aug 13 04:16:14.413466 kernel: audit: type=1130 audit(1755058574.406:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.430319 systemd[1]: Starting systemd-userdbd.service... Aug 13 04:16:14.487272 systemd[1]: Found device dev-ttyS0.device. Aug 13 04:16:14.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:14.498906 systemd[1]: Started systemd-userdbd.service. Aug 13 04:16:14.505447 kernel: audit: type=1130 audit(1755058574.499:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.642260 systemd-networkd[1069]: lo: Link UP Aug 13 04:16:14.642274 systemd-networkd[1069]: lo: Gained carrier Aug 13 04:16:14.643165 systemd-networkd[1069]: Enumeration completed Aug 13 04:16:14.643319 systemd[1]: Started systemd-networkd.service. Aug 13 04:16:14.645524 systemd-networkd[1069]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 04:16:14.650483 kernel: audit: type=1130 audit(1755058574.643:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.652252 systemd-networkd[1069]: eth0: Link UP Aug 13 04:16:14.652264 systemd-networkd[1069]: eth0: Gained carrier Aug 13 04:16:14.655462 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 04:16:14.661452 kernel: ACPI: button: Power Button [PWRF] Aug 13 04:16:14.672444 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 04:16:14.674689 systemd-networkd[1069]: eth0: DHCPv4 address 10.244.14.178/30, gateway 10.244.14.177 acquired from 10.244.14.177 Aug 13 04:16:14.685308 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Aug 13 04:16:14.721000 audit[1078]: AVC avc: denied { confidentiality } for pid=1078 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 04:16:14.735488 kernel: audit: type=1400 audit(1755058574.721:121): avc: denied { confidentiality } for pid=1078 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 04:16:14.721000 audit[1078]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5568dfc1e100 a1=338ac a2=7f4be1ed3bc5 a3=5 items=110 ppid=1067 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 04:16:14.746566 kernel: audit: type=1300 audit(1755058574.721:121): arch=c000003e syscall=175 success=yes exit=0 a0=5568dfc1e100 a1=338ac a2=7f4be1ed3bc5 a3=5 items=110 ppid=1067 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 04:16:14.721000 audit: CWD cwd="/" Aug 13 04:16:14.721000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.754236 kernel: audit: type=1307 audit(1755058574.721:121): cwd="/" Aug 13 04:16:14.754305 kernel: audit: type=1302 audit(1755058574.721:121): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=1 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.762500 kernel: audit: type=1302 audit(1755058574.721:121): item=1 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=2 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.770442 kernel: audit: type=1302 audit(1755058574.721:121): item=2 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=3 name=(null) inode=15614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=4 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=5 name=(null) inode=15615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=6 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=7 name=(null) inode=15616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=8 name=(null) inode=15616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=9 name=(null) inode=15617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=10 name=(null) inode=15616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=11 name=(null) inode=15618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=12 name=(null) inode=15616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=13 name=(null) inode=15619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=14 name=(null) inode=15616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=15 name=(null) inode=15620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=16 name=(null) inode=15616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=17 name=(null) inode=15621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=18 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=19 name=(null) inode=15622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=20 name=(null) inode=15622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=21 name=(null) inode=15623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=22 name=(null) inode=15622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=23 name=(null) inode=15624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=24 name=(null) inode=15622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=25 name=(null) inode=15625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=26 name=(null) inode=15622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=27 name=(null) inode=15626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=28 name=(null) inode=15622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=29 name=(null) inode=15627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=30 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=31 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=32 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=33 name=(null) inode=15629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=34 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=35 name=(null) inode=15630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=36 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=37 name=(null) inode=15631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=38 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH 
item=39 name=(null) inode=15632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=40 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=41 name=(null) inode=15633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=42 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=43 name=(null) inode=15634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=44 name=(null) inode=15634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=45 name=(null) inode=15635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=46 name=(null) inode=15634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=47 name=(null) inode=15636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=48 name=(null) inode=15634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=49 name=(null) inode=15637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=50 name=(null) inode=15634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=51 name=(null) inode=15638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=52 name=(null) inode=15634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=53 name=(null) inode=15639 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=55 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=56 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=57 name=(null) inode=15641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=58 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=59 name=(null) inode=15642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=60 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=61 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=62 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=63 name=(null) inode=15644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=64 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=65 name=(null) inode=15645 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=66 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=67 name=(null) inode=15646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=68 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=69 name=(null) inode=15647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=70 name=(null) inode=15643 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=71 name=(null) inode=15648 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=72 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=73 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=74 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=75 name=(null) inode=15650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=76 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=77 name=(null) inode=15651 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=78 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=79 name=(null) inode=15652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=80 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=81 name=(null) inode=15653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=82 name=(null) inode=15649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=83 name=(null) inode=15654 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=84 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=85 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=86 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=87 name=(null) inode=15656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=88 
name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=89 name=(null) inode=15657 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=90 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=91 name=(null) inode=15658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=92 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=93 name=(null) inode=15659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=94 name=(null) inode=15655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=95 name=(null) inode=15660 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=96 name=(null) inode=15640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=97 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=98 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=99 name=(null) inode=15662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=100 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=101 name=(null) inode=15663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=102 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=103 name=(null) inode=15664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=104 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=105 name=(null) inode=15665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=106 name=(null) inode=15661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=107 name=(null) inode=15666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PATH item=109 name=(null) inode=15667 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 04:16:14.721000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 04:16:14.800450 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Aug 13 04:16:14.809453 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 04:16:14.837730 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 04:16:14.838010 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 04:16:14.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:14.989651 systemd[1]: Finished systemd-udev-settle.service. Aug 13 04:16:14.992607 systemd[1]: Starting lvm2-activation-early.service... Aug 13 04:16:15.019301 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 04:16:15.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.053278 systemd[1]: Finished lvm2-activation-early.service. Aug 13 04:16:15.054196 systemd[1]: Reached target cryptsetup.target. Aug 13 04:16:15.056813 systemd[1]: Starting lvm2-activation.service... Aug 13 04:16:15.064241 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 04:16:15.092050 systemd[1]: Finished lvm2-activation.service. Aug 13 04:16:15.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.092981 systemd[1]: Reached target local-fs-pre.target. Aug 13 04:16:15.093683 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 04:16:15.093735 systemd[1]: Reached target local-fs.target. Aug 13 04:16:15.094365 systemd[1]: Reached target machines.target. Aug 13 04:16:15.097066 systemd[1]: Starting ldconfig.service... Aug 13 04:16:15.099007 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 04:16:15.099104 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:15.100856 systemd[1]: Starting systemd-boot-update.service... Aug 13 04:16:15.103286 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 04:16:15.106191 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 04:16:15.115578 systemd[1]: Starting systemd-sysext.service... Aug 13 04:16:15.119717 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1102 (bootctl) Aug 13 04:16:15.121596 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 04:16:15.140989 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 04:16:15.148065 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 04:16:15.148457 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 04:16:15.261474 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 04:16:15.282100 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 04:16:15.284185 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 04:16:15.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.297391 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 04:16:15.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.310459 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 04:16:15.331459 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 04:16:15.347977 (sd-sysext)[1119]: Using extensions 'kubernetes'. Aug 13 04:16:15.350312 (sd-sysext)[1119]: Merged extensions into '/usr'. Aug 13 04:16:15.378599 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 04:16:15.382651 systemd[1]: Mounting usr-share-oem.mount... Aug 13 04:16:15.384049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.386740 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 04:16:15.389081 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 04:16:15.394466 systemd[1]: Starting modprobe@loop.service... Aug 13 04:16:15.395215 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.395467 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:15.395700 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 04:16:15.398687 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31) Aug 13 04:16:15.398687 systemd-fsck[1115]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 04:16:15.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 04:16:15.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.402136 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 04:16:15.406004 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 04:16:15.408398 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 04:16:15.410586 systemd[1]: Finished modprobe@loop.service. Aug 13 04:16:15.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.419125 systemd[1]: Mounted usr-share-oem.mount. Aug 13 04:16:15.420531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 04:16:15.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.420850 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 04:16:15.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.427986 systemd[1]: Finished systemd-sysext.service. Aug 13 04:16:15.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.430390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 04:16:15.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.435459 systemd[1]: Mounting boot.mount... Aug 13 04:16:15.441409 systemd[1]: Starting ensure-sysext.service... Aug 13 04:16:15.449126 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 04:16:15.449248 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.451898 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 04:16:15.463405 systemd[1]: Reloading. Aug 13 04:16:15.476029 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 04:16:15.480451 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 04:16:15.489926 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 04:16:15.625689 /usr/lib/systemd/system-generators/torcx-generator[1160]: time="2025-08-13T04:16:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 04:16:15.626953 /usr/lib/systemd/system-generators/torcx-generator[1160]: time="2025-08-13T04:16:15Z" level=info msg="torcx already run" Aug 13 04:16:15.688466 ldconfig[1101]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 04:16:15.771810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 04:16:15.771850 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 04:16:15.802538 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 04:16:15.897077 systemd[1]: Finished ldconfig.service. Aug 13 04:16:15.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.900797 systemd[1]: Mounted boot.mount. Aug 13 04:16:15.917373 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.919884 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 04:16:15.922718 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 04:16:15.925563 systemd[1]: Starting modprobe@loop.service... Aug 13 04:16:15.926516 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.927013 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:15.929204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 04:16:15.929685 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 04:16:15.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.936614 systemd[1]: Finished systemd-boot-update.service. Aug 13 04:16:15.937901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 04:16:15.938118 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 13 04:16:15.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.941282 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 04:16:15.944282 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.946976 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 04:16:15.968695 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 04:16:15.969586 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.969866 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:15.971568 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 04:16:15.971837 systemd[1]: Finished modprobe@loop.service. Aug 13 04:16:15.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.973209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 04:16:15.973461 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 04:16:15.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:15.974686 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.979384 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.981294 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 04:16:15.984811 systemd[1]: Starting modprobe@drm.service... Aug 13 04:16:15.990775 systemd[1]: Starting modprobe@loop.service... Aug 13 04:16:15.994681 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 04:16:15.994927 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:15.997737 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 04:16:16.001051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 04:16:16.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.001326 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 04:16:16.002742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 04:16:16.002977 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 04:16:16.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.005215 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 04:16:16.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.005560 systemd[1]: Finished modprobe@drm.service. Aug 13 04:16:16.006931 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 04:16:16.008607 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 04:16:16.008851 systemd[1]: Finished modprobe@loop.service. Aug 13 04:16:16.010018 systemd[1]: Finished ensure-sysext.service. Aug 13 04:16:16.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.012107 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 04:16:16.102401 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 04:16:16.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.105185 systemd[1]: Starting audit-rules.service... Aug 13 04:16:16.107883 systemd[1]: Starting clean-ca-certificates.service... 
Aug 13 04:16:16.110640 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 04:16:16.114056 systemd[1]: Starting systemd-resolved.service... Aug 13 04:16:16.124361 systemd[1]: Starting systemd-timesyncd.service... Aug 13 04:16:16.127150 systemd[1]: Starting systemd-update-utmp.service... Aug 13 04:16:16.133282 systemd[1]: Finished clean-ca-certificates.service. Aug 13 04:16:16.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.136960 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 04:16:16.146489 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 04:16:16.146000 audit[1244]: SYSTEM_BOOT pid=1244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.146573 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 04:16:16.151281 systemd[1]: Finished systemd-update-utmp.service. Aug 13 04:16:16.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.174468 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 04:16:16.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.177267 systemd[1]: Starting systemd-update-done.service... Aug 13 04:16:16.191253 systemd[1]: Finished systemd-update-done.service. Aug 13 04:16:16.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 04:16:16.202543 augenrules[1258]: No rules Aug 13 04:16:16.201000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 04:16:16.201000 audit[1258]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1b29f750 a2=420 a3=0 items=0 ppid=1233 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 04:16:16.201000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 04:16:16.203990 systemd[1]: Finished audit-rules.service. Aug 13 04:16:16.262836 systemd[1]: Started systemd-timesyncd.service. Aug 13 04:16:16.263810 systemd[1]: Reached target time-set.target. Aug 13 04:16:16.267609 systemd-resolved[1237]: Positive Trust Anchors: Aug 13 04:16:16.267635 systemd-resolved[1237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 04:16:16.267673 systemd-resolved[1237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 04:16:16.275546 systemd-resolved[1237]: Using system hostname 'srv-h1d3j.gb1.brightbox.com'. Aug 13 04:16:16.278268 systemd[1]: Started systemd-resolved.service. Aug 13 04:16:16.279065 systemd[1]: Reached target network.target. Aug 13 04:16:16.279713 systemd[1]: Reached target nss-lookup.target. Aug 13 04:16:16.280360 systemd[1]: Reached target sysinit.target. Aug 13 04:16:16.281099 systemd[1]: Started motdgen.path. Aug 13 04:16:16.281756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 04:16:16.282702 systemd[1]: Started logrotate.timer. Aug 13 04:16:16.283443 systemd[1]: Started mdadm.timer. Aug 13 04:16:16.284015 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 04:16:16.285243 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 04:16:16.285314 systemd[1]: Reached target paths.target. Aug 13 04:16:16.286108 systemd[1]: Reached target timers.target. Aug 13 04:16:16.287298 systemd[1]: Listening on dbus.socket. Aug 13 04:16:16.290048 systemd[1]: Starting docker.socket... Aug 13 04:16:16.293392 systemd[1]: Listening on sshd.socket. Aug 13 04:16:16.294122 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:16.294601 systemd[1]: Listening on docker.socket. Aug 13 04:16:16.295269 systemd[1]: Reached target sockets.target. Aug 13 04:16:16.296065 systemd[1]: Reached target basic.target. Aug 13 04:16:16.296905 systemd[1]: System is tainted: cgroupsv1 Aug 13 04:16:16.296972 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 04:16:16.297014 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 04:16:16.298795 systemd[1]: Starting containerd.service... Aug 13 04:16:16.300883 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Aug 13 04:16:16.303700 systemd[1]: Starting dbus.service... Aug 13 04:16:16.307027 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 04:16:16.311668 systemd[1]: Starting extend-filesystems.service... Aug 13 04:16:16.318894 jq[1271]: false Aug 13 04:16:16.319280 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 04:16:16.321967 systemd[1]: Starting motdgen.service... Aug 13 04:16:16.326136 systemd[1]: Starting prepare-helm.service... Aug 13 04:16:16.330563 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 04:16:16.334071 systemd[1]: Starting sshd-keygen.service... Aug 13 04:16:16.345857 systemd[1]: Starting systemd-logind.service... 
Aug 13 04:16:16.346704 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 04:16:16.346936 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 04:16:16.349593 systemd[1]: Starting update-engine.service... Aug 13 04:16:16.357296 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 04:16:16.362310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 04:16:16.365013 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 04:16:16.366879 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 04:16:16.367243 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 04:16:16.408334 jq[1286]: true Aug 13 04:16:16.409752 jq[1296]: true Aug 13 04:16:16.423264 tar[1291]: linux-amd64/helm Aug 13 04:16:16.413221 systemd[1]: Started dbus.service. Aug 13 04:16:16.412958 dbus-daemon[1270]: [system] SELinux support is enabled Aug 13 04:16:16.417146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 04:16:16.417190 systemd[1]: Reached target system-config.target. Aug 13 04:16:16.417901 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 04:16:16.417929 systemd[1]: Reached target user-config.target. Aug 13 04:16:16.420177 systemd-networkd[1069]: eth0: Gained IPv6LL Aug 13 04:16:16.422666 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 04:16:16.423451 systemd[1]: Reached target network-online.target. Aug 13 04:16:16.426628 systemd[1]: Starting kubelet.service... Aug 13 04:16:16.451059 dbus-daemon[1270]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1069 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 04:16:16.456934 systemd[1]: Starting systemd-hostnamed.service... Aug 13 04:16:16.484547 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 04:16:16.484934 systemd[1]: Finished motdgen.service. Aug 13 04:16:16.506942 extend-filesystems[1272]: Found loop1 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda1 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda2 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda3 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found usr Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda4 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda6 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda7 Aug 13 04:16:16.509572 extend-filesystems[1272]: Found vda9 Aug 13 04:16:16.509572 extend-filesystems[1272]: Checking size of /dev/vda9 Aug 13 04:16:16.546070 systemd[1]: Started update-engine.service. Aug 13 04:16:16.553751 update_engine[1283]: I0813 04:16:16.541110 1283 main.cc:92] Flatcar Update Engine starting Aug 13 04:16:16.553751 update_engine[1283]: I0813 04:16:16.553576 1283 update_check_scheduler.cc:74] Next update check in 9m21s Aug 13 04:16:16.550059 systemd[1]: Started locksmithd.service. 
Aug 13 04:16:16.560983 extend-filesystems[1272]: Resized partition /dev/vda9 Aug 13 04:16:16.568298 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 04:16:16.578330 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Aug 13 04:16:16.621886 env[1293]: time="2025-08-13T04:16:16.621770728Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 04:16:16.649961 bash[1336]: Updated "/home/core/.ssh/authorized_keys" Aug 13 04:16:16.650867 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 04:16:17.684404 systemd-resolved[1237]: Clock change detected. Flushing caches. Aug 13 04:16:17.684888 systemd-timesyncd[1243]: Contacted time server 185.57.191.229:123 (0.flatcar.pool.ntp.org). Aug 13 04:16:17.685175 systemd-timesyncd[1243]: Initial clock synchronization to Wed 2025-08-13 04:16:17.684333 UTC. Aug 13 04:16:17.764553 env[1293]: time="2025-08-13T04:16:17.764449882Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 04:16:17.765041 env[1293]: time="2025-08-13T04:16:17.765008362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 04:16:17.767612 env[1293]: time="2025-08-13T04:16:17.767568744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 04:16:17.767748 env[1293]: time="2025-08-13T04:16:17.767716945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 04:16:17.768226 env[1293]: time="2025-08-13T04:16:17.768189930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 04:16:17.768348 env[1293]: time="2025-08-13T04:16:17.768317799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 04:16:17.768529 env[1293]: time="2025-08-13T04:16:17.768443689Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 04:16:17.768654 env[1293]: time="2025-08-13T04:16:17.768624554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 04:16:17.768904 env[1293]: time="2025-08-13T04:16:17.768874557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 04:16:17.770750 env[1293]: time="2025-08-13T04:16:17.770716894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 04:16:17.773799 env[1293]: time="2025-08-13T04:16:17.773740853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 04:16:17.775156 env[1293]: time="2025-08-13T04:16:17.775123397Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 13 04:16:17.775390 env[1293]: time="2025-08-13T04:16:17.775341252Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 04:16:17.775544 env[1293]: time="2025-08-13T04:16:17.775514337Z" level=info msg="metadata content store policy set" policy=shared Aug 13 04:16:17.792240 systemd-logind[1282]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 04:16:17.796128 systemd-logind[1282]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 04:16:17.799547 systemd-logind[1282]: New seat seat0. Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.801743417Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.801820134Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.801847182Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.801939745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802051910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802086627Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802108608Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802138750Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802161363Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802186198Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802221845Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802256161Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802434864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 04:16:17.802752 env[1293]: time="2025-08-13T04:16:17.802619987Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 04:16:17.803826 env[1293]: time="2025-08-13T04:16:17.803793376Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 04:16:17.804036 env[1293]: time="2025-08-13T04:16:17.804004043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Aug 13 04:16:17.804532 env[1293]: time="2025-08-13T04:16:17.804499339Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 04:16:17.804740 env[1293]: time="2025-08-13T04:16:17.804709071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.804883 env[1293]: time="2025-08-13T04:16:17.804851209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.805021 env[1293]: time="2025-08-13T04:16:17.804990853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.805156 env[1293]: time="2025-08-13T04:16:17.805126129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.805318 env[1293]: time="2025-08-13T04:16:17.805288324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.805444 env[1293]: time="2025-08-13T04:16:17.805415562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.805626 env[1293]: time="2025-08-13T04:16:17.805596607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.805727618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.805763169Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.805994990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806022442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806042736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806061901Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806085422Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806103104Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806156338Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 04:16:17.807487 env[1293]: time="2025-08-13T04:16:17.806233285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 04:16:17.807946 env[1293]: time="2025-08-13T04:16:17.806528061Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 04:16:17.807946 env[1293]: time="2025-08-13T04:16:17.806636412Z" level=info msg="Connect containerd service" Aug 13 04:16:17.807946 env[1293]: time="2025-08-13T04:16:17.806710984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 04:16:17.816818 env[1293]: time="2025-08-13T04:16:17.816758943Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 04:16:17.818949 systemd[1]: Started systemd-logind.service. Aug 13 04:16:17.819479 env[1293]: time="2025-08-13T04:16:17.819420897Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 04:16:17.819667 env[1293]: time="2025-08-13T04:16:17.819637753Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 04:16:17.821805 env[1293]: time="2025-08-13T04:16:17.821762229Z" level=info msg="containerd successfully booted in 0.191609s" Aug 13 04:16:17.822257 systemd[1]: Started containerd.service. 
Aug 13 04:16:17.849494 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 04:16:17.869920 env[1293]: time="2025-08-13T04:16:17.821621973Z" level=info msg="Start subscribing containerd event" Aug 13 04:16:17.869920 env[1293]: time="2025-08-13T04:16:17.851688159Z" level=info msg="Start recovering state" Aug 13 04:16:17.869920 env[1293]: time="2025-08-13T04:16:17.851855628Z" level=info msg="Start event monitor" Aug 13 04:16:17.869920 env[1293]: time="2025-08-13T04:16:17.851899621Z" level=info msg="Start snapshots syncer" Aug 13 04:16:17.869920 env[1293]: time="2025-08-13T04:16:17.851925905Z" level=info msg="Start cni network conf syncer for default" Aug 13 04:16:17.869920 env[1293]: time="2025-08-13T04:16:17.851941193Z" level=info msg="Start streaming server" Aug 13 04:16:17.873188 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 04:16:17.873188 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 04:16:17.873188 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 04:16:17.876655 extend-filesystems[1272]: Resized filesystem in /dev/vda9 Aug 13 04:16:17.874371 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 04:16:17.874876 systemd[1]: Finished extend-filesystems.service. Aug 13 04:16:17.895108 dbus-daemon[1270]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 04:16:17.895323 systemd[1]: Started systemd-hostnamed.service. Aug 13 04:16:17.897160 dbus-daemon[1270]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1312 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 04:16:17.901929 systemd[1]: Starting polkit.service... Aug 13 04:16:17.925247 polkitd[1347]: Started polkitd version 121 Aug 13 04:16:17.974630 polkitd[1347]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 04:16:17.975035 polkitd[1347]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 04:16:17.976662 polkitd[1347]: Finished loading, compiling and executing 2 rules Aug 13 04:16:17.977888 dbus-daemon[1270]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 04:16:17.978119 systemd[1]: Started polkit.service. Aug 13 04:16:17.978547 polkitd[1347]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 04:16:18.001959 systemd-hostnamed[1312]: Hostname set to (static) Aug 13 04:16:18.016557 systemd-networkd[1069]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:3ac:24:19ff:fef4:eb2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:3ac:24:19ff:fef4:eb2/64 assigned by NDisc. Aug 13 04:16:18.016570 systemd-networkd[1069]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 04:16:18.258686 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 04:16:18.620536 tar[1291]: linux-amd64/LICENSE Aug 13 04:16:18.621649 tar[1291]: linux-amd64/README.md Aug 13 04:16:18.633510 systemd[1]: Finished prepare-helm.service. Aug 13 04:16:19.153663 systemd[1]: Started kubelet.service. Aug 13 04:16:19.386587 sshd_keygen[1307]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 04:16:19.417082 systemd[1]: Finished sshd-keygen.service. Aug 13 04:16:19.420603 systemd[1]: Starting issuegen.service... 
Aug 13 04:16:19.430300 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 04:16:19.430656 systemd[1]: Finished issuegen.service. Aug 13 04:16:19.433764 systemd[1]: Starting systemd-user-sessions.service... Aug 13 04:16:19.445145 systemd[1]: Finished systemd-user-sessions.service. Aug 13 04:16:19.448045 systemd[1]: Started getty@tty1.service. Aug 13 04:16:19.450984 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 04:16:19.452905 systemd[1]: Reached target getty.target. Aug 13 04:16:19.861787 kubelet[1367]: E0813 04:16:19.861615 1367 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 04:16:19.864123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 04:16:19.864410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 04:16:24.619918 coreos-metadata[1268]: Aug 13 04:16:24.619 WARN failed to locate config-drive, using the metadata service API instead Aug 13 04:16:24.676711 coreos-metadata[1268]: Aug 13 04:16:24.676 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Aug 13 04:16:24.706715 coreos-metadata[1268]: Aug 13 04:16:24.706 INFO Fetch successful Aug 13 04:16:24.707049 coreos-metadata[1268]: Aug 13 04:16:24.706 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 04:16:24.734836 coreos-metadata[1268]: Aug 13 04:16:24.734 INFO Fetch successful Aug 13 04:16:24.737555 unknown[1268]: wrote ssh authorized keys file for user: core Aug 13 04:16:24.750919 update-ssh-keys[1395]: Updated "/home/core/.ssh/authorized_keys" Aug 13 04:16:24.751593 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 04:16:24.752127 systemd[1]: Reached target multi-user.target. Aug 13 04:16:24.754630 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 04:16:24.766204 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 04:16:24.766571 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 04:16:24.767179 systemd[1]: Startup finished in 9.075s (kernel) + 14.166s (userspace) = 23.241s. Aug 13 04:16:27.027985 systemd[1]: Created slice system-sshd.slice. Aug 13 04:16:27.030194 systemd[1]: Started sshd@0-10.244.14.178:22-139.178.89.65:40092.service. Aug 13 04:16:27.954948 sshd[1400]: Accepted publickey for core from 139.178.89.65 port 40092 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:16:27.959545 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:16:27.977520 systemd[1]: Created slice user-500.slice. Aug 13 04:16:27.979395 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 04:16:27.986376 systemd-logind[1282]: New session 1 of user core. Aug 13 04:16:27.994717 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 04:16:27.996721 systemd[1]: Starting user@500.service... Aug 13 04:16:28.006496 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:16:28.114289 systemd[1405]: Queued start job for default target default.target. Aug 13 04:16:28.115894 systemd[1405]: Reached target paths.target. Aug 13 04:16:28.116106 systemd[1405]: Reached target sockets.target. 
Aug 13 04:16:28.116263 systemd[1405]: Reached target timers.target. Aug 13 04:16:28.116430 systemd[1405]: Reached target basic.target. Aug 13 04:16:28.116794 systemd[1]: Started user@500.service. Aug 13 04:16:28.118291 systemd[1]: Started session-1.scope. Aug 13 04:16:28.118572 systemd[1405]: Reached target default.target. Aug 13 04:16:28.118660 systemd[1405]: Startup finished in 103ms. Aug 13 04:16:28.774102 systemd[1]: Started sshd@1-10.244.14.178:22-139.178.89.65:40098.service. Aug 13 04:16:29.728966 sshd[1414]: Accepted publickey for core from 139.178.89.65 port 40098 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:16:29.731353 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:16:29.740801 systemd-logind[1282]: New session 2 of user core. Aug 13 04:16:29.741721 systemd[1]: Started session-2.scope. Aug 13 04:16:30.115912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 04:16:30.116215 systemd[1]: Stopped kubelet.service. Aug 13 04:16:30.118770 systemd[1]: Starting kubelet.service... Aug 13 04:16:30.294284 systemd[1]: Started kubelet.service. Aug 13 04:16:30.394285 sshd[1414]: pam_unix(sshd:session): session closed for user core Aug 13 04:16:30.397798 systemd[1]: sshd@1-10.244.14.178:22-139.178.89.65:40098.service: Deactivated successfully. Aug 13 04:16:30.399275 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 04:16:30.399325 systemd-logind[1282]: Session 2 logged out. Waiting for processes to exit. Aug 13 04:16:30.401252 systemd-logind[1282]: Removed session 2. Aug 13 04:16:30.438230 kubelet[1427]: E0813 04:16:30.438166 1427 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 04:16:30.442086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 04:16:30.442368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 04:16:30.528738 systemd[1]: Started sshd@2-10.244.14.178:22-139.178.89.65:59402.service. Aug 13 04:16:31.419927 sshd[1436]: Accepted publickey for core from 139.178.89.65 port 59402 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:16:31.422653 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:16:31.433194 systemd-logind[1282]: New session 3 of user core. Aug 13 04:16:31.434179 systemd[1]: Started session-3.scope. Aug 13 04:16:32.033992 sshd[1436]: pam_unix(sshd:session): session closed for user core Aug 13 04:16:32.038521 systemd[1]: sshd@2-10.244.14.178:22-139.178.89.65:59402.service: Deactivated successfully. Aug 13 04:16:32.040586 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 04:16:32.041355 systemd-logind[1282]: Session 3 logged out. Waiting for processes to exit. Aug 13 04:16:32.043627 systemd-logind[1282]: Removed session 3. Aug 13 04:16:32.181417 systemd[1]: Started sshd@3-10.244.14.178:22-139.178.89.65:59412.service. Aug 13 04:16:33.081210 sshd[1443]: Accepted publickey for core from 139.178.89.65 port 59412 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:16:33.083246 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:16:33.089826 systemd-logind[1282]: New session 4 of user core. 
Aug 13 04:16:33.090572 systemd[1]: Started session-4.scope. Aug 13 04:16:33.706829 sshd[1443]: pam_unix(sshd:session): session closed for user core Aug 13 04:16:33.711062 systemd[1]: sshd@3-10.244.14.178:22-139.178.89.65:59412.service: Deactivated successfully. Aug 13 04:16:33.712110 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 04:16:33.713875 systemd-logind[1282]: Session 4 logged out. Waiting for processes to exit. Aug 13 04:16:33.715237 systemd-logind[1282]: Removed session 4. Aug 13 04:16:33.853450 systemd[1]: Started sshd@4-10.244.14.178:22-139.178.89.65:59424.service. Aug 13 04:16:34.751225 sshd[1450]: Accepted publickey for core from 139.178.89.65 port 59424 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:16:34.753235 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:16:34.760222 systemd-logind[1282]: New session 5 of user core. Aug 13 04:16:34.760978 systemd[1]: Started session-5.scope. Aug 13 04:16:35.242225 sudo[1454]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 04:16:35.242660 sudo[1454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 04:16:35.285264 systemd[1]: Starting docker.service... Aug 13 04:16:35.352550 env[1464]: time="2025-08-13T04:16:35.352408198Z" level=info msg="Starting up" Aug 13 04:16:35.355004 env[1464]: time="2025-08-13T04:16:35.354767150Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 04:16:35.355004 env[1464]: time="2025-08-13T04:16:35.354793489Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 04:16:35.355004 env[1464]: time="2025-08-13T04:16:35.354819754Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 04:16:35.355004 env[1464]: time="2025-08-13T04:16:35.354843301Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 04:16:35.359689 env[1464]: time="2025-08-13T04:16:35.359626647Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 04:16:35.359846 env[1464]: time="2025-08-13T04:16:35.359816382Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 04:16:35.359978 env[1464]: time="2025-08-13T04:16:35.359946520Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 04:16:35.360107 env[1464]: time="2025-08-13T04:16:35.360079106Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 04:16:35.368753 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport89405863-merged.mount: Deactivated successfully. Aug 13 04:16:35.415598 env[1464]: time="2025-08-13T04:16:35.415547722Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 04:16:35.415873 env[1464]: time="2025-08-13T04:16:35.415843785Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 04:16:35.416305 env[1464]: time="2025-08-13T04:16:35.416267386Z" level=info msg="Loading containers: start." Aug 13 04:16:35.583566 kernel: Initializing XFRM netlink socket Aug 13 04:16:35.641665 env[1464]: time="2025-08-13T04:16:35.641607978Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 13 04:16:35.748112 systemd-networkd[1069]: docker0: Link UP Aug 13 04:16:35.765864 env[1464]: time="2025-08-13T04:16:35.765809841Z" level=info msg="Loading containers: done." Aug 13 04:16:35.789076 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1642592675-merged.mount: Deactivated successfully. Aug 13 04:16:35.798177 env[1464]: time="2025-08-13T04:16:35.798073865Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 04:16:35.798518 env[1464]: time="2025-08-13T04:16:35.798488338Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 04:16:35.798714 env[1464]: time="2025-08-13T04:16:35.798666982Z" level=info msg="Daemon has completed initialization" Aug 13 04:16:35.816144 systemd[1]: Started docker.service. Aug 13 04:16:35.825964 env[1464]: time="2025-08-13T04:16:35.825882106Z" level=info msg="API listen on /run/docker.sock" Aug 13 04:16:36.649297 env[1293]: time="2025-08-13T04:16:36.649055243Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Aug 13 04:16:37.627736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921040249.mount: Deactivated successfully. Aug 13 04:16:40.053150 env[1293]: time="2025-08-13T04:16:40.052996442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:40.055840 env[1293]: time="2025-08-13T04:16:40.055805175Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:40.058698 env[1293]: time="2025-08-13T04:16:40.058663433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:40.061469 env[1293]: time="2025-08-13T04:16:40.061417752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:40.062875 env[1293]: time="2025-08-13T04:16:40.062812056Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Aug 13 04:16:40.065491 env[1293]: time="2025-08-13T04:16:40.065419989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Aug 13 04:16:40.553826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 04:16:40.554408 systemd[1]: Stopped kubelet.service. Aug 13 04:16:40.558557 systemd[1]: Starting kubelet.service... Aug 13 04:16:40.754045 systemd[1]: Started kubelet.service. 
Aug 13 04:16:40.836190 kubelet[1598]: E0813 04:16:40.835983 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 04:16:40.838651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 04:16:40.838963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 04:16:43.804674 env[1293]: time="2025-08-13T04:16:43.804531088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:43.808532 env[1293]: time="2025-08-13T04:16:43.808496299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:43.810846 env[1293]: time="2025-08-13T04:16:43.810792651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:43.813326 env[1293]: time="2025-08-13T04:16:43.813278943Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:43.815558 env[1293]: time="2025-08-13T04:16:43.814858627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Aug 13 04:16:43.816626 env[1293]: time="2025-08-13T04:16:43.816592262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Aug 13 04:16:46.360152 env[1293]: time="2025-08-13T04:16:46.359937822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:46.363439 env[1293]: time="2025-08-13T04:16:46.363399358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:46.365976 env[1293]: time="2025-08-13T04:16:46.365936936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:46.368275 env[1293]: time="2025-08-13T04:16:46.368224351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:46.369568 env[1293]: time="2025-08-13T04:16:46.369506229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Aug 13 04:16:46.371623 env[1293]: time="2025-08-13T04:16:46.371569944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Aug 13 
04:16:48.064346 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 04:16:48.652097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152457317.mount: Deactivated successfully. Aug 13 04:16:49.855285 env[1293]: time="2025-08-13T04:16:49.855149087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:49.857393 env[1293]: time="2025-08-13T04:16:49.857339495Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:49.860344 env[1293]: time="2025-08-13T04:16:49.860290454Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:49.862959 env[1293]: time="2025-08-13T04:16:49.862910531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:49.864676 env[1293]: time="2025-08-13T04:16:49.863830540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Aug 13 04:16:49.866711 env[1293]: time="2025-08-13T04:16:49.866673505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 04:16:50.862064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738767169.mount: Deactivated successfully. Aug 13 04:16:50.863639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 04:16:50.863894 systemd[1]: Stopped kubelet.service. Aug 13 04:16:50.867729 systemd[1]: Starting kubelet.service... Aug 13 04:16:51.066861 systemd[1]: Started kubelet.service. Aug 13 04:16:51.169022 kubelet[1615]: E0813 04:16:51.168367 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 04:16:51.172013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 04:16:51.172384 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
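The repeated kubelet.service start failures recorded above all trace back to the same condition reported in the log: /var/lib/kubelet/config.yaml does not exist yet (that file is typically written later by provisioning, for example by kubeadm during init/join). As a minimal, hypothetical sketch only (not part of Flatcar, systemd, or the kubelet itself), a small Go pre-flight check that reproduces the condition the unit keeps hitting could look like this:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet error messages in this log.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		// Mirrors the failure mode seen above: exit non-zero until
		// provisioning has written the config file.
		fmt.Fprintf(os.Stderr, "kubelet config missing: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present:", path)
}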
Aug 13 04:16:52.677955 env[1293]: time="2025-08-13T04:16:52.677650272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:52.681681 env[1293]: time="2025-08-13T04:16:52.681627047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:52.685341 env[1293]: time="2025-08-13T04:16:52.685287379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:52.688771 env[1293]: time="2025-08-13T04:16:52.688716735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:52.691318 env[1293]: time="2025-08-13T04:16:52.690064904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 04:16:52.692708 env[1293]: time="2025-08-13T04:16:52.692658086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 04:16:54.108154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048722665.mount: Deactivated successfully. Aug 13 04:16:54.114273 env[1293]: time="2025-08-13T04:16:54.114099729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:54.116735 env[1293]: time="2025-08-13T04:16:54.116668170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:54.117918 env[1293]: time="2025-08-13T04:16:54.117882761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:54.119625 env[1293]: time="2025-08-13T04:16:54.119588425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:16:54.120570 env[1293]: time="2025-08-13T04:16:54.120503117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 04:16:54.121687 env[1293]: time="2025-08-13T04:16:54.121649607Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 04:16:55.382598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175362203.mount: Deactivated successfully. 
Aug 13 04:17:00.646701 env[1293]: time="2025-08-13T04:17:00.646522894Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:00.650703 env[1293]: time="2025-08-13T04:17:00.650660784Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:00.653619 env[1293]: time="2025-08-13T04:17:00.653587065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:00.656623 env[1293]: time="2025-08-13T04:17:00.656580821Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:00.658044 env[1293]: time="2025-08-13T04:17:00.658007638Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 04:17:01.192003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 04:17:01.193607 systemd[1]: Stopped kubelet.service. Aug 13 04:17:01.197843 systemd[1]: Starting kubelet.service... Aug 13 04:17:01.716765 systemd[1]: Started kubelet.service. Aug 13 04:17:02.039798 kubelet[1648]: E0813 04:17:02.039503 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 04:17:02.044204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 04:17:02.044553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 04:17:02.540481 update_engine[1283]: I0813 04:17:02.540323 1283 update_attempter.cc:509] Updating boot flags... Aug 13 04:17:03.996405 systemd[1]: Stopped kubelet.service. Aug 13 04:17:04.001433 systemd[1]: Starting kubelet.service... Aug 13 04:17:04.046149 systemd[1]: Reloading. Aug 13 04:17:04.178237 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2025-08-13T04:17:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 04:17:04.178298 /usr/lib/systemd/system-generators/torcx-generator[1697]: time="2025-08-13T04:17:04Z" level=info msg="torcx already run" Aug 13 04:17:04.311110 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 04:17:04.311150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 04:17:04.339720 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 04:17:04.474847 systemd[1]: Started kubelet.service. Aug 13 04:17:04.478047 systemd[1]: Stopping kubelet.service... Aug 13 04:17:04.480083 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 04:17:04.480555 systemd[1]: Stopped kubelet.service. Aug 13 04:17:04.484117 systemd[1]: Starting kubelet.service... Aug 13 04:17:04.622665 systemd[1]: Started kubelet.service. Aug 13 04:17:04.773374 kubelet[1764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 04:17:04.774059 kubelet[1764]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 04:17:04.774193 kubelet[1764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 04:17:04.774523 kubelet[1764]: I0813 04:17:04.774432 1764 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 04:17:05.253799 kubelet[1764]: I0813 04:17:05.253640 1764 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 04:17:05.254251 kubelet[1764]: I0813 04:17:05.254226 1764 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 04:17:05.254906 kubelet[1764]: I0813 04:17:05.254880 1764 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 04:17:05.303624 kubelet[1764]: I0813 04:17:05.303574 1764 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 04:17:05.304347 kubelet[1764]: E0813 04:17:05.304289 1764 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.14.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:05.314882 kubelet[1764]: E0813 04:17:05.314821 1764 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 04:17:05.315082 kubelet[1764]: I0813 04:17:05.315056 1764 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 04:17:05.324372 kubelet[1764]: I0813 04:17:05.324322 1764 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 04:17:05.325736 kubelet[1764]: I0813 04:17:05.325702 1764 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 04:17:05.325979 kubelet[1764]: I0813 04:17:05.325921 1764 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 04:17:05.326252 kubelet[1764]: I0813 04:17:05.325978 1764 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-h1d3j.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 04:17:05.326533 kubelet[1764]: I0813 04:17:05.326281 1764 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 04:17:05.326533 kubelet[1764]: I0813 04:17:05.326299 1764 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 04:17:05.326533 kubelet[1764]: I0813 04:17:05.326513 1764 state_mem.go:36] "Initialized new in-memory state store" Aug 13 04:17:05.330091 kubelet[1764]: I0813 04:17:05.330050 1764 kubelet.go:408] "Attempting to sync node with API server" Aug 13 04:17:05.330091 kubelet[1764]: I0813 04:17:05.330088 1764 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 04:17:05.330261 kubelet[1764]: I0813 04:17:05.330153 1764 kubelet.go:314] "Adding apiserver pod source" Aug 13 04:17:05.330261 kubelet[1764]: I0813 04:17:05.330201 1764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 04:17:05.341409 kubelet[1764]: W0813 04:17:05.341331 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.14.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-h1d3j.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:05.341618 kubelet[1764]: E0813 04:17:05.341583 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.244.14.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-h1d3j.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:05.341863 kubelet[1764]: I0813 04:17:05.341834 1764 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 04:17:05.342632 kubelet[1764]: I0813 04:17:05.342606 1764 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 04:17:05.345746 kubelet[1764]: W0813 04:17:05.345717 1764 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 04:17:05.351073 kubelet[1764]: W0813 04:17:05.351011 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.14.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:05.351198 kubelet[1764]: E0813 04:17:05.351090 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.14.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:05.353072 kubelet[1764]: I0813 04:17:05.353040 1764 server.go:1274] "Started kubelet" Aug 13 04:17:05.366423 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Aug 13 04:17:05.368184 kubelet[1764]: E0813 04:17:05.366744 1764 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.14.178:6443/api/v1/namespaces/default/events\": dial tcp 10.244.14.178:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-h1d3j.gb1.brightbox.com.185b38856426a6d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-h1d3j.gb1.brightbox.com,UID:srv-h1d3j.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-h1d3j.gb1.brightbox.com,},FirstTimestamp:2025-08-13 04:17:05.353000656 +0000 UTC m=+0.718491292,LastTimestamp:2025-08-13 04:17:05.353000656 +0000 UTC m=+0.718491292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-h1d3j.gb1.brightbox.com,}" Aug 13 04:17:05.368590 kubelet[1764]: I0813 04:17:05.368559 1764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 04:17:05.369664 kubelet[1764]: I0813 04:17:05.369561 1764 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 04:17:05.371701 kubelet[1764]: I0813 04:17:05.371673 1764 server.go:449] "Adding debug handlers to kubelet server" Aug 13 04:17:05.373583 kubelet[1764]: I0813 04:17:05.373541 1764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 04:17:05.374083 kubelet[1764]: I0813 04:17:05.374058 1764 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 04:17:05.374565 kubelet[1764]: I0813 04:17:05.374538 1764 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 04:17:05.377159 kubelet[1764]: I0813 04:17:05.374920 1764 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 04:17:05.377714 kubelet[1764]: I0813 04:17:05.374977 1764 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 04:17:05.377854 kubelet[1764]: I0813 04:17:05.377829 1764 reconciler.go:26] "Reconciler: start to sync state" Aug 13 04:17:05.377950 kubelet[1764]: W0813 04:17:05.377665 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.14.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:05.377950 kubelet[1764]: E0813 04:17:05.377915 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.14.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:05.377950 kubelet[1764]: E0813 04:17:05.375194 1764 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-h1d3j.gb1.brightbox.com\" not found" Aug 13 04:17:05.378826 kubelet[1764]: I0813 04:17:05.378799 1764 factory.go:221] Registration of the systemd container factory successfully Aug 13 04:17:05.379095 kubelet[1764]: I0813 04:17:05.379065 1764 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 04:17:05.380851 kubelet[1764]: E0813 04:17:05.380816 1764 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 04:17:05.381038 kubelet[1764]: E0813 04:17:05.380999 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-h1d3j.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.178:6443: connect: connection refused" interval="200ms" Aug 13 04:17:05.381626 kubelet[1764]: I0813 04:17:05.381602 1764 factory.go:221] Registration of the containerd container factory successfully Aug 13 04:17:05.425640 kubelet[1764]: I0813 04:17:05.425578 1764 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 04:17:05.425640 kubelet[1764]: I0813 04:17:05.425634 1764 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 04:17:05.425910 kubelet[1764]: I0813 04:17:05.425679 1764 state_mem.go:36] "Initialized new in-memory state store" Aug 13 04:17:05.428125 kubelet[1764]: I0813 04:17:05.428089 1764 policy_none.go:49] "None policy: Start" Aug 13 04:17:05.429027 kubelet[1764]: I0813 04:17:05.428991 1764 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 04:17:05.429124 kubelet[1764]: I0813 04:17:05.429040 1764 state_mem.go:35] "Initializing new in-memory state store" Aug 13 04:17:05.444506 kubelet[1764]: I0813 04:17:05.444442 1764 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 04:17:05.444750 kubelet[1764]: I0813 04:17:05.444725 1764 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 04:17:05.446706 kubelet[1764]: I0813 04:17:05.444842 1764 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 04:17:05.446813 kubelet[1764]: E0813 04:17:05.446503 1764 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-h1d3j.gb1.brightbox.com\" not found" Aug 13 04:17:05.447162 kubelet[1764]: I0813 04:17:05.447138 1764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 04:17:05.449226 kubelet[1764]: I0813 04:17:05.449194 1764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 04:17:05.451836 kubelet[1764]: I0813 04:17:05.451796 1764 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 04:17:05.452028 kubelet[1764]: I0813 04:17:05.452003 1764 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 04:17:05.452172 kubelet[1764]: I0813 04:17:05.452148 1764 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 04:17:05.452357 kubelet[1764]: E0813 04:17:05.452332 1764 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 04:17:05.453562 kubelet[1764]: W0813 04:17:05.453501 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.14.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:05.453653 kubelet[1764]: E0813 04:17:05.453572 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.14.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:05.551528 kubelet[1764]: I0813 04:17:05.550406 1764 kubelet_node_status.go:72] "Attempting to register node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.551528 kubelet[1764]: E0813 04:17:05.550886 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.14.178:6443/api/v1/nodes\": dial tcp 10.244.14.178:6443: connect: connection refused" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.582226 kubelet[1764]: E0813 04:17:05.582164 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-h1d3j.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.178:6443: connect: connection refused" interval="400ms" Aug 13 04:17:05.678840 kubelet[1764]: I0813 04:17:05.678764 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9f0aee3867020c6905b297ef215fdf-usr-share-ca-certificates\") pod \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" (UID: \"6d9f0aee3867020c6905b297ef215fdf\") " pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.679218 kubelet[1764]: I0813 04:17:05.679182 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-ca-certs\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.679419 kubelet[1764]: I0813 04:17:05.679377 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-kubeconfig\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.679620 kubelet[1764]: I0813 04:17:05.679587 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-usr-share-ca-certificates\") pod 
\"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.679770 kubelet[1764]: I0813 04:17:05.679740 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9f0aee3867020c6905b297ef215fdf-ca-certs\") pod \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" (UID: \"6d9f0aee3867020c6905b297ef215fdf\") " pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.679945 kubelet[1764]: I0813 04:17:05.679917 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-flexvolume-dir\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.680101 kubelet[1764]: I0813 04:17:05.680073 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-k8s-certs\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.680261 kubelet[1764]: I0813 04:17:05.680233 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/617e8626f4fcac94e2b1b527095ba603-kubeconfig\") pod \"kube-scheduler-srv-h1d3j.gb1.brightbox.com\" (UID: \"617e8626f4fcac94e2b1b527095ba603\") " pod="kube-system/kube-scheduler-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.680478 kubelet[1764]: I0813 04:17:05.680436 1764 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9f0aee3867020c6905b297ef215fdf-k8s-certs\") pod \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" (UID: \"6d9f0aee3867020c6905b297ef215fdf\") " pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.755559 kubelet[1764]: I0813 04:17:05.755513 1764 kubelet_node_status.go:72] "Attempting to register node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.755987 kubelet[1764]: E0813 04:17:05.755939 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.14.178:6443/api/v1/nodes\": dial tcp 10.244.14.178:6443: connect: connection refused" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:05.869989 env[1293]: time="2025-08-13T04:17:05.869866024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-h1d3j.gb1.brightbox.com,Uid:109402b17e583002e06ab0bac90007fe,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:05.871768 env[1293]: time="2025-08-13T04:17:05.871497119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-h1d3j.gb1.brightbox.com,Uid:6d9f0aee3867020c6905b297ef215fdf,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:05.875720 env[1293]: time="2025-08-13T04:17:05.875643974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-h1d3j.gb1.brightbox.com,Uid:617e8626f4fcac94e2b1b527095ba603,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:05.983090 kubelet[1764]: E0813 04:17:05.983024 1764 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-h1d3j.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.178:6443: connect: connection refused" interval="800ms" Aug 13 04:17:06.160109 kubelet[1764]: I0813 04:17:06.159775 1764 kubelet_node_status.go:72] "Attempting to register node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:06.160764 kubelet[1764]: E0813 04:17:06.160729 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.14.178:6443/api/v1/nodes\": dial tcp 10.244.14.178:6443: connect: connection refused" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:06.219549 kubelet[1764]: W0813 04:17:06.219425 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.14.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:06.219971 kubelet[1764]: E0813 04:17:06.219935 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.14.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:06.340732 kubelet[1764]: W0813 04:17:06.340617 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.14.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:06.340969 kubelet[1764]: E0813 04:17:06.340740 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.14.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:06.784292 kubelet[1764]: E0813 04:17:06.784163 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-h1d3j.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.178:6443: connect: connection refused" interval="1.6s" Aug 13 04:17:06.872927 kubelet[1764]: W0813 04:17:06.872765 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.14.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-h1d3j.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: connection refused Aug 13 04:17:06.872927 kubelet[1764]: E0813 04:17:06.872865 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.14.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-h1d3j.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:06.905492 kubelet[1764]: W0813 04:17:06.905375 1764 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.14.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.14.178:6443: connect: 
connection refused Aug 13 04:17:06.905813 kubelet[1764]: E0813 04:17:06.905752 1764 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.14.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:06.964032 kubelet[1764]: I0813 04:17:06.963589 1764 kubelet_node_status.go:72] "Attempting to register node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:06.964032 kubelet[1764]: E0813 04:17:06.963983 1764 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.14.178:6443/api/v1/nodes\": dial tcp 10.244.14.178:6443: connect: connection refused" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:07.114139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391396858.mount: Deactivated successfully. Aug 13 04:17:07.121566 env[1293]: time="2025-08-13T04:17:07.121508484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.124291 env[1293]: time="2025-08-13T04:17:07.124252539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.125255 env[1293]: time="2025-08-13T04:17:07.125217663Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.127332 env[1293]: time="2025-08-13T04:17:07.127296874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.128434 env[1293]: time="2025-08-13T04:17:07.128399343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.129517 env[1293]: time="2025-08-13T04:17:07.129484166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.130602 env[1293]: time="2025-08-13T04:17:07.130562078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.134371 env[1293]: time="2025-08-13T04:17:07.134336791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.138796 env[1293]: time="2025-08-13T04:17:07.138760019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.141663 env[1293]: time="2025-08-13T04:17:07.141628439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.142753 env[1293]: 
time="2025-08-13T04:17:07.142718747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.158950 env[1293]: time="2025-08-13T04:17:07.158904362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:07.180992 env[1293]: time="2025-08-13T04:17:07.180813264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:07.181180 env[1293]: time="2025-08-13T04:17:07.181008343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:07.181180 env[1293]: time="2025-08-13T04:17:07.181077172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:07.181551 env[1293]: time="2025-08-13T04:17:07.181492816Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c06f5be68561f125a772157d26ae3cc6161ba57b8dd38418bbeeb6c29e4a0ed pid=1810 runtime=io.containerd.runc.v2 Aug 13 04:17:07.194905 env[1293]: time="2025-08-13T04:17:07.194823967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:07.195142 env[1293]: time="2025-08-13T04:17:07.195097674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:07.195297 env[1293]: time="2025-08-13T04:17:07.195253742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:07.195672 env[1293]: time="2025-08-13T04:17:07.195625201Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16bc461c8535527436d660d1fbe92a6c48831bf783e5d9192851c43357524909 pid=1819 runtime=io.containerd.runc.v2 Aug 13 04:17:07.239596 env[1293]: time="2025-08-13T04:17:07.239442370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:07.239927 env[1293]: time="2025-08-13T04:17:07.239871907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:07.240111 env[1293]: time="2025-08-13T04:17:07.240068347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:07.240578 env[1293]: time="2025-08-13T04:17:07.240528885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a28316ca8f876f757745d0cef68b3407e0aff5be0b67547f39aa42ac8d02b4 pid=1851 runtime=io.containerd.runc.v2 Aug 13 04:17:07.346871 env[1293]: time="2025-08-13T04:17:07.346805739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-h1d3j.gb1.brightbox.com,Uid:109402b17e583002e06ab0bac90007fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"16bc461c8535527436d660d1fbe92a6c48831bf783e5d9192851c43357524909\"" Aug 13 04:17:07.361195 env[1293]: time="2025-08-13T04:17:07.361136511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-h1d3j.gb1.brightbox.com,Uid:6d9f0aee3867020c6905b297ef215fdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c06f5be68561f125a772157d26ae3cc6161ba57b8dd38418bbeeb6c29e4a0ed\"" Aug 13 04:17:07.362863 env[1293]: time="2025-08-13T04:17:07.362761604Z" level=info msg="CreateContainer within sandbox \"16bc461c8535527436d660d1fbe92a6c48831bf783e5d9192851c43357524909\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 04:17:07.367655 env[1293]: time="2025-08-13T04:17:07.366775076Z" level=info msg="CreateContainer within sandbox \"4c06f5be68561f125a772157d26ae3cc6161ba57b8dd38418bbeeb6c29e4a0ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 04:17:07.390973 env[1293]: time="2025-08-13T04:17:07.390893499Z" level=info msg="CreateContainer within sandbox \"16bc461c8535527436d660d1fbe92a6c48831bf783e5d9192851c43357524909\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3241020377e64834290749b4975cc8b704ab64b25f8614501b3e5e81238c6aa\"" Aug 13 04:17:07.392519 env[1293]: time="2025-08-13T04:17:07.392471334Z" level=info msg="StartContainer for \"d3241020377e64834290749b4975cc8b704ab64b25f8614501b3e5e81238c6aa\"" Aug 13 04:17:07.393048 env[1293]: time="2025-08-13T04:17:07.393007664Z" level=info msg="CreateContainer within sandbox \"4c06f5be68561f125a772157d26ae3cc6161ba57b8dd38418bbeeb6c29e4a0ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"49a0b018315422290a708d88bd085cc56ca8b06122ff140a7ade748bf23f4bdc\"" Aug 13 04:17:07.393695 env[1293]: time="2025-08-13T04:17:07.393635587Z" level=info msg="StartContainer for \"49a0b018315422290a708d88bd085cc56ca8b06122ff140a7ade748bf23f4bdc\"" Aug 13 04:17:07.402493 kubelet[1764]: E0813 04:17:07.402373 1764 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.14.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.14.178:6443: connect: connection refused" logger="UnhandledError" Aug 13 04:17:07.403404 env[1293]: time="2025-08-13T04:17:07.403344788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-h1d3j.gb1.brightbox.com,Uid:617e8626f4fcac94e2b1b527095ba603,Namespace:kube-system,Attempt:0,} returns sandbox id \"89a28316ca8f876f757745d0cef68b3407e0aff5be0b67547f39aa42ac8d02b4\"" Aug 13 04:17:07.406941 env[1293]: time="2025-08-13T04:17:07.406901228Z" level=info msg="CreateContainer within sandbox \"89a28316ca8f876f757745d0cef68b3407e0aff5be0b67547f39aa42ac8d02b4\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 04:17:07.420829 env[1293]: time="2025-08-13T04:17:07.420772264Z" level=info msg="CreateContainer within sandbox \"89a28316ca8f876f757745d0cef68b3407e0aff5be0b67547f39aa42ac8d02b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2af4950edaec8c09e5df4c1a5f9d5506b8f8e4e70ae3dcc1423185d22e20d02c\"" Aug 13 04:17:07.421471 env[1293]: time="2025-08-13T04:17:07.421419649Z" level=info msg="StartContainer for \"2af4950edaec8c09e5df4c1a5f9d5506b8f8e4e70ae3dcc1423185d22e20d02c\"" Aug 13 04:17:07.581694 env[1293]: time="2025-08-13T04:17:07.581624715Z" level=info msg="StartContainer for \"d3241020377e64834290749b4975cc8b704ab64b25f8614501b3e5e81238c6aa\" returns successfully" Aug 13 04:17:07.613213 env[1293]: time="2025-08-13T04:17:07.613129102Z" level=info msg="StartContainer for \"49a0b018315422290a708d88bd085cc56ca8b06122ff140a7ade748bf23f4bdc\" returns successfully" Aug 13 04:17:07.662951 env[1293]: time="2025-08-13T04:17:07.662119019Z" level=info msg="StartContainer for \"2af4950edaec8c09e5df4c1a5f9d5506b8f8e4e70ae3dcc1423185d22e20d02c\" returns successfully" Aug 13 04:17:08.385607 kubelet[1764]: E0813 04:17:08.385524 1764 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.14.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-h1d3j.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.14.178:6443: connect: connection refused" interval="3.2s" Aug 13 04:17:08.568946 kubelet[1764]: I0813 04:17:08.568877 1764 kubelet_node_status.go:72] "Attempting to register node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:11.018315 kubelet[1764]: I0813 04:17:11.018234 1764 kubelet_node_status.go:75] "Successfully registered node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:11.351975 kubelet[1764]: I0813 04:17:11.351914 1764 apiserver.go:52] "Watching apiserver" Aug 13 04:17:11.378780 kubelet[1764]: I0813 04:17:11.378705 1764 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 04:17:11.500912 kubelet[1764]: E0813 04:17:11.500854 1764 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:13.197913 systemd[1]: Reloading. Aug 13 04:17:13.421079 /usr/lib/systemd/system-generators/torcx-generator[2056]: time="2025-08-13T04:17:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 04:17:13.421136 /usr/lib/systemd/system-generators/torcx-generator[2056]: time="2025-08-13T04:17:13Z" level=info msg="torcx already run" Aug 13 04:17:13.566384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 04:17:13.566926 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 04:17:13.596552 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
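The "Failed to ensure lease exists, will retry" entries above double their reported interval on every failure (200ms, 400ms, 800ms, 1.6s, 3.2s) while the API server at 10.244.14.178:6443 is still refusing connections; once the static control-plane containers start, the node registers at 04:17:11. A small sketch of that capped exponential backoff; the base and factor mirror the intervals in the log, while the 7s ceiling is an illustrative assumption, not a value taken from this log.

import itertools

def lease_backoff(base=0.2, factor=2.0, cap=7.0):
    """Yield retry intervals in seconds: 0.2, 0.4, 0.8, 1.6, 3.2, ... capped at `cap`.

    base/factor match the intervals reported above; the cap is assumed for illustration.
    """
    interval = base
    while True:
        yield min(interval, cap)
        interval *= factor

if __name__ == "__main__":
    for delay in itertools.islice(lease_backoff(), 6):
        print(f"retrying lease creation in {delay:g}s")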
Aug 13 04:17:13.746607 systemd[1]: Stopping kubelet.service... Aug 13 04:17:13.771665 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 04:17:13.772648 systemd[1]: Stopped kubelet.service. Aug 13 04:17:13.778312 systemd[1]: Starting kubelet.service... Aug 13 04:17:15.204254 systemd[1]: Started kubelet.service. Aug 13 04:17:15.362314 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 04:17:15.363232 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 04:17:15.363404 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 04:17:15.363706 kubelet[2119]: I0813 04:17:15.363631 2119 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 04:17:15.363805 sudo[2130]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 04:17:15.364285 sudo[2130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 04:17:15.385555 kubelet[2119]: I0813 04:17:15.385510 2119 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 04:17:15.385790 kubelet[2119]: I0813 04:17:15.385766 2119 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 04:17:15.386376 kubelet[2119]: I0813 04:17:15.386349 2119 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 04:17:15.392419 kubelet[2119]: I0813 04:17:15.392382 2119 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 04:17:15.401579 kubelet[2119]: I0813 04:17:15.401543 2119 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 04:17:15.409467 kubelet[2119]: E0813 04:17:15.409418 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 04:17:15.409564 kubelet[2119]: I0813 04:17:15.409475 2119 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 04:17:15.422722 kubelet[2119]: I0813 04:17:15.422689 2119 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 04:17:15.423627 kubelet[2119]: I0813 04:17:15.423600 2119 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 04:17:15.423986 kubelet[2119]: I0813 04:17:15.423928 2119 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 04:17:15.424278 kubelet[2119]: I0813 04:17:15.423983 2119 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-h1d3j.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 04:17:15.424573 kubelet[2119]: I0813 04:17:15.424297 2119 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 04:17:15.424573 kubelet[2119]: I0813 04:17:15.424316 2119 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 04:17:15.424732 kubelet[2119]: I0813 04:17:15.424579 2119 state_mem.go:36] "Initialized new in-memory state store" Aug 13 04:17:15.424831 kubelet[2119]: I0813 04:17:15.424800 2119 kubelet.go:408] "Attempting to sync node with API server" Aug 13 04:17:15.424971 kubelet[2119]: I0813 04:17:15.424944 2119 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 04:17:15.425536 kubelet[2119]: I0813 04:17:15.425511 2119 kubelet.go:314] "Adding apiserver pod source" Aug 13 04:17:15.425638 kubelet[2119]: I0813 04:17:15.425559 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 04:17:15.432830 kubelet[2119]: I0813 04:17:15.432800 2119 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 04:17:15.434809 kubelet[2119]: I0813 04:17:15.434774 2119 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 04:17:15.436291 kubelet[2119]: I0813 04:17:15.436268 2119 server.go:1274] "Started kubelet" Aug 13 04:17:15.449790 kubelet[2119]: I0813 04:17:15.449733 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 04:17:15.453586 
kubelet[2119]: I0813 04:17:15.453540 2119 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 04:17:15.463559 kubelet[2119]: I0813 04:17:15.460586 2119 server.go:449] "Adding debug handlers to kubelet server" Aug 13 04:17:15.463720 kubelet[2119]: I0813 04:17:15.463674 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 04:17:15.464015 kubelet[2119]: I0813 04:17:15.463986 2119 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 04:17:15.469040 kubelet[2119]: I0813 04:17:15.469005 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 04:17:15.483426 kubelet[2119]: I0813 04:17:15.483382 2119 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 04:17:15.489213 kubelet[2119]: I0813 04:17:15.489176 2119 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 04:17:15.490961 kubelet[2119]: I0813 04:17:15.490931 2119 reconciler.go:26] "Reconciler: start to sync state" Aug 13 04:17:15.498144 kubelet[2119]: I0813 04:17:15.494333 2119 factory.go:221] Registration of the systemd container factory successfully Aug 13 04:17:15.498144 kubelet[2119]: I0813 04:17:15.494499 2119 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 04:17:15.498144 kubelet[2119]: E0813 04:17:15.497092 2119 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 04:17:15.500178 kubelet[2119]: I0813 04:17:15.500149 2119 factory.go:221] Registration of the containerd container factory successfully Aug 13 04:17:15.549154 kubelet[2119]: I0813 04:17:15.549044 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 04:17:15.561547 kubelet[2119]: I0813 04:17:15.561218 2119 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 04:17:15.561547 kubelet[2119]: I0813 04:17:15.561279 2119 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 04:17:15.561547 kubelet[2119]: I0813 04:17:15.561334 2119 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 04:17:15.561547 kubelet[2119]: E0813 04:17:15.561445 2119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 04:17:15.664882 kubelet[2119]: E0813 04:17:15.664742 2119 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 04:17:15.677381 kubelet[2119]: I0813 04:17:15.677327 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 04:17:15.677381 kubelet[2119]: I0813 04:17:15.677370 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 04:17:15.677648 kubelet[2119]: I0813 04:17:15.677411 2119 state_mem.go:36] "Initialized new in-memory state store" Aug 13 04:17:15.677725 kubelet[2119]: I0813 04:17:15.677668 2119 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 04:17:15.677725 kubelet[2119]: I0813 04:17:15.677689 2119 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 04:17:15.677858 kubelet[2119]: I0813 04:17:15.677729 2119 policy_none.go:49] "None policy: Start" Aug 13 04:17:15.678667 kubelet[2119]: I0813 04:17:15.678639 2119 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 04:17:15.678776 kubelet[2119]: I0813 04:17:15.678686 2119 state_mem.go:35] "Initializing new in-memory state store" Aug 13 04:17:15.678912 kubelet[2119]: I0813 04:17:15.678888 2119 state_mem.go:75] "Updated machine memory state" Aug 13 04:17:15.681046 kubelet[2119]: I0813 04:17:15.681019 2119 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 04:17:15.681304 kubelet[2119]: I0813 04:17:15.681279 2119 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 04:17:15.681401 kubelet[2119]: I0813 04:17:15.681315 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 04:17:15.683670 kubelet[2119]: I0813 04:17:15.683645 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 04:17:15.809564 kubelet[2119]: I0813 04:17:15.809496 2119 kubelet_node_status.go:72] "Attempting to register node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.831763 kubelet[2119]: I0813 04:17:15.831694 2119 kubelet_node_status.go:111] "Node was previously registered" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.831951 kubelet[2119]: I0813 04:17:15.831868 2119 kubelet_node_status.go:75] "Successfully registered node" node="srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.877587 kubelet[2119]: W0813 04:17:15.877115 2119 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 04:17:15.878466 kubelet[2119]: W0813 04:17:15.878415 2119 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 04:17:15.878574 kubelet[2119]: W0813 04:17:15.878483 2119 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 04:17:15.893318 kubelet[2119]: 
I0813 04:17:15.893257 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9f0aee3867020c6905b297ef215fdf-usr-share-ca-certificates\") pod \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" (UID: \"6d9f0aee3867020c6905b297ef215fdf\") " pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893532 kubelet[2119]: I0813 04:17:15.893346 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-flexvolume-dir\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893532 kubelet[2119]: I0813 04:17:15.893383 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-kubeconfig\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893532 kubelet[2119]: I0813 04:17:15.893432 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893532 kubelet[2119]: I0813 04:17:15.893492 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/617e8626f4fcac94e2b1b527095ba603-kubeconfig\") pod \"kube-scheduler-srv-h1d3j.gb1.brightbox.com\" (UID: \"617e8626f4fcac94e2b1b527095ba603\") " pod="kube-system/kube-scheduler-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893532 kubelet[2119]: I0813 04:17:15.893521 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9f0aee3867020c6905b297ef215fdf-ca-certs\") pod \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" (UID: \"6d9f0aee3867020c6905b297ef215fdf\") " pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893841 kubelet[2119]: I0813 04:17:15.893570 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9f0aee3867020c6905b297ef215fdf-k8s-certs\") pod \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" (UID: \"6d9f0aee3867020c6905b297ef215fdf\") " pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893841 kubelet[2119]: I0813 04:17:15.893599 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-ca-certs\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:15.893841 kubelet[2119]: I0813 04:17:15.893733 2119 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/109402b17e583002e06ab0bac90007fe-k8s-certs\") pod \"kube-controller-manager-srv-h1d3j.gb1.brightbox.com\" (UID: \"109402b17e583002e06ab0bac90007fe\") " pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:16.310685 sudo[2130]: pam_unix(sudo:session): session closed for user root Aug 13 04:17:16.429580 kubelet[2119]: I0813 04:17:16.427553 2119 apiserver.go:52] "Watching apiserver" Aug 13 04:17:16.490257 kubelet[2119]: I0813 04:17:16.490208 2119 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 04:17:16.665008 kubelet[2119]: W0813 04:17:16.664783 2119 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 04:17:16.665008 kubelet[2119]: E0813 04:17:16.664959 2119 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-h1d3j.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" Aug 13 04:17:16.707082 kubelet[2119]: I0813 04:17:16.706975 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-h1d3j.gb1.brightbox.com" podStartSLOduration=1.706937258 podStartE2EDuration="1.706937258s" podCreationTimestamp="2025-08-13 04:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:17:16.704841368 +0000 UTC m=+1.469138464" watchObservedRunningTime="2025-08-13 04:17:16.706937258 +0000 UTC m=+1.471234340" Aug 13 04:17:16.707378 kubelet[2119]: I0813 04:17:16.707186 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-h1d3j.gb1.brightbox.com" podStartSLOduration=1.707178547 podStartE2EDuration="1.707178547s" podCreationTimestamp="2025-08-13 04:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:17:16.689673846 +0000 UTC m=+1.453970922" watchObservedRunningTime="2025-08-13 04:17:16.707178547 +0000 UTC m=+1.471475633" Aug 13 04:17:16.723798 kubelet[2119]: I0813 04:17:16.723703 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-h1d3j.gb1.brightbox.com" podStartSLOduration=1.723680757 podStartE2EDuration="1.723680757s" podCreationTimestamp="2025-08-13 04:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:17:16.721417279 +0000 UTC m=+1.485714370" watchObservedRunningTime="2025-08-13 04:17:16.723680757 +0000 UTC m=+1.487977846" Aug 13 04:17:17.940933 kubelet[2119]: I0813 04:17:17.940762 2119 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 04:17:17.941800 env[1293]: time="2025-08-13T04:17:17.941438575Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 04:17:17.942623 kubelet[2119]: I0813 04:17:17.942577 2119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 04:17:18.713848 kubelet[2119]: I0813 04:17:18.713764 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-cgroup\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.714528 kubelet[2119]: I0813 04:17:18.714494 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/593bb025-c651-493c-be38-e6c69bb19883-kube-proxy\") pod \"kube-proxy-s4z57\" (UID: \"593bb025-c651-493c-be38-e6c69bb19883\") " pod="kube-system/kube-proxy-s4z57" Aug 13 04:17:18.714724 kubelet[2119]: I0813 04:17:18.714696 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/593bb025-c651-493c-be38-e6c69bb19883-lib-modules\") pod \"kube-proxy-s4z57\" (UID: \"593bb025-c651-493c-be38-e6c69bb19883\") " pod="kube-system/kube-proxy-s4z57" Aug 13 04:17:18.715006 kubelet[2119]: I0813 04:17:18.714965 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hubble-tls\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.715227 kubelet[2119]: I0813 04:17:18.715179 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cni-path\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.715489 kubelet[2119]: I0813 04:17:18.715430 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-etc-cni-netd\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.715720 kubelet[2119]: I0813 04:17:18.715661 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-lib-modules\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.715945 kubelet[2119]: I0813 04:17:18.715895 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-config-path\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.716171 kubelet[2119]: I0813 04:17:18.716111 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-kernel\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.716390 
kubelet[2119]: I0813 04:17:18.716329 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hostproc\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.716625 kubelet[2119]: I0813 04:17:18.716586 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-clustermesh-secrets\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.716848 kubelet[2119]: I0813 04:17:18.716815 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-net\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.717072 kubelet[2119]: I0813 04:17:18.717012 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8d7m\" (UniqueName: \"kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-kube-api-access-m8d7m\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.717264 kubelet[2119]: I0813 04:17:18.717228 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/593bb025-c651-493c-be38-e6c69bb19883-xtables-lock\") pod \"kube-proxy-s4z57\" (UID: \"593bb025-c651-493c-be38-e6c69bb19883\") " pod="kube-system/kube-proxy-s4z57" Aug 13 04:17:18.717515 kubelet[2119]: I0813 04:17:18.717488 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-run\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.717751 kubelet[2119]: I0813 04:17:18.717702 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-bpf-maps\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.717983 kubelet[2119]: I0813 04:17:18.717926 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zls\" (UniqueName: \"kubernetes.io/projected/593bb025-c651-493c-be38-e6c69bb19883-kube-api-access-r6zls\") pod \"kube-proxy-s4z57\" (UID: \"593bb025-c651-493c-be38-e6c69bb19883\") " pod="kube-system/kube-proxy-s4z57" Aug 13 04:17:18.718160 kubelet[2119]: I0813 04:17:18.718131 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-xtables-lock\") pod \"cilium-nqq2f\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " pod="kube-system/cilium-nqq2f" Aug 13 04:17:18.822374 kubelet[2119]: I0813 04:17:18.822292 2119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 04:17:18.883524 kubelet[2119]: E0813 04:17:18.883477 2119 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 04:17:18.883839 kubelet[2119]: E0813 04:17:18.883799 2119 projected.go:194] Error preparing data for projected volume kube-api-access-r6zls for pod kube-system/kube-proxy-s4z57: configmap "kube-root-ca.crt" not found Aug 13 04:17:18.884106 kubelet[2119]: E0813 04:17:18.884071 2119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/593bb025-c651-493c-be38-e6c69bb19883-kube-api-access-r6zls podName:593bb025-c651-493c-be38-e6c69bb19883 nodeName:}" failed. No retries permitted until 2025-08-13 04:17:19.384035056 +0000 UTC m=+4.148332138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r6zls" (UniqueName: "kubernetes.io/projected/593bb025-c651-493c-be38-e6c69bb19883-kube-api-access-r6zls") pod "kube-proxy-s4z57" (UID: "593bb025-c651-493c-be38-e6c69bb19883") : configmap "kube-root-ca.crt" not found Aug 13 04:17:18.884429 kubelet[2119]: E0813 04:17:18.884197 2119 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 04:17:18.884639 kubelet[2119]: E0813 04:17:18.884616 2119 projected.go:194] Error preparing data for projected volume kube-api-access-m8d7m for pod kube-system/cilium-nqq2f: configmap "kube-root-ca.crt" not found Aug 13 04:17:18.884856 kubelet[2119]: E0813 04:17:18.884796 2119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-kube-api-access-m8d7m podName:4d87b257-e40c-4f3b-b504-215e6f0ec0fb nodeName:}" failed. No retries permitted until 2025-08-13 04:17:19.38478046 +0000 UTC m=+4.149077545 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m8d7m" (UniqueName: "kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-kube-api-access-m8d7m") pod "cilium-nqq2f" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb") : configmap "kube-root-ca.crt" not found Aug 13 04:17:18.929262 kubelet[2119]: E0813 04:17:18.929204 2119 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-m8d7m], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-nqq2f" podUID="4d87b257-e40c-4f3b-b504-215e6f0ec0fb" Aug 13 04:17:19.123666 kubelet[2119]: I0813 04:17:19.121953 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f50e1e23-86e6-4d1a-bf02-638fb42dab18-cilium-config-path\") pod \"cilium-operator-5d85765b45-lt8qn\" (UID: \"f50e1e23-86e6-4d1a-bf02-638fb42dab18\") " pod="kube-system/cilium-operator-5d85765b45-lt8qn" Aug 13 04:17:19.124512 kubelet[2119]: I0813 04:17:19.124485 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrwj\" (UniqueName: \"kubernetes.io/projected/f50e1e23-86e6-4d1a-bf02-638fb42dab18-kube-api-access-nlrwj\") pod \"cilium-operator-5d85765b45-lt8qn\" (UID: \"f50e1e23-86e6-4d1a-bf02-638fb42dab18\") " pod="kube-system/cilium-operator-5d85765b45-lt8qn" Aug 13 04:17:19.329404 env[1293]: time="2025-08-13T04:17:19.329311915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lt8qn,Uid:f50e1e23-86e6-4d1a-bf02-638fb42dab18,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:19.365925 env[1293]: time="2025-08-13T04:17:19.365765073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:19.366180 env[1293]: time="2025-08-13T04:17:19.365961228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:19.366180 env[1293]: time="2025-08-13T04:17:19.366039996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:19.367899 env[1293]: time="2025-08-13T04:17:19.366977950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364 pid=2186 runtime=io.containerd.runc.v2 Aug 13 04:17:19.475812 env[1293]: time="2025-08-13T04:17:19.475358213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lt8qn,Uid:f50e1e23-86e6-4d1a-bf02-638fb42dab18,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364\"" Aug 13 04:17:19.481913 env[1293]: time="2025-08-13T04:17:19.481866093Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 04:17:19.579283 env[1293]: time="2025-08-13T04:17:19.579206615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4z57,Uid:593bb025-c651-493c-be38-e6c69bb19883,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:19.599405 env[1293]: time="2025-08-13T04:17:19.599310004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:19.599405 env[1293]: time="2025-08-13T04:17:19.599369231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:19.599780 env[1293]: time="2025-08-13T04:17:19.599728043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:19.600736 env[1293]: time="2025-08-13T04:17:19.600678688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3a9f4b91d43cd0cc7682c3350fa7fbc924f29a5f253c87909005ff7b28fc71d pid=2229 runtime=io.containerd.runc.v2 Aug 13 04:17:19.681107 env[1293]: time="2025-08-13T04:17:19.681051252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4z57,Uid:593bb025-c651-493c-be38-e6c69bb19883,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a9f4b91d43cd0cc7682c3350fa7fbc924f29a5f253c87909005ff7b28fc71d\"" Aug 13 04:17:19.686834 env[1293]: time="2025-08-13T04:17:19.686792114Z" level=info msg="CreateContainer within sandbox \"b3a9f4b91d43cd0cc7682c3350fa7fbc924f29a5f253c87909005ff7b28fc71d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 04:17:19.709669 env[1293]: time="2025-08-13T04:17:19.709606546Z" level=info msg="CreateContainer within sandbox \"b3a9f4b91d43cd0cc7682c3350fa7fbc924f29a5f253c87909005ff7b28fc71d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c91ccaf56cb16955172001a320dfe29a2e9120f343d14e27a64b696fbb3f4e5\"" Aug 13 04:17:19.712257 env[1293]: time="2025-08-13T04:17:19.710924512Z" level=info msg="StartContainer for \"0c91ccaf56cb16955172001a320dfe29a2e9120f343d14e27a64b696fbb3f4e5\"" Aug 13 04:17:19.732914 kubelet[2119]: I0813 04:17:19.732377 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-cgroup\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.732914 kubelet[2119]: I0813 04:17:19.732486 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hubble-tls\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.732914 kubelet[2119]: I0813 04:17:19.732541 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hostproc\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.732914 kubelet[2119]: I0813 04:17:19.732570 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-net\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.732914 kubelet[2119]: I0813 04:17:19.732618 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cni-path\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 
04:17:19.732914 kubelet[2119]: I0813 04:17:19.732646 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-etc-cni-netd\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733493 kubelet[2119]: I0813 04:17:19.732674 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-lib-modules\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733493 kubelet[2119]: I0813 04:17:19.733046 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-clustermesh-secrets\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733493 kubelet[2119]: I0813 04:17:19.733087 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-bpf-maps\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733493 kubelet[2119]: I0813 04:17:19.733138 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-xtables-lock\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733493 kubelet[2119]: I0813 04:17:19.733178 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-config-path\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733493 kubelet[2119]: I0813 04:17:19.733225 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-kernel\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733913 kubelet[2119]: I0813 04:17:19.733253 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8d7m\" (UniqueName: \"kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-kube-api-access-m8d7m\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733913 kubelet[2119]: I0813 04:17:19.733318 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-run\") pod \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\" (UID: \"4d87b257-e40c-4f3b-b504-215e6f0ec0fb\") " Aug 13 04:17:19.733913 kubelet[2119]: I0813 04:17:19.733479 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.733913 kubelet[2119]: I0813 04:17:19.733575 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.735492 kubelet[2119]: I0813 04:17:19.734772 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hostproc" (OuterVolumeSpecName: "hostproc") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.735492 kubelet[2119]: I0813 04:17:19.734822 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.735492 kubelet[2119]: I0813 04:17:19.734852 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cni-path" (OuterVolumeSpecName: "cni-path") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.735492 kubelet[2119]: I0813 04:17:19.734884 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.735492 kubelet[2119]: I0813 04:17:19.734920 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.736558 kubelet[2119]: I0813 04:17:19.736515 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.737205 kubelet[2119]: I0813 04:17:19.737170 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.737362 kubelet[2119]: I0813 04:17:19.737293 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:17:19.742980 kubelet[2119]: I0813 04:17:19.742938 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 04:17:19.744002 kubelet[2119]: I0813 04:17:19.743953 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 04:17:19.751491 kubelet[2119]: I0813 04:17:19.747827 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:17:19.751491 kubelet[2119]: I0813 04:17:19.748643 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-kube-api-access-m8d7m" (OuterVolumeSpecName: "kube-api-access-m8d7m") pod "4d87b257-e40c-4f3b-b504-215e6f0ec0fb" (UID: "4d87b257-e40c-4f3b-b504-215e6f0ec0fb"). InnerVolumeSpecName "kube-api-access-m8d7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:17:19.802698 env[1293]: time="2025-08-13T04:17:19.802637741Z" level=info msg="StartContainer for \"0c91ccaf56cb16955172001a320dfe29a2e9120f343d14e27a64b696fbb3f4e5\" returns successfully" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834533 2119 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cni-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834588 2119 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-etc-cni-netd\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834604 2119 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-lib-modules\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834619 2119 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-clustermesh-secrets\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834640 2119 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-bpf-maps\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834655 2119 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-xtables-lock\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834670 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-config-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.834822 kubelet[2119]: I0813 04:17:19.834686 2119 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-kernel\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.835496 kubelet[2119]: I0813 04:17:19.834701 2119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8d7m\" (UniqueName: \"kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-kube-api-access-m8d7m\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.835496 kubelet[2119]: I0813 04:17:19.834720 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-run\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.835496 kubelet[2119]: I0813 04:17:19.834736 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-cilium-cgroup\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.835496 kubelet[2119]: I0813 04:17:19.834751 2119 reconciler_common.go:293] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hubble-tls\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.835496 kubelet[2119]: I0813 04:17:19.834765 2119 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-hostproc\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.835496 kubelet[2119]: I0813 04:17:19.834781 2119 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d87b257-e40c-4f3b-b504-215e6f0ec0fb-host-proc-sys-net\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:17:19.859943 systemd[1]: var-lib-kubelet-pods-4d87b257\x2de40c\x2d4f3b\x2db504\x2d215e6f0ec0fb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 04:17:19.860260 systemd[1]: var-lib-kubelet-pods-4d87b257\x2de40c\x2d4f3b\x2db504\x2d215e6f0ec0fb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 04:17:20.715538 kubelet[2119]: I0813 04:17:20.715388 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s4z57" podStartSLOduration=2.7153053419999997 podStartE2EDuration="2.715305342s" podCreationTimestamp="2025-08-13 04:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:17:20.685189436 +0000 UTC m=+5.449486551" watchObservedRunningTime="2025-08-13 04:17:20.715305342 +0000 UTC m=+5.479602431" Aug 13 04:17:20.841229 kubelet[2119]: I0813 04:17:20.840939 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-run\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.841229 kubelet[2119]: I0813 04:17:20.841174 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-config-path\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.841229 kubelet[2119]: I0813 04:17:20.841251 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cni-path\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842037 kubelet[2119]: I0813 04:17:20.841373 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-xtables-lock\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842037 kubelet[2119]: I0813 04:17:20.842008 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tcv\" (UniqueName: \"kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-kube-api-access-v8tcv\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 
04:17:20.842186 kubelet[2119]: I0813 04:17:20.842062 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-net\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842186 kubelet[2119]: I0813 04:17:20.842093 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-lib-modules\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842353 kubelet[2119]: I0813 04:17:20.842198 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hostproc\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842353 kubelet[2119]: I0813 04:17:20.842239 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-etc-cni-netd\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842516 kubelet[2119]: I0813 04:17:20.842337 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-clustermesh-secrets\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842600 kubelet[2119]: I0813 04:17:20.842517 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hubble-tls\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842600 kubelet[2119]: I0813 04:17:20.842581 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-cgroup\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842738 kubelet[2119]: I0813 04:17:20.842625 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-bpf-maps\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.842738 kubelet[2119]: I0813 04:17:20.842687 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-kernel\") pod \"cilium-6gzbw\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " pod="kube-system/cilium-6gzbw" Aug 13 04:17:20.962634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2492245147.mount: Deactivated successfully. 
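
The mount unit names systemd reports here (for example var-lib-containerd-tmpmounts-containerd\x2dmount2492245147.mount, or the var-lib-kubelet-pods-...\x7eprojected mounts earlier) are escaped filesystem paths: outer slashes are dropped, '/' becomes '-', and any other byte outside ASCII letters, digits, '_' and '.' — including '-' itself and '~' — becomes a \xNN escape. A simplified sketch of that rule (systemd-escape --path is the authoritative implementation):

package main

import (
	"fmt"
	"strings"
)

// escapePath reproduces, in simplified form, how systemd derives a mount unit
// name from a path: strip outer slashes, turn '/' into '-', keep ASCII
// letters, digits, '_' and '.', and hex-escape every other byte.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c == '_' || c == '.' ||
			(c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9'):
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' (0x2d) and '~' (0x7e) end up here
		}
	}
	return b.String()
}

func main() {
	// prints var-lib-containerd-tmpmounts-containerd\x2dmount2492245147.mount
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2492245147") + ".mount")
}
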
Aug 13 04:17:21.051754 env[1293]: time="2025-08-13T04:17:21.051563034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gzbw,Uid:dada73f5-f6ed-4e27-bc19-d43ce49f13f7,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:21.073375 env[1293]: time="2025-08-13T04:17:21.073237928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:21.073666 env[1293]: time="2025-08-13T04:17:21.073610634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:21.073874 env[1293]: time="2025-08-13T04:17:21.073822834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:21.074480 env[1293]: time="2025-08-13T04:17:21.074401093Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6 pid=2441 runtime=io.containerd.runc.v2 Aug 13 04:17:21.140420 env[1293]: time="2025-08-13T04:17:21.140361745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gzbw,Uid:dada73f5-f6ed-4e27-bc19-d43ce49f13f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\"" Aug 13 04:17:21.565806 kubelet[2119]: I0813 04:17:21.565748 2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d87b257-e40c-4f3b-b504-215e6f0ec0fb" path="/var/lib/kubelet/pods/4d87b257-e40c-4f3b-b504-215e6f0ec0fb/volumes" Aug 13 04:17:22.556843 env[1293]: time="2025-08-13T04:17:22.556736535Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:22.559299 env[1293]: time="2025-08-13T04:17:22.559236713Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:22.561670 env[1293]: time="2025-08-13T04:17:22.561625984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:22.562492 env[1293]: time="2025-08-13T04:17:22.562431868Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 04:17:22.565123 env[1293]: time="2025-08-13T04:17:22.565085586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 04:17:22.566369 env[1293]: time="2025-08-13T04:17:22.566330683Z" level=info msg="CreateContainer within sandbox \"4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 04:17:22.673172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466787481.mount: Deactivated successfully. 
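
The ImageCreate/PullImage events above are containerd's CRI plugin pulling the operator image by digest into the k8s.io namespace. A minimal sketch of the same pull through the containerd 1.x Go client; the socket path is the usual default and the program needs root on the node (both assumptions here, not something the log states):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// k8s.io is the namespace the CRI plugin uses, as seen in the "namespace=k8s.io" fields above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx,
		"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", img.Name())
}
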
Aug 13 04:17:22.684411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3111201763.mount: Deactivated successfully. Aug 13 04:17:22.691323 env[1293]: time="2025-08-13T04:17:22.691252614Z" level=info msg="CreateContainer within sandbox \"4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\"" Aug 13 04:17:22.692919 env[1293]: time="2025-08-13T04:17:22.692830337Z" level=info msg="StartContainer for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\"" Aug 13 04:17:22.769946 env[1293]: time="2025-08-13T04:17:22.769843309Z" level=info msg="StartContainer for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" returns successfully" Aug 13 04:17:25.549800 kubelet[2119]: I0813 04:17:25.549369 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lt8qn" podStartSLOduration=4.463355128 podStartE2EDuration="7.54930888s" podCreationTimestamp="2025-08-13 04:17:18 +0000 UTC" firstStartedPulling="2025-08-13 04:17:19.478125557 +0000 UTC m=+4.242422633" lastFinishedPulling="2025-08-13 04:17:22.564079298 +0000 UTC m=+7.328376385" observedRunningTime="2025-08-13 04:17:23.754995892 +0000 UTC m=+8.519293002" watchObservedRunningTime="2025-08-13 04:17:25.54930888 +0000 UTC m=+10.313605972" Aug 13 04:17:31.236389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953163262.mount: Deactivated successfully. Aug 13 04:17:36.659181 env[1293]: time="2025-08-13T04:17:36.658996038Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:36.665317 env[1293]: time="2025-08-13T04:17:36.665266402Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:36.670893 env[1293]: time="2025-08-13T04:17:36.670853930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 04:17:36.671833 env[1293]: time="2025-08-13T04:17:36.671790136Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 04:17:36.692925 env[1293]: time="2025-08-13T04:17:36.692870425Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 04:17:36.710636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914489734.mount: Deactivated successfully. Aug 13 04:17:36.720510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991944044.mount: Deactivated successfully. 
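
A quick check of the numbers in the "Observed pod startup duration" entry for cilium-operator above: the gap between podStartE2EDuration and podStartSLOduration equals the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the kubelet's monotonic m=+ offsets), which is consistent with the SLO figure excluding time spent pulling the image. The arithmetic, using only values quoted in the log:

package main

import "fmt"

func main() {
	const (
		e2e       = 7.54930888  // podStartE2EDuration, seconds
		slo       = 4.463355128 // podStartSLOduration, seconds
		firstPull = 4.242422633 // firstStartedPulling, m=+ offset in seconds
		lastPull  = 7.328376385 // lastFinishedPulling, m=+ offset in seconds
	)
	fmt.Printf("pull window: %.9f s\n", lastPull-firstPull) // ~3.085953752
	fmt.Printf("e2e - slo:   %.9f s\n", e2e-slo)            // ~3.085953752
}
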
Aug 13 04:17:36.721546 env[1293]: time="2025-08-13T04:17:36.720761248Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\"" Aug 13 04:17:36.723483 env[1293]: time="2025-08-13T04:17:36.722012365Z" level=info msg="StartContainer for \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\"" Aug 13 04:17:36.834905 env[1293]: time="2025-08-13T04:17:36.834847109Z" level=info msg="StartContainer for \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\" returns successfully" Aug 13 04:17:37.010365 env[1293]: time="2025-08-13T04:17:37.009640253Z" level=info msg="shim disconnected" id=c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5 Aug 13 04:17:37.010696 env[1293]: time="2025-08-13T04:17:37.010661398Z" level=warning msg="cleaning up after shim disconnected" id=c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5 namespace=k8s.io Aug 13 04:17:37.010844 env[1293]: time="2025-08-13T04:17:37.010815294Z" level=info msg="cleaning up dead shim" Aug 13 04:17:37.027301 env[1293]: time="2025-08-13T04:17:37.027242620Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:17:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2562 runtime=io.containerd.runc.v2\n" Aug 13 04:17:37.704770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5-rootfs.mount: Deactivated successfully. Aug 13 04:17:37.749734 env[1293]: time="2025-08-13T04:17:37.749106096Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 04:17:37.772149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516474763.mount: Deactivated successfully. Aug 13 04:17:37.786318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137850443.mount: Deactivated successfully. Aug 13 04:17:37.800258 env[1293]: time="2025-08-13T04:17:37.800195659Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\"" Aug 13 04:17:37.812468 env[1293]: time="2025-08-13T04:17:37.810439406Z" level=info msg="StartContainer for \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\"" Aug 13 04:17:37.893860 env[1293]: time="2025-08-13T04:17:37.893798373Z" level=info msg="StartContainer for \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\" returns successfully" Aug 13 04:17:37.914377 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 04:17:37.914886 systemd[1]: Stopped systemd-sysctl.service. Aug 13 04:17:37.916853 systemd[1]: Stopping systemd-sysctl.service... Aug 13 04:17:37.929672 systemd[1]: Starting systemd-sysctl.service... Aug 13 04:17:37.949011 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 04:17:37.959479 env[1293]: time="2025-08-13T04:17:37.959319826Z" level=info msg="shim disconnected" id=eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64 Aug 13 04:17:37.959479 env[1293]: time="2025-08-13T04:17:37.959396814Z" level=warning msg="cleaning up after shim disconnected" id=eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64 namespace=k8s.io Aug 13 04:17:37.959479 env[1293]: time="2025-08-13T04:17:37.959415420Z" level=info msg="cleaning up dead shim" Aug 13 04:17:37.970599 env[1293]: time="2025-08-13T04:17:37.970544077Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:17:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2626 runtime=io.containerd.runc.v2\n" Aug 13 04:17:38.746559 env[1293]: time="2025-08-13T04:17:38.746490232Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 04:17:38.784164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057968234.mount: Deactivated successfully. Aug 13 04:17:38.800473 env[1293]: time="2025-08-13T04:17:38.800398073Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\"" Aug 13 04:17:38.802917 env[1293]: time="2025-08-13T04:17:38.802878540Z" level=info msg="StartContainer for \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\"" Aug 13 04:17:38.900015 env[1293]: time="2025-08-13T04:17:38.899950701Z" level=info msg="StartContainer for \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\" returns successfully" Aug 13 04:17:38.940667 env[1293]: time="2025-08-13T04:17:38.940575266Z" level=info msg="shim disconnected" id=5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296 Aug 13 04:17:38.941016 env[1293]: time="2025-08-13T04:17:38.940983486Z" level=warning msg="cleaning up after shim disconnected" id=5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296 namespace=k8s.io Aug 13 04:17:38.941153 env[1293]: time="2025-08-13T04:17:38.941124346Z" level=info msg="cleaning up dead shim" Aug 13 04:17:38.953257 env[1293]: time="2025-08-13T04:17:38.953183091Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:17:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2685 runtime=io.containerd.runc.v2\n" Aug 13 04:17:39.752588 env[1293]: time="2025-08-13T04:17:39.752489794Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 04:17:39.777293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611451633.mount: Deactivated successfully. 
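
The mount-bpf-fs init container above, as its name suggests, ensures a BPF filesystem is mounted (conventionally at /sys/fs/bpf; the exact mountpoint is an assumption here). A small standard-library check that can be run on the node to confirm the result, keyed on the "bpf" fstype reported in /proc/self/mounts:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/self/mounts fields: device mountpoint fstype options ...
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == "bpf" {
			fmt.Printf("bpf filesystem mounted at %s\n", fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
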
Aug 13 04:17:39.796643 env[1293]: time="2025-08-13T04:17:39.796577079Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\"" Aug 13 04:17:39.797959 env[1293]: time="2025-08-13T04:17:39.797922871Z" level=info msg="StartContainer for \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\"" Aug 13 04:17:39.885629 env[1293]: time="2025-08-13T04:17:39.885541932Z" level=info msg="StartContainer for \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\" returns successfully" Aug 13 04:17:39.913563 env[1293]: time="2025-08-13T04:17:39.913500334Z" level=info msg="shim disconnected" id=61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb Aug 13 04:17:39.913999 env[1293]: time="2025-08-13T04:17:39.913969096Z" level=warning msg="cleaning up after shim disconnected" id=61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb namespace=k8s.io Aug 13 04:17:39.914177 env[1293]: time="2025-08-13T04:17:39.914148438Z" level=info msg="cleaning up dead shim" Aug 13 04:17:39.925534 env[1293]: time="2025-08-13T04:17:39.925483198Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:17:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2740 runtime=io.containerd.runc.v2\n" Aug 13 04:17:40.705293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb-rootfs.mount: Deactivated successfully. Aug 13 04:17:40.759149 env[1293]: time="2025-08-13T04:17:40.758682244Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 04:17:40.783857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783089258.mount: Deactivated successfully. 
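
The repeating CreateContainer / StartContainer / "shim disconnected" cycle above is the cilium pod's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each running to completion before cilium-agent starts. A client-go sketch that reads the resulting init-container statuses; the pod name cilium-6gzbw comes from the log, while the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-6gzbw", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range pod.Status.InitContainerStatuses {
		if t := s.State.Terminated; t != nil {
			// Each completed init container corresponds to one StartContainer /
			// "shim disconnected" pair in the containerd log above.
			fmt.Printf("%-25s exit=%d finished=%v\n", s.Name, t.ExitCode, t.FinishedAt.Time)
		} else {
			fmt.Printf("%-25s still running or waiting\n", s.Name)
		}
	}
}
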
Aug 13 04:17:40.804431 env[1293]: time="2025-08-13T04:17:40.804262351Z" level=info msg="CreateContainer within sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\"" Aug 13 04:17:40.806914 env[1293]: time="2025-08-13T04:17:40.805649792Z" level=info msg="StartContainer for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\"" Aug 13 04:17:40.895912 env[1293]: time="2025-08-13T04:17:40.895824819Z" level=info msg="StartContainer for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" returns successfully" Aug 13 04:17:41.117632 kubelet[2119]: I0813 04:17:41.117554 2119 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 04:17:41.198913 kubelet[2119]: W0813 04:17:41.198858 2119 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-h1d3j.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-h1d3j.gb1.brightbox.com' and this object Aug 13 04:17:41.211178 kubelet[2119]: E0813 04:17:41.211121 2119 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-h1d3j.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-h1d3j.gb1.brightbox.com' and this object" logger="UnhandledError" Aug 13 04:17:41.241364 kubelet[2119]: I0813 04:17:41.241279 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99lz\" (UniqueName: \"kubernetes.io/projected/626bfc9e-80e3-468c-a263-871009649101-kube-api-access-w99lz\") pod \"coredns-7c65d6cfc9-8klnp\" (UID: \"626bfc9e-80e3-468c-a263-871009649101\") " pod="kube-system/coredns-7c65d6cfc9-8klnp" Aug 13 04:17:41.241597 kubelet[2119]: I0813 04:17:41.241405 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4d9d768-c902-4463-800d-5e072f2b68a8-config-volume\") pod \"coredns-7c65d6cfc9-xfr9z\" (UID: \"a4d9d768-c902-4463-800d-5e072f2b68a8\") " pod="kube-system/coredns-7c65d6cfc9-xfr9z" Aug 13 04:17:41.241597 kubelet[2119]: I0813 04:17:41.241504 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/626bfc9e-80e3-468c-a263-871009649101-config-volume\") pod \"coredns-7c65d6cfc9-8klnp\" (UID: \"626bfc9e-80e3-468c-a263-871009649101\") " pod="kube-system/coredns-7c65d6cfc9-8klnp" Aug 13 04:17:41.241740 kubelet[2119]: I0813 04:17:41.241595 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql4qv\" (UniqueName: \"kubernetes.io/projected/a4d9d768-c902-4463-800d-5e072f2b68a8-kube-api-access-ql4qv\") pod \"coredns-7c65d6cfc9-xfr9z\" (UID: \"a4d9d768-c902-4463-800d-5e072f2b68a8\") " pod="kube-system/coredns-7c65d6cfc9-xfr9z" Aug 13 04:17:41.802393 kubelet[2119]: I0813 04:17:41.802253 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6gzbw" podStartSLOduration=6.264281051 
podStartE2EDuration="21.798420826s" podCreationTimestamp="2025-08-13 04:17:20 +0000 UTC" firstStartedPulling="2025-08-13 04:17:21.142354763 +0000 UTC m=+5.906651845" lastFinishedPulling="2025-08-13 04:17:36.676494531 +0000 UTC m=+21.440791620" observedRunningTime="2025-08-13 04:17:41.797290982 +0000 UTC m=+26.561588090" watchObservedRunningTime="2025-08-13 04:17:41.798420826 +0000 UTC m=+26.562717915" Aug 13 04:17:42.342927 kubelet[2119]: E0813 04:17:42.342866 2119 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Aug 13 04:17:42.343757 kubelet[2119]: E0813 04:17:42.343172 2119 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Aug 13 04:17:42.347072 kubelet[2119]: E0813 04:17:42.347029 2119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/626bfc9e-80e3-468c-a263-871009649101-config-volume podName:626bfc9e-80e3-468c-a263-871009649101 nodeName:}" failed. No retries permitted until 2025-08-13 04:17:42.843650197 +0000 UTC m=+27.607947274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/626bfc9e-80e3-468c-a263-871009649101-config-volume") pod "coredns-7c65d6cfc9-8klnp" (UID: "626bfc9e-80e3-468c-a263-871009649101") : failed to sync configmap cache: timed out waiting for the condition Aug 13 04:17:42.347235 kubelet[2119]: E0813 04:17:42.347095 2119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4d9d768-c902-4463-800d-5e072f2b68a8-config-volume podName:a4d9d768-c902-4463-800d-5e072f2b68a8 nodeName:}" failed. No retries permitted until 2025-08-13 04:17:42.847070553 +0000 UTC m=+27.611367641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a4d9d768-c902-4463-800d-5e072f2b68a8-config-volume") pod "coredns-7c65d6cfc9-xfr9z" (UID: "a4d9d768-c902-4463-800d-5e072f2b68a8") : failed to sync configmap cache: timed out waiting for the condition Aug 13 04:17:42.985558 env[1293]: time="2025-08-13T04:17:42.985448532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8klnp,Uid:626bfc9e-80e3-468c-a263-871009649101,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:43.002506 env[1293]: time="2025-08-13T04:17:43.002130964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfr9z,Uid:a4d9d768-c902-4463-800d-5e072f2b68a8,Namespace:kube-system,Attempt:0,}" Aug 13 04:17:43.634788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 04:17:43.637051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 04:17:43.640618 systemd-networkd[1069]: cilium_host: Link UP Aug 13 04:17:43.640876 systemd-networkd[1069]: cilium_net: Link UP Aug 13 04:17:43.641192 systemd-networkd[1069]: cilium_net: Gained carrier Aug 13 04:17:43.644720 systemd-networkd[1069]: cilium_host: Gained carrier Aug 13 04:17:43.852786 systemd-networkd[1069]: cilium_vxlan: Link UP Aug 13 04:17:43.856257 systemd-networkd[1069]: cilium_vxlan: Gained carrier Aug 13 04:17:43.999576 systemd[1]: run-containerd-runc-k8s.io-b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a-runc.KzIRxz.mount: Deactivated successfully. 
Aug 13 04:17:44.009725 systemd-networkd[1069]: cilium_host: Gained IPv6LL Aug 13 04:17:44.410541 kernel: NET: Registered PF_ALG protocol family Aug 13 04:17:44.537709 systemd-networkd[1069]: cilium_net: Gained IPv6LL Aug 13 04:17:45.496737 systemd-networkd[1069]: lxc_health: Link UP Aug 13 04:17:45.519562 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 04:17:45.519822 systemd-networkd[1069]: lxc_health: Gained carrier Aug 13 04:17:45.882433 systemd-networkd[1069]: cilium_vxlan: Gained IPv6LL Aug 13 04:17:46.138061 systemd-networkd[1069]: lxc9813aff84447: Link UP Aug 13 04:17:46.142503 kernel: eth0: renamed from tmpa2e01 Aug 13 04:17:46.173977 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9813aff84447: link becomes ready Aug 13 04:17:46.174464 systemd-networkd[1069]: lxc9813aff84447: Gained carrier Aug 13 04:17:46.191818 systemd-networkd[1069]: lxc62ae1d162eec: Link UP Aug 13 04:17:46.202503 kernel: eth0: renamed from tmp51ea6 Aug 13 04:17:46.207692 systemd-networkd[1069]: lxc62ae1d162eec: Gained carrier Aug 13 04:17:46.210151 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc62ae1d162eec: link becomes ready Aug 13 04:17:46.329158 systemd[1]: run-containerd-runc-k8s.io-b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a-runc.dXsuC3.mount: Deactivated successfully. Aug 13 04:17:46.623602 systemd-networkd[1069]: lxc_health: Gained IPv6LL Aug 13 04:17:47.929685 systemd-networkd[1069]: lxc9813aff84447: Gained IPv6LL Aug 13 04:17:47.993641 systemd-networkd[1069]: lxc62ae1d162eec: Gained IPv6LL Aug 13 04:17:48.607581 kubelet[2119]: I0813 04:17:48.607512 2119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 04:17:48.792242 systemd[1]: run-containerd-runc-k8s.io-b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a-runc.0ra2oZ.mount: Deactivated successfully. Aug 13 04:17:50.986972 systemd[1]: run-containerd-runc-k8s.io-b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a-runc.hj4yna.mount: Deactivated successfully. Aug 13 04:17:52.244594 env[1293]: time="2025-08-13T04:17:52.243857729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:52.244594 env[1293]: time="2025-08-13T04:17:52.243955031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:52.244594 env[1293]: time="2025-08-13T04:17:52.243974757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:52.245842 env[1293]: time="2025-08-13T04:17:52.245011443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2e01fee0b8d6bd589921946820d99a8b2742ce070fe4ad053f1364ff83ec92d pid=3383 runtime=io.containerd.runc.v2 Aug 13 04:17:52.338695 sudo[1454]: pam_unix(sudo:session): session closed for user root Aug 13 04:17:52.342597 systemd[1]: run-containerd-runc-k8s.io-a2e01fee0b8d6bd589921946820d99a8b2742ce070fe4ad053f1364ff83ec92d-runc.aO7NfO.mount: Deactivated successfully. Aug 13 04:17:52.428945 env[1293]: time="2025-08-13T04:17:52.428731192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:17:52.438918 env[1293]: time="2025-08-13T04:17:52.428903032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:17:52.438918 env[1293]: time="2025-08-13T04:17:52.429195599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:17:52.438918 env[1293]: time="2025-08-13T04:17:52.429409680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51ea68e8dafb1677fac3a5490f39284b7b0e49de833217b7dc24d8ab5e250403 pid=3425 runtime=io.containerd.runc.v2 Aug 13 04:17:52.508262 sshd[1450]: pam_unix(sshd:session): session closed for user core Aug 13 04:17:52.542393 systemd[1]: sshd@4-10.244.14.178:22-139.178.89.65:59424.service: Deactivated successfully. Aug 13 04:17:52.544167 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 04:17:52.544196 systemd-logind[1282]: Session 5 logged out. Waiting for processes to exit. Aug 13 04:17:52.552607 systemd-logind[1282]: Removed session 5. Aug 13 04:17:52.612520 env[1293]: time="2025-08-13T04:17:52.612412919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xfr9z,Uid:a4d9d768-c902-4463-800d-5e072f2b68a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2e01fee0b8d6bd589921946820d99a8b2742ce070fe4ad053f1364ff83ec92d\"" Aug 13 04:17:52.622435 env[1293]: time="2025-08-13T04:17:52.621689938Z" level=info msg="CreateContainer within sandbox \"a2e01fee0b8d6bd589921946820d99a8b2742ce070fe4ad053f1364ff83ec92d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 04:17:52.628871 env[1293]: time="2025-08-13T04:17:52.628811691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8klnp,Uid:626bfc9e-80e3-468c-a263-871009649101,Namespace:kube-system,Attempt:0,} returns sandbox id \"51ea68e8dafb1677fac3a5490f39284b7b0e49de833217b7dc24d8ab5e250403\"" Aug 13 04:17:52.632713 env[1293]: time="2025-08-13T04:17:52.631971854Z" level=info msg="CreateContainer within sandbox \"51ea68e8dafb1677fac3a5490f39284b7b0e49de833217b7dc24d8ab5e250403\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 04:17:52.650020 env[1293]: time="2025-08-13T04:17:52.649937366Z" level=info msg="CreateContainer within sandbox \"a2e01fee0b8d6bd589921946820d99a8b2742ce070fe4ad053f1364ff83ec92d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94908df237bd89df19128b6f450f97bd30fdb0e2f73a1c6fba3d578182e6479c\"" Aug 13 04:17:52.650678 env[1293]: time="2025-08-13T04:17:52.650605859Z" level=info msg="CreateContainer within sandbox \"51ea68e8dafb1677fac3a5490f39284b7b0e49de833217b7dc24d8ab5e250403\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d2ad0cb5afdbedc4a754b55c71b5b992c3f54c0a20409e2f8dea235400d5e60\"" Aug 13 04:17:52.652662 env[1293]: time="2025-08-13T04:17:52.650910724Z" level=info msg="StartContainer for \"94908df237bd89df19128b6f450f97bd30fdb0e2f73a1c6fba3d578182e6479c\"" Aug 13 04:17:52.653325 env[1293]: time="2025-08-13T04:17:52.653275452Z" level=info msg="StartContainer for \"9d2ad0cb5afdbedc4a754b55c71b5b992c3f54c0a20409e2f8dea235400d5e60\"" Aug 13 04:17:52.763740 env[1293]: time="2025-08-13T04:17:52.762636898Z" level=info msg="StartContainer for \"9d2ad0cb5afdbedc4a754b55c71b5b992c3f54c0a20409e2f8dea235400d5e60\" returns successfully" Aug 13 04:17:52.776008 env[1293]: time="2025-08-13T04:17:52.775941571Z" level=info msg="StartContainer for \"94908df237bd89df19128b6f450f97bd30fdb0e2f73a1c6fba3d578182e6479c\" returns successfully" Aug 13 04:17:52.841333 
kubelet[2119]: I0813 04:17:52.841171 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xfr9z" podStartSLOduration=34.84107897 podStartE2EDuration="34.84107897s" podCreationTimestamp="2025-08-13 04:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:17:52.822075869 +0000 UTC m=+37.586372968" watchObservedRunningTime="2025-08-13 04:17:52.84107897 +0000 UTC m=+37.605376072" Aug 13 04:17:53.835930 kubelet[2119]: I0813 04:17:53.835656 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8klnp" podStartSLOduration=35.835613952 podStartE2EDuration="35.835613952s" podCreationTimestamp="2025-08-13 04:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:17:52.84262729 +0000 UTC m=+37.606924381" watchObservedRunningTime="2025-08-13 04:17:53.835613952 +0000 UTC m=+38.599911047" Aug 13 04:18:58.494845 systemd[1]: Started sshd@5-10.244.14.178:22-139.178.89.65:45124.service. Aug 13 04:18:59.412284 sshd[3552]: Accepted publickey for core from 139.178.89.65 port 45124 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:18:59.415637 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:18:59.425696 systemd-logind[1282]: New session 6 of user core. Aug 13 04:18:59.426883 systemd[1]: Started session-6.scope. Aug 13 04:19:00.455206 sshd[3552]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:00.460701 systemd-logind[1282]: Session 6 logged out. Waiting for processes to exit. Aug 13 04:19:00.461700 systemd[1]: sshd@5-10.244.14.178:22-139.178.89.65:45124.service: Deactivated successfully. Aug 13 04:19:00.462971 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 04:19:00.466412 systemd-logind[1282]: Removed session 6. Aug 13 04:19:05.602336 systemd[1]: Started sshd@6-10.244.14.178:22-139.178.89.65:53308.service. Aug 13 04:19:06.500014 sshd[3566]: Accepted publickey for core from 139.178.89.65 port 53308 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:06.502779 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:06.510418 systemd-logind[1282]: New session 7 of user core. Aug 13 04:19:06.511440 systemd[1]: Started session-7.scope. Aug 13 04:19:07.217680 sshd[3566]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:07.221611 systemd-logind[1282]: Session 7 logged out. Waiting for processes to exit. Aug 13 04:19:07.221921 systemd[1]: sshd@6-10.244.14.178:22-139.178.89.65:53308.service: Deactivated successfully. Aug 13 04:19:07.223226 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 04:19:07.224799 systemd-logind[1282]: Removed session 7. Aug 13 04:19:10.948192 systemd[1]: Started sshd@7-10.244.14.178:22-80.94.93.119:33072.service. Aug 13 04:19:11.288575 sshd[3579]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119 user=root Aug 13 04:19:12.384631 systemd[1]: Started sshd@8-10.244.14.178:22-139.178.89.65:56696.service. 
Aug 13 04:19:12.854494 sshd[3579]: Failed password for root from 80.94.93.119 port 33072 ssh2 Aug 13 04:19:13.341705 sshd[3581]: Accepted publickey for core from 139.178.89.65 port 56696 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:13.344314 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:13.355549 systemd[1]: Started session-8.scope. Aug 13 04:19:13.355910 systemd-logind[1282]: New session 8 of user core. Aug 13 04:19:14.113810 sshd[3581]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:14.118164 systemd[1]: sshd@8-10.244.14.178:22-139.178.89.65:56696.service: Deactivated successfully. Aug 13 04:19:14.120118 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 04:19:14.120642 systemd-logind[1282]: Session 8 logged out. Waiting for processes to exit. Aug 13 04:19:14.122090 systemd-logind[1282]: Removed session 8. Aug 13 04:19:16.147088 sshd[3579]: Failed password for root from 80.94.93.119 port 33072 ssh2 Aug 13 04:19:19.242923 sshd[3579]: Failed password for root from 80.94.93.119 port 33072 ssh2 Aug 13 04:19:19.255225 systemd[1]: Started sshd@9-10.244.14.178:22-139.178.89.65:42574.service. Aug 13 04:19:19.346141 sshd[3579]: Received disconnect from 80.94.93.119 port 33072:11: [preauth] Aug 13 04:19:19.346141 sshd[3579]: Disconnected from authenticating user root 80.94.93.119 port 33072 [preauth] Aug 13 04:19:19.346880 sshd[3579]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119 user=root Aug 13 04:19:19.348376 systemd[1]: sshd@7-10.244.14.178:22-80.94.93.119:33072.service: Deactivated successfully. Aug 13 04:19:19.389880 systemd[1]: Started sshd@10-10.244.14.178:22-80.94.93.119:64892.service. Aug 13 04:19:19.730443 sshd[3601]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119 user=root Aug 13 04:19:20.176277 sshd[3597]: Accepted publickey for core from 139.178.89.65 port 42574 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:20.178499 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:20.185889 systemd-logind[1282]: New session 9 of user core. Aug 13 04:19:20.187229 systemd[1]: Started session-9.scope. Aug 13 04:19:20.909523 sshd[3597]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:20.914146 systemd[1]: sshd@9-10.244.14.178:22-139.178.89.65:42574.service: Deactivated successfully. Aug 13 04:19:20.915951 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 04:19:20.916302 systemd-logind[1282]: Session 9 logged out. Waiting for processes to exit. Aug 13 04:19:20.917597 systemd-logind[1282]: Removed session 9. Aug 13 04:19:21.057167 systemd[1]: Started sshd@11-10.244.14.178:22-139.178.89.65:42588.service. Aug 13 04:19:21.978646 sshd[3617]: Accepted publickey for core from 139.178.89.65 port 42588 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:21.981016 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:21.990096 systemd-logind[1282]: New session 10 of user core. Aug 13 04:19:21.991403 systemd[1]: Started session-10.scope. 
Aug 13 04:19:22.059063 sshd[3601]: Failed password for root from 80.94.93.119 port 64892 ssh2 Aug 13 04:19:22.416399 sshd[3601]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Aug 13 04:19:22.786548 sshd[3617]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:22.790351 systemd[1]: sshd@11-10.244.14.178:22-139.178.89.65:42588.service: Deactivated successfully. Aug 13 04:19:22.792016 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 04:19:22.792053 systemd-logind[1282]: Session 10 logged out. Waiting for processes to exit. Aug 13 04:19:22.793646 systemd-logind[1282]: Removed session 10. Aug 13 04:19:22.934655 systemd[1]: Started sshd@12-10.244.14.178:22-139.178.89.65:42592.service. Aug 13 04:19:23.840521 sshd[3627]: Accepted publickey for core from 139.178.89.65 port 42592 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:23.843426 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:23.850526 systemd-logind[1282]: New session 11 of user core. Aug 13 04:19:23.852392 systemd[1]: Started session-11.scope. Aug 13 04:19:24.157323 sshd[3601]: Failed password for root from 80.94.93.119 port 64892 ssh2 Aug 13 04:19:24.558777 sshd[3627]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:24.563224 systemd[1]: sshd@12-10.244.14.178:22-139.178.89.65:42592.service: Deactivated successfully. Aug 13 04:19:24.564822 systemd-logind[1282]: Session 11 logged out. Waiting for processes to exit. Aug 13 04:19:24.564912 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 04:19:24.567371 systemd-logind[1282]: Removed session 11. Aug 13 04:19:27.291798 sshd[3601]: Failed password for root from 80.94.93.119 port 64892 ssh2 Aug 13 04:19:27.828596 sshd[3601]: Received disconnect from 80.94.93.119 port 64892:11: [preauth] Aug 13 04:19:27.828596 sshd[3601]: Disconnected from authenticating user root 80.94.93.119 port 64892 [preauth] Aug 13 04:19:27.829080 sshd[3601]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119 user=root Aug 13 04:19:27.830643 systemd[1]: sshd@10-10.244.14.178:22-80.94.93.119:64892.service: Deactivated successfully. Aug 13 04:19:27.869319 systemd[1]: Started sshd@13-10.244.14.178:22-80.94.93.119:20902.service. Aug 13 04:19:28.207336 sshd[3641]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119 user=root Aug 13 04:19:29.727534 systemd[1]: Started sshd@14-10.244.14.178:22-139.178.89.65:44314.service. Aug 13 04:19:30.690761 sshd[3643]: Accepted publickey for core from 139.178.89.65 port 44314 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:30.692815 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:30.700257 systemd-logind[1282]: New session 12 of user core. Aug 13 04:19:30.701267 systemd[1]: Started session-12.scope. Aug 13 04:19:30.771606 sshd[3641]: Failed password for root from 80.94.93.119 port 20902 ssh2 Aug 13 04:19:31.462870 sshd[3643]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:31.468235 systemd[1]: sshd@14-10.244.14.178:22-139.178.89.65:44314.service: Deactivated successfully. Aug 13 04:19:31.470772 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 04:19:31.470833 systemd-logind[1282]: Session 12 logged out. Waiting for processes to exit. Aug 13 04:19:31.472745 systemd-logind[1282]: Removed session 12. 
Aug 13 04:19:33.065921 sshd[3641]: Failed password for root from 80.94.93.119 port 20902 ssh2 Aug 13 04:19:35.163938 sshd[3641]: Failed password for root from 80.94.93.119 port 20902 ssh2 Aug 13 04:19:36.267319 sshd[3641]: Received disconnect from 80.94.93.119 port 20902:11: [preauth] Aug 13 04:19:36.267319 sshd[3641]: Disconnected from authenticating user root 80.94.93.119 port 20902 [preauth] Aug 13 04:19:36.268147 sshd[3641]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119 user=root Aug 13 04:19:36.270047 systemd[1]: sshd@13-10.244.14.178:22-80.94.93.119:20902.service: Deactivated successfully. Aug 13 04:19:36.600851 systemd[1]: Started sshd@15-10.244.14.178:22-139.178.89.65:44328.service. Aug 13 04:19:37.503563 sshd[3658]: Accepted publickey for core from 139.178.89.65 port 44328 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:37.505152 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:37.514702 systemd[1]: Started session-13.scope. Aug 13 04:19:37.516541 systemd-logind[1282]: New session 13 of user core. Aug 13 04:19:38.410013 sshd[3658]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:38.414274 systemd-logind[1282]: Session 13 logged out. Waiting for processes to exit. Aug 13 04:19:38.415680 systemd[1]: sshd@15-10.244.14.178:22-139.178.89.65:44328.service: Deactivated successfully. Aug 13 04:19:38.416876 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 04:19:38.419217 systemd-logind[1282]: Removed session 13. Aug 13 04:19:43.560883 systemd[1]: Started sshd@16-10.244.14.178:22-139.178.89.65:35574.service. Aug 13 04:19:44.477287 sshd[3672]: Accepted publickey for core from 139.178.89.65 port 35574 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:44.480235 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:44.487375 systemd-logind[1282]: New session 14 of user core. Aug 13 04:19:44.489187 systemd[1]: Started session-14.scope. Aug 13 04:19:45.283407 sshd[3672]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:45.287779 systemd[1]: sshd@16-10.244.14.178:22-139.178.89.65:35574.service: Deactivated successfully. Aug 13 04:19:45.288984 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 04:19:45.290272 systemd-logind[1282]: Session 14 logged out. Waiting for processes to exit. Aug 13 04:19:45.291392 systemd-logind[1282]: Removed session 14. Aug 13 04:19:45.429411 systemd[1]: Started sshd@17-10.244.14.178:22-139.178.89.65:35588.service. Aug 13 04:19:46.329620 sshd[3685]: Accepted publickey for core from 139.178.89.65 port 35588 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:46.332933 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:46.342595 systemd[1]: Started session-15.scope. Aug 13 04:19:46.343405 systemd-logind[1282]: New session 15 of user core. Aug 13 04:19:47.368480 sshd[3685]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:47.373870 systemd[1]: sshd@17-10.244.14.178:22-139.178.89.65:35588.service: Deactivated successfully. Aug 13 04:19:47.375779 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 04:19:47.375816 systemd-logind[1282]: Session 15 logged out. Waiting for processes to exit. Aug 13 04:19:47.378067 systemd-logind[1282]: Removed session 15. 
Aug 13 04:19:47.513823 systemd[1]: Started sshd@18-10.244.14.178:22-139.178.89.65:35598.service. Aug 13 04:19:48.420897 sshd[3696]: Accepted publickey for core from 139.178.89.65 port 35598 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:48.423090 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:48.431395 systemd-logind[1282]: New session 16 of user core. Aug 13 04:19:48.432387 systemd[1]: Started session-16.scope. Aug 13 04:19:51.079122 sshd[3696]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:51.086497 systemd[1]: sshd@18-10.244.14.178:22-139.178.89.65:35598.service: Deactivated successfully. Aug 13 04:19:51.088862 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 04:19:51.091502 systemd-logind[1282]: Session 16 logged out. Waiting for processes to exit. Aug 13 04:19:51.094295 systemd-logind[1282]: Removed session 16. Aug 13 04:19:51.228895 systemd[1]: Started sshd@19-10.244.14.178:22-139.178.89.65:57808.service. Aug 13 04:19:52.136115 sshd[3717]: Accepted publickey for core from 139.178.89.65 port 57808 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:52.139003 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:52.148711 systemd-logind[1282]: New session 17 of user core. Aug 13 04:19:52.150026 systemd[1]: Started session-17.scope. Aug 13 04:19:53.208903 sshd[3717]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:53.213828 systemd[1]: sshd@19-10.244.14.178:22-139.178.89.65:57808.service: Deactivated successfully. Aug 13 04:19:53.215663 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 04:19:53.217543 systemd-logind[1282]: Session 17 logged out. Waiting for processes to exit. Aug 13 04:19:53.219675 systemd-logind[1282]: Removed session 17. Aug 13 04:19:53.356257 systemd[1]: Started sshd@20-10.244.14.178:22-139.178.89.65:57816.service. Aug 13 04:19:54.257961 sshd[3728]: Accepted publickey for core from 139.178.89.65 port 57816 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:19:54.261068 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:19:54.274689 systemd[1]: Started session-18.scope. Aug 13 04:19:54.275547 systemd-logind[1282]: New session 18 of user core. Aug 13 04:19:54.973043 sshd[3728]: pam_unix(sshd:session): session closed for user core Aug 13 04:19:54.978013 systemd-logind[1282]: Session 18 logged out. Waiting for processes to exit. Aug 13 04:19:54.978635 systemd[1]: sshd@20-10.244.14.178:22-139.178.89.65:57816.service: Deactivated successfully. Aug 13 04:19:54.980439 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 04:19:54.981693 systemd-logind[1282]: Removed session 18. Aug 13 04:20:00.122222 systemd[1]: Started sshd@21-10.244.14.178:22-139.178.89.65:40270.service. Aug 13 04:20:01.026953 sshd[3741]: Accepted publickey for core from 139.178.89.65 port 40270 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:01.029826 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:01.039239 systemd[1]: Started session-19.scope. Aug 13 04:20:01.040217 systemd-logind[1282]: New session 19 of user core. Aug 13 04:20:01.747898 sshd[3741]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:01.752268 systemd[1]: sshd@21-10.244.14.178:22-139.178.89.65:40270.service: Deactivated successfully. 
Aug 13 04:20:01.754392 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 04:20:01.755051 systemd-logind[1282]: Session 19 logged out. Waiting for processes to exit. Aug 13 04:20:01.758253 systemd-logind[1282]: Removed session 19. Aug 13 04:20:06.898838 systemd[1]: Started sshd@22-10.244.14.178:22-139.178.89.65:40276.service. Aug 13 04:20:07.799297 sshd[3756]: Accepted publickey for core from 139.178.89.65 port 40276 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:07.801739 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:07.813394 systemd-logind[1282]: New session 20 of user core. Aug 13 04:20:07.813772 systemd[1]: Started session-20.scope. Aug 13 04:20:08.525103 sshd[3756]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:08.529365 systemd-logind[1282]: Session 20 logged out. Waiting for processes to exit. Aug 13 04:20:08.530453 systemd[1]: sshd@22-10.244.14.178:22-139.178.89.65:40276.service: Deactivated successfully. Aug 13 04:20:08.531759 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 04:20:08.533210 systemd-logind[1282]: Removed session 20. Aug 13 04:20:13.696002 systemd[1]: Started sshd@23-10.244.14.178:22-139.178.89.65:36024.service. Aug 13 04:20:14.652375 sshd[3769]: Accepted publickey for core from 139.178.89.65 port 36024 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:14.654218 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:14.661975 systemd-logind[1282]: New session 21 of user core. Aug 13 04:20:14.662974 systemd[1]: Started session-21.scope. Aug 13 04:20:15.410704 sshd[3769]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:15.415007 systemd[1]: sshd@23-10.244.14.178:22-139.178.89.65:36024.service: Deactivated successfully. Aug 13 04:20:15.416583 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 04:20:15.416602 systemd-logind[1282]: Session 21 logged out. Waiting for processes to exit. Aug 13 04:20:15.418435 systemd-logind[1282]: Removed session 21. Aug 13 04:20:15.549217 systemd[1]: Started sshd@24-10.244.14.178:22-139.178.89.65:36040.service. Aug 13 04:20:16.452905 sshd[3781]: Accepted publickey for core from 139.178.89.65 port 36040 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:16.454919 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:16.462272 systemd-logind[1282]: New session 22 of user core. Aug 13 04:20:16.463274 systemd[1]: Started session-22.scope. Aug 13 04:20:19.152987 systemd[1]: run-containerd-runc-k8s.io-b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a-runc.2vUxhJ.mount: Deactivated successfully. 
Aug 13 04:20:19.157143 env[1293]: time="2025-08-13T04:20:19.157032623Z" level=info msg="StopContainer for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" with timeout 30 (s)" Aug 13 04:20:19.158649 env[1293]: time="2025-08-13T04:20:19.158428659Z" level=info msg="Stop container \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" with signal terminated" Aug 13 04:20:19.209581 env[1293]: time="2025-08-13T04:20:19.209441264Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 04:20:19.226937 env[1293]: time="2025-08-13T04:20:19.223125790Z" level=info msg="StopContainer for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" with timeout 2 (s)" Aug 13 04:20:19.226937 env[1293]: time="2025-08-13T04:20:19.223538200Z" level=info msg="Stop container \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" with signal terminated" Aug 13 04:20:19.245579 systemd-networkd[1069]: lxc_health: Link DOWN Aug 13 04:20:19.245592 systemd-networkd[1069]: lxc_health: Lost carrier Aug 13 04:20:19.287002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a-rootfs.mount: Deactivated successfully. Aug 13 04:20:19.309948 env[1293]: time="2025-08-13T04:20:19.309882730Z" level=info msg="shim disconnected" id=08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a Aug 13 04:20:19.310199 env[1293]: time="2025-08-13T04:20:19.309946819Z" level=warning msg="cleaning up after shim disconnected" id=08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a namespace=k8s.io Aug 13 04:20:19.310199 env[1293]: time="2025-08-13T04:20:19.309975586Z" level=info msg="cleaning up dead shim" Aug 13 04:20:19.330726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a-rootfs.mount: Deactivated successfully. Aug 13 04:20:19.337135 env[1293]: time="2025-08-13T04:20:19.337065401Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3845 runtime=io.containerd.runc.v2\n" Aug 13 04:20:19.340938 env[1293]: time="2025-08-13T04:20:19.340839302Z" level=info msg="StopContainer for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" returns successfully" Aug 13 04:20:19.342161 env[1293]: time="2025-08-13T04:20:19.342118869Z" level=info msg="StopPodSandbox for \"4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364\"" Aug 13 04:20:19.342405 env[1293]: time="2025-08-13T04:20:19.342366507Z" level=info msg="Container to stop \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:19.348281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364-shm.mount: Deactivated successfully. 
Aug 13 04:20:19.350839 env[1293]: time="2025-08-13T04:20:19.350790121Z" level=info msg="shim disconnected" id=b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a Aug 13 04:20:19.351604 env[1293]: time="2025-08-13T04:20:19.351561551Z" level=warning msg="cleaning up after shim disconnected" id=b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a namespace=k8s.io Aug 13 04:20:19.351767 env[1293]: time="2025-08-13T04:20:19.351736432Z" level=info msg="cleaning up dead shim" Aug 13 04:20:19.380500 env[1293]: time="2025-08-13T04:20:19.380386490Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3866 runtime=io.containerd.runc.v2\n" Aug 13 04:20:19.386116 env[1293]: time="2025-08-13T04:20:19.386069675Z" level=info msg="StopContainer for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" returns successfully" Aug 13 04:20:19.386991 env[1293]: time="2025-08-13T04:20:19.386947314Z" level=info msg="StopPodSandbox for \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\"" Aug 13 04:20:19.387135 env[1293]: time="2025-08-13T04:20:19.387057720Z" level=info msg="Container to stop \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:19.387135 env[1293]: time="2025-08-13T04:20:19.387089273Z" level=info msg="Container to stop \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:19.387135 env[1293]: time="2025-08-13T04:20:19.387109020Z" level=info msg="Container to stop \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:19.387320 env[1293]: time="2025-08-13T04:20:19.387139563Z" level=info msg="Container to stop \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:19.387320 env[1293]: time="2025-08-13T04:20:19.387158974Z" level=info msg="Container to stop \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:19.431949 env[1293]: time="2025-08-13T04:20:19.430127588Z" level=info msg="shim disconnected" id=4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364 Aug 13 04:20:19.432490 env[1293]: time="2025-08-13T04:20:19.432441804Z" level=warning msg="cleaning up after shim disconnected" id=4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364 namespace=k8s.io Aug 13 04:20:19.432635 env[1293]: time="2025-08-13T04:20:19.432604942Z" level=info msg="cleaning up dead shim" Aug 13 04:20:19.447282 env[1293]: time="2025-08-13T04:20:19.447098916Z" level=info msg="shim disconnected" id=012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6 Aug 13 04:20:19.447282 env[1293]: time="2025-08-13T04:20:19.447167825Z" level=warning msg="cleaning up after shim disconnected" id=012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6 namespace=k8s.io Aug 13 04:20:19.447282 env[1293]: time="2025-08-13T04:20:19.447186897Z" level=info msg="cleaning up dead shim" Aug 13 04:20:19.451707 env[1293]: time="2025-08-13T04:20:19.451660995Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3915 
runtime=io.containerd.runc.v2\n" Aug 13 04:20:19.453218 env[1293]: time="2025-08-13T04:20:19.453174589Z" level=info msg="TearDown network for sandbox \"4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364\" successfully" Aug 13 04:20:19.453383 env[1293]: time="2025-08-13T04:20:19.453216476Z" level=info msg="StopPodSandbox for \"4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364\" returns successfully" Aug 13 04:20:19.473459 env[1293]: time="2025-08-13T04:20:19.473357017Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T04:20:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Aug 13 04:20:19.474505 env[1293]: time="2025-08-13T04:20:19.474385960Z" level=info msg="TearDown network for sandbox \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" successfully" Aug 13 04:20:19.474616 env[1293]: time="2025-08-13T04:20:19.474443160Z" level=info msg="StopPodSandbox for \"012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6\" returns successfully" Aug 13 04:20:19.549256 kubelet[2119]: I0813 04:20:19.547972 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.550253 kubelet[2119]: I0813 04:20:19.550198 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-net\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550348 kubelet[2119]: I0813 04:20:19.550296 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hostproc\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550452 kubelet[2119]: I0813 04:20:19.550375 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hostproc" (OuterVolumeSpecName: "hostproc") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.550560 kubelet[2119]: I0813 04:20:19.550523 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-config-path\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550699 kubelet[2119]: I0813 04:20:19.550673 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8tcv\" (UniqueName: \"kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-kube-api-access-v8tcv\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550776 kubelet[2119]: I0813 04:20:19.550720 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-lib-modules\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550776 kubelet[2119]: I0813 04:20:19.550757 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-etc-cni-netd\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550913 kubelet[2119]: I0813 04:20:19.550793 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-kernel\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.550913 kubelet[2119]: I0813 04:20:19.550881 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-clustermesh-secrets\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.551059 kubelet[2119]: I0813 04:20:19.550926 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hubble-tls\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.551059 kubelet[2119]: I0813 04:20:19.550969 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f50e1e23-86e6-4d1a-bf02-638fb42dab18-cilium-config-path\") pod \"f50e1e23-86e6-4d1a-bf02-638fb42dab18\" (UID: \"f50e1e23-86e6-4d1a-bf02-638fb42dab18\") " Aug 13 04:20:19.551335 kubelet[2119]: I0813 04:20:19.551300 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-xtables-lock\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.551406 kubelet[2119]: I0813 04:20:19.551343 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-bpf-maps\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: 
\"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.551406 kubelet[2119]: I0813 04:20:19.551375 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlrwj\" (UniqueName: \"kubernetes.io/projected/f50e1e23-86e6-4d1a-bf02-638fb42dab18-kube-api-access-nlrwj\") pod \"f50e1e23-86e6-4d1a-bf02-638fb42dab18\" (UID: \"f50e1e23-86e6-4d1a-bf02-638fb42dab18\") " Aug 13 04:20:19.551575 kubelet[2119]: I0813 04:20:19.551416 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-run\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.551575 kubelet[2119]: I0813 04:20:19.551443 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cni-path\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.551575 kubelet[2119]: I0813 04:20:19.551496 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-cgroup\") pod \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\" (UID: \"dada73f5-f6ed-4e27-bc19-d43ce49f13f7\") " Aug 13 04:20:19.552326 kubelet[2119]: I0813 04:20:19.552287 2119 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-net\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.552326 kubelet[2119]: I0813 04:20:19.552324 2119 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hostproc\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.552512 kubelet[2119]: I0813 04:20:19.552361 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.558391 kubelet[2119]: I0813 04:20:19.558340 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 04:20:19.563098 kubelet[2119]: I0813 04:20:19.562997 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.563098 kubelet[2119]: I0813 04:20:19.563070 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.564065 kubelet[2119]: I0813 04:20:19.563434 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.564065 kubelet[2119]: I0813 04:20:19.563499 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cni-path" (OuterVolumeSpecName: "cni-path") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.564653 kubelet[2119]: I0813 04:20:19.564594 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.564653 kubelet[2119]: I0813 04:20:19.564641 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.564816 kubelet[2119]: I0813 04:20:19.564677 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:19.568539 kubelet[2119]: I0813 04:20:19.568494 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f50e1e23-86e6-4d1a-bf02-638fb42dab18-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f50e1e23-86e6-4d1a-bf02-638fb42dab18" (UID: "f50e1e23-86e6-4d1a-bf02-638fb42dab18"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 04:20:19.569668 kubelet[2119]: I0813 04:20:19.569631 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-kube-api-access-v8tcv" (OuterVolumeSpecName: "kube-api-access-v8tcv") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "kube-api-access-v8tcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:20:19.574668 kubelet[2119]: I0813 04:20:19.574628 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50e1e23-86e6-4d1a-bf02-638fb42dab18-kube-api-access-nlrwj" (OuterVolumeSpecName: "kube-api-access-nlrwj") pod "f50e1e23-86e6-4d1a-bf02-638fb42dab18" (UID: "f50e1e23-86e6-4d1a-bf02-638fb42dab18"). InnerVolumeSpecName "kube-api-access-nlrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:20:19.574842 kubelet[2119]: I0813 04:20:19.574809 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 04:20:19.576980 kubelet[2119]: I0813 04:20:19.576939 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dada73f5-f6ed-4e27-bc19-d43ce49f13f7" (UID: "dada73f5-f6ed-4e27-bc19-d43ce49f13f7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:20:19.653130 kubelet[2119]: I0813 04:20:19.653068 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f50e1e23-86e6-4d1a-bf02-638fb42dab18-cilium-config-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653130 kubelet[2119]: I0813 04:20:19.653129 2119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlrwj\" (UniqueName: \"kubernetes.io/projected/f50e1e23-86e6-4d1a-bf02-638fb42dab18-kube-api-access-nlrwj\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653130 kubelet[2119]: I0813 04:20:19.653151 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-run\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653172 2119 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cni-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653187 2119 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-xtables-lock\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653203 2119 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-bpf-maps\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653219 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-cgroup\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653234 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-cilium-config-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653254 2119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8tcv\" (UniqueName: \"kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-kube-api-access-v8tcv\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653283 2119 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-lib-modules\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.653555 kubelet[2119]: I0813 04:20:19.653299 2119 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-etc-cni-netd\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.654046 kubelet[2119]: I0813 04:20:19.653316 2119 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-clustermesh-secrets\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.654046 kubelet[2119]: I0813 04:20:19.653332 2119 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-hubble-tls\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:19.654046 kubelet[2119]: I0813 04:20:19.653348 2119 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dada73f5-f6ed-4e27-bc19-d43ce49f13f7-host-proc-sys-kernel\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:20.148725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6-rootfs.mount: Deactivated successfully. Aug 13 04:20:20.149791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-012df5ff94bbac8f5c61dff35df4048b2878b08eb2e463971d6abf72c883f2e6-shm.mount: Deactivated successfully. Aug 13 04:20:20.150201 systemd[1]: var-lib-kubelet-pods-dada73f5\x2df6ed\x2d4e27\x2dbc19\x2dd43ce49f13f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8tcv.mount: Deactivated successfully. Aug 13 04:20:20.150644 systemd[1]: var-lib-kubelet-pods-dada73f5\x2df6ed\x2d4e27\x2dbc19\x2dd43ce49f13f7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 04:20:20.150999 systemd[1]: var-lib-kubelet-pods-dada73f5\x2df6ed\x2d4e27\x2dbc19\x2dd43ce49f13f7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 04:20:20.151356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f73e07ba78f16b1119721931f8a379b779fae9550fc32909e5747fa4b24b364-rootfs.mount: Deactivated successfully. Aug 13 04:20:20.151734 systemd[1]: var-lib-kubelet-pods-f50e1e23\x2d86e6\x2d4d1a\x2dbf02\x2d638fb42dab18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlrwj.mount: Deactivated successfully. 
Aug 13 04:20:20.215919 kubelet[2119]: I0813 04:20:20.215860 2119 scope.go:117] "RemoveContainer" containerID="08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a" Aug 13 04:20:20.224925 env[1293]: time="2025-08-13T04:20:20.224147399Z" level=info msg="RemoveContainer for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\"" Aug 13 04:20:20.237380 env[1293]: time="2025-08-13T04:20:20.237298453Z" level=info msg="RemoveContainer for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" returns successfully" Aug 13 04:20:20.238705 kubelet[2119]: I0813 04:20:20.238636 2119 scope.go:117] "RemoveContainer" containerID="08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a" Aug 13 04:20:20.239471 env[1293]: time="2025-08-13T04:20:20.239242055Z" level=error msg="ContainerStatus for \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\": not found" Aug 13 04:20:20.242696 kubelet[2119]: E0813 04:20:20.242656 2119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\": not found" containerID="08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a" Aug 13 04:20:20.246894 kubelet[2119]: I0813 04:20:20.242905 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a"} err="failed to get container status \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\": rpc error: code = NotFound desc = an error occurred when try to find container \"08808319a2acf2e7552287b6ecb0624e749497b04352421040cab73f600e093a\": not found" Aug 13 04:20:20.246894 kubelet[2119]: I0813 04:20:20.245803 2119 scope.go:117] "RemoveContainer" containerID="b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a" Aug 13 04:20:20.258381 env[1293]: time="2025-08-13T04:20:20.256452018Z" level=info msg="RemoveContainer for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\"" Aug 13 04:20:20.265693 env[1293]: time="2025-08-13T04:20:20.265607890Z" level=info msg="RemoveContainer for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" returns successfully" Aug 13 04:20:20.266968 kubelet[2119]: I0813 04:20:20.266932 2119 scope.go:117] "RemoveContainer" containerID="61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb" Aug 13 04:20:20.275800 env[1293]: time="2025-08-13T04:20:20.275730706Z" level=info msg="RemoveContainer for \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\"" Aug 13 04:20:20.283794 env[1293]: time="2025-08-13T04:20:20.283674694Z" level=info msg="RemoveContainer for \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\" returns successfully" Aug 13 04:20:20.283925 kubelet[2119]: I0813 04:20:20.283874 2119 scope.go:117] "RemoveContainer" containerID="5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296" Aug 13 04:20:20.286209 env[1293]: time="2025-08-13T04:20:20.286085817Z" level=info msg="RemoveContainer for \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\"" Aug 13 04:20:20.290418 env[1293]: time="2025-08-13T04:20:20.290337890Z" level=info msg="RemoveContainer for 
\"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\" returns successfully" Aug 13 04:20:20.291062 kubelet[2119]: I0813 04:20:20.290986 2119 scope.go:117] "RemoveContainer" containerID="eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64" Aug 13 04:20:20.299271 env[1293]: time="2025-08-13T04:20:20.299148995Z" level=info msg="RemoveContainer for \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\"" Aug 13 04:20:20.303532 env[1293]: time="2025-08-13T04:20:20.303495451Z" level=info msg="RemoveContainer for \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\" returns successfully" Aug 13 04:20:20.304079 kubelet[2119]: I0813 04:20:20.303937 2119 scope.go:117] "RemoveContainer" containerID="c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5" Aug 13 04:20:20.306297 env[1293]: time="2025-08-13T04:20:20.306243376Z" level=info msg="RemoveContainer for \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\"" Aug 13 04:20:20.310345 env[1293]: time="2025-08-13T04:20:20.310288970Z" level=info msg="RemoveContainer for \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\" returns successfully" Aug 13 04:20:20.310810 kubelet[2119]: I0813 04:20:20.310767 2119 scope.go:117] "RemoveContainer" containerID="b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a" Aug 13 04:20:20.311320 env[1293]: time="2025-08-13T04:20:20.311199978Z" level=error msg="ContainerStatus for \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\": not found" Aug 13 04:20:20.311818 kubelet[2119]: E0813 04:20:20.311771 2119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\": not found" containerID="b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a" Aug 13 04:20:20.311919 kubelet[2119]: I0813 04:20:20.311845 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a"} err="failed to get container status \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9943fd44af9bc1b6d8a488e3a82447e254a9cf4d3121ef1889e55371c4d6c7a\": not found" Aug 13 04:20:20.311919 kubelet[2119]: I0813 04:20:20.311895 2119 scope.go:117] "RemoveContainer" containerID="61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb" Aug 13 04:20:20.312522 env[1293]: time="2025-08-13T04:20:20.312406692Z" level=error msg="ContainerStatus for \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\": not found" Aug 13 04:20:20.312958 kubelet[2119]: E0813 04:20:20.312914 2119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\": not found" containerID="61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb" Aug 13 04:20:20.313132 kubelet[2119]: I0813 04:20:20.313096 2119 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb"} err="failed to get container status \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"61adb15924b1e1ddd5b7f62d6ccdc62229ece25fca36b0b1bd9d5b0d088272cb\": not found" Aug 13 04:20:20.313290 kubelet[2119]: I0813 04:20:20.313264 2119 scope.go:117] "RemoveContainer" containerID="5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296" Aug 13 04:20:20.313878 env[1293]: time="2025-08-13T04:20:20.313810151Z" level=error msg="ContainerStatus for \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\": not found" Aug 13 04:20:20.314141 kubelet[2119]: E0813 04:20:20.314089 2119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\": not found" containerID="5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296" Aug 13 04:20:20.314242 kubelet[2119]: I0813 04:20:20.314147 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296"} err="failed to get container status \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cca02ab0332b438da87f9efe87f4a15df83ab52b2fcb9a8aca0815219b56296\": not found" Aug 13 04:20:20.314242 kubelet[2119]: I0813 04:20:20.314171 2119 scope.go:117] "RemoveContainer" containerID="eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64" Aug 13 04:20:20.314827 env[1293]: time="2025-08-13T04:20:20.314754054Z" level=error msg="ContainerStatus for \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\": not found" Aug 13 04:20:20.315251 kubelet[2119]: E0813 04:20:20.315195 2119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\": not found" containerID="eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64" Aug 13 04:20:20.315399 kubelet[2119]: I0813 04:20:20.315255 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64"} err="failed to get container status \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb9babf590a7025bc182b3b44b004ec108ee09630abff19eca15e8ac33f8ea64\": not found" Aug 13 04:20:20.315399 kubelet[2119]: I0813 04:20:20.315279 2119 scope.go:117] "RemoveContainer" containerID="c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5" Aug 13 04:20:20.315891 env[1293]: time="2025-08-13T04:20:20.315803597Z" level=error msg="ContainerStatus for 
\"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\": not found" Aug 13 04:20:20.316304 kubelet[2119]: E0813 04:20:20.316210 2119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\": not found" containerID="c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5" Aug 13 04:20:20.316304 kubelet[2119]: I0813 04:20:20.316244 2119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5"} err="failed to get container status \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2192233795231c2df80b9c0b8ea40e82c4214d5c368ca12de3e478695b8f2e5\": not found" Aug 13 04:20:20.756948 kubelet[2119]: E0813 04:20:20.756827 2119 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 04:20:21.146911 sshd[3781]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:21.152562 systemd[1]: sshd@24-10.244.14.178:22-139.178.89.65:36040.service: Deactivated successfully. Aug 13 04:20:21.153967 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 04:20:21.154830 systemd-logind[1282]: Session 22 logged out. Waiting for processes to exit. Aug 13 04:20:21.157002 systemd-logind[1282]: Removed session 22. Aug 13 04:20:21.313145 systemd[1]: Started sshd@25-10.244.14.178:22-139.178.89.65:53424.service. Aug 13 04:20:21.566494 kubelet[2119]: I0813 04:20:21.566423 2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" path="/var/lib/kubelet/pods/dada73f5-f6ed-4e27-bc19-d43ce49f13f7/volumes" Aug 13 04:20:21.569012 kubelet[2119]: I0813 04:20:21.568983 2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50e1e23-86e6-4d1a-bf02-638fb42dab18" path="/var/lib/kubelet/pods/f50e1e23-86e6-4d1a-bf02-638fb42dab18/volumes" Aug 13 04:20:22.290821 sshd[3950]: Accepted publickey for core from 139.178.89.65 port 53424 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:22.291810 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:22.300209 systemd-logind[1282]: New session 23 of user core. Aug 13 04:20:22.301156 systemd[1]: Started session-23.scope. 
Aug 13 04:20:23.713903 kubelet[2119]: E0813 04:20:23.713852 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f50e1e23-86e6-4d1a-bf02-638fb42dab18" containerName="cilium-operator" Aug 13 04:20:23.714681 kubelet[2119]: E0813 04:20:23.714653 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" containerName="apply-sysctl-overwrites" Aug 13 04:20:23.714822 kubelet[2119]: E0813 04:20:23.714796 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" containerName="mount-bpf-fs" Aug 13 04:20:23.714949 kubelet[2119]: E0813 04:20:23.714925 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" containerName="mount-cgroup" Aug 13 04:20:23.715094 kubelet[2119]: E0813 04:20:23.715069 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" containerName="clean-cilium-state" Aug 13 04:20:23.715228 kubelet[2119]: E0813 04:20:23.715203 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" containerName="cilium-agent" Aug 13 04:20:23.719811 kubelet[2119]: I0813 04:20:23.719759 2119 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50e1e23-86e6-4d1a-bf02-638fb42dab18" containerName="cilium-operator" Aug 13 04:20:23.720023 kubelet[2119]: I0813 04:20:23.719997 2119 memory_manager.go:354] "RemoveStaleState removing state" podUID="dada73f5-f6ed-4e27-bc19-d43ce49f13f7" containerName="cilium-agent" Aug 13 04:20:23.796831 kubelet[2119]: I0813 04:20:23.796765 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-run\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.797305 kubelet[2119]: I0813 04:20:23.797279 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-etc-cni-netd\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.797555 kubelet[2119]: I0813 04:20:23.797501 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-hubble-tls\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.797749 kubelet[2119]: I0813 04:20:23.797721 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-bpf-maps\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.797924 kubelet[2119]: I0813 04:20:23.797890 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-kernel\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.798102 kubelet[2119]: I0813 04:20:23.798074 2119 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-cgroup\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.798589 kubelet[2119]: I0813 04:20:23.798312 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-clustermesh-secrets\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.798589 kubelet[2119]: I0813 04:20:23.798373 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-net\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.798589 kubelet[2119]: I0813 04:20:23.798431 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2deafeae-9104-40a2-9960-43c10a22b553-cilium-config-path\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.798589 kubelet[2119]: I0813 04:20:23.798477 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-cilium-ipsec-secrets\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.798908 kubelet[2119]: I0813 04:20:23.798880 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cni-path\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.799077 kubelet[2119]: I0813 04:20:23.799049 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-xtables-lock\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.799237 kubelet[2119]: I0813 04:20:23.799210 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-hostproc\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.799388 kubelet[2119]: I0813 04:20:23.799361 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-lib-modules\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.799571 kubelet[2119]: I0813 04:20:23.799543 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbxdk\" (UniqueName: 
\"kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-kube-api-access-wbxdk\") pod \"cilium-527n7\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " pod="kube-system/cilium-527n7" Aug 13 04:20:23.883904 sshd[3950]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:23.887893 systemd[1]: sshd@25-10.244.14.178:22-139.178.89.65:53424.service: Deactivated successfully. Aug 13 04:20:23.889757 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 04:20:23.890253 systemd-logind[1282]: Session 23 logged out. Waiting for processes to exit. Aug 13 04:20:23.891757 systemd-logind[1282]: Removed session 23. Aug 13 04:20:24.039905 env[1293]: time="2025-08-13T04:20:24.035984588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-527n7,Uid:2deafeae-9104-40a2-9960-43c10a22b553,Namespace:kube-system,Attempt:0,}" Aug 13 04:20:24.040026 systemd[1]: Started sshd@26-10.244.14.178:22-139.178.89.65:53440.service. Aug 13 04:20:24.071760 env[1293]: time="2025-08-13T04:20:24.071632944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:20:24.072129 env[1293]: time="2025-08-13T04:20:24.071748544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:20:24.072129 env[1293]: time="2025-08-13T04:20:24.071768471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:20:24.072422 env[1293]: time="2025-08-13T04:20:24.072225745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de pid=3975 runtime=io.containerd.runc.v2 Aug 13 04:20:24.139903 env[1293]: time="2025-08-13T04:20:24.139845023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-527n7,Uid:2deafeae-9104-40a2-9960-43c10a22b553,Namespace:kube-system,Attempt:0,} returns sandbox id \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\"" Aug 13 04:20:24.148231 env[1293]: time="2025-08-13T04:20:24.148178105Z" level=info msg="CreateContainer within sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 04:20:24.163879 env[1293]: time="2025-08-13T04:20:24.163817874Z" level=info msg="CreateContainer within sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756\"" Aug 13 04:20:24.166632 env[1293]: time="2025-08-13T04:20:24.164609374Z" level=info msg="StartContainer for \"c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756\"" Aug 13 04:20:24.264545 env[1293]: time="2025-08-13T04:20:24.259893758Z" level=info msg="StartContainer for \"c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756\" returns successfully" Aug 13 04:20:24.313113 env[1293]: time="2025-08-13T04:20:24.313029316Z" level=info msg="shim disconnected" id=c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756 Aug 13 04:20:24.313393 env[1293]: time="2025-08-13T04:20:24.313122445Z" level=warning msg="cleaning up after shim disconnected" id=c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756 namespace=k8s.io Aug 13 04:20:24.313393 env[1293]: 
time="2025-08-13T04:20:24.313143806Z" level=info msg="cleaning up dead shim" Aug 13 04:20:24.325299 env[1293]: time="2025-08-13T04:20:24.325248658Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\n" Aug 13 04:20:25.006365 sshd[3965]: Accepted publickey for core from 139.178.89.65 port 53440 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:25.008715 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:25.018274 systemd[1]: Started session-24.scope. Aug 13 04:20:25.019240 systemd-logind[1282]: New session 24 of user core. Aug 13 04:20:25.246385 env[1293]: time="2025-08-13T04:20:25.246319511Z" level=info msg="CreateContainer within sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 04:20:25.263653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329540311.mount: Deactivated successfully. Aug 13 04:20:25.280325 env[1293]: time="2025-08-13T04:20:25.280189531Z" level=info msg="CreateContainer within sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a\"" Aug 13 04:20:25.291196 env[1293]: time="2025-08-13T04:20:25.291138730Z" level=info msg="StartContainer for \"5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a\"" Aug 13 04:20:25.376629 env[1293]: time="2025-08-13T04:20:25.375725885Z" level=info msg="StartContainer for \"5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a\" returns successfully" Aug 13 04:20:25.413703 env[1293]: time="2025-08-13T04:20:25.413642883Z" level=info msg="shim disconnected" id=5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a Aug 13 04:20:25.414061 env[1293]: time="2025-08-13T04:20:25.414001319Z" level=warning msg="cleaning up after shim disconnected" id=5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a namespace=k8s.io Aug 13 04:20:25.414192 env[1293]: time="2025-08-13T04:20:25.414162321Z" level=info msg="cleaning up dead shim" Aug 13 04:20:25.444225 env[1293]: time="2025-08-13T04:20:25.444153402Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4120 runtime=io.containerd.runc.v2\n" Aug 13 04:20:25.758741 kubelet[2119]: E0813 04:20:25.758682 2119 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 04:20:25.855805 sshd[3965]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:25.859586 systemd[1]: sshd@26-10.244.14.178:22-139.178.89.65:53440.service: Deactivated successfully. Aug 13 04:20:25.861345 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 04:20:25.861391 systemd-logind[1282]: Session 24 logged out. Waiting for processes to exit. Aug 13 04:20:25.863701 systemd-logind[1282]: Removed session 24. Aug 13 04:20:26.009108 systemd[1]: Started sshd@27-10.244.14.178:22-139.178.89.65:53454.service. 
Aug 13 04:20:26.253650 env[1293]: time="2025-08-13T04:20:26.253399581Z" level=info msg="CreateContainer within sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 04:20:26.300957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017152713.mount: Deactivated successfully. Aug 13 04:20:26.311223 env[1293]: time="2025-08-13T04:20:26.310901024Z" level=info msg="CreateContainer within sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be\"" Aug 13 04:20:26.314680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709976625.mount: Deactivated successfully. Aug 13 04:20:26.317451 env[1293]: time="2025-08-13T04:20:26.316445079Z" level=info msg="StartContainer for \"3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be\"" Aug 13 04:20:26.427189 env[1293]: time="2025-08-13T04:20:26.427002288Z" level=info msg="StartContainer for \"3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be\" returns successfully" Aug 13 04:20:26.471169 env[1293]: time="2025-08-13T04:20:26.471106888Z" level=info msg="shim disconnected" id=3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be Aug 13 04:20:26.471575 env[1293]: time="2025-08-13T04:20:26.471502015Z" level=warning msg="cleaning up after shim disconnected" id=3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be namespace=k8s.io Aug 13 04:20:26.471786 env[1293]: time="2025-08-13T04:20:26.471756886Z" level=info msg="cleaning up dead shim" Aug 13 04:20:26.483755 env[1293]: time="2025-08-13T04:20:26.483713941Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4190 runtime=io.containerd.runc.v2\n" Aug 13 04:20:26.922139 sshd[4141]: Accepted publickey for core from 139.178.89.65 port 53454 ssh2: RSA SHA256:IhAXCeSjxrdQ+RldUaiR6Aj3Gfh8Tjc1MdmRZxX3OLE Aug 13 04:20:26.925028 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 04:20:26.932057 systemd-logind[1282]: New session 25 of user core. Aug 13 04:20:26.933444 systemd[1]: Started session-25.scope. Aug 13 04:20:27.256639 env[1293]: time="2025-08-13T04:20:27.255535964Z" level=info msg="StopPodSandbox for \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\"" Aug 13 04:20:27.256639 env[1293]: time="2025-08-13T04:20:27.255677569Z" level=info msg="Container to stop \"c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:27.256639 env[1293]: time="2025-08-13T04:20:27.255725889Z" level=info msg="Container to stop \"3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:27.256639 env[1293]: time="2025-08-13T04:20:27.255750875Z" level=info msg="Container to stop \"5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 04:20:27.260580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de-shm.mount: Deactivated successfully. 
Aug 13 04:20:27.311771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de-rootfs.mount: Deactivated successfully. Aug 13 04:20:27.318185 env[1293]: time="2025-08-13T04:20:27.318114059Z" level=info msg="shim disconnected" id=b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de Aug 13 04:20:27.318388 env[1293]: time="2025-08-13T04:20:27.318183170Z" level=warning msg="cleaning up after shim disconnected" id=b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de namespace=k8s.io Aug 13 04:20:27.318388 env[1293]: time="2025-08-13T04:20:27.318201368Z" level=info msg="cleaning up dead shim" Aug 13 04:20:27.332955 env[1293]: time="2025-08-13T04:20:27.332884972Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4227 runtime=io.containerd.runc.v2\n" Aug 13 04:20:27.333691 env[1293]: time="2025-08-13T04:20:27.333650840Z" level=info msg="TearDown network for sandbox \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" successfully" Aug 13 04:20:27.333863 env[1293]: time="2025-08-13T04:20:27.333828484Z" level=info msg="StopPodSandbox for \"b51a58a23fed4425343d8092b64c4ed4c522b3c02080439eb5dbfaa42d75b9de\" returns successfully" Aug 13 04:20:27.430205 kubelet[2119]: I0813 04:20:27.430145 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-hubble-tls\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431038 kubelet[2119]: I0813 04:20:27.430226 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-bpf-maps\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431038 kubelet[2119]: I0813 04:20:27.430258 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-xtables-lock\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431038 kubelet[2119]: I0813 04:20:27.430283 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-hostproc\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431038 kubelet[2119]: I0813 04:20:27.430339 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-cilium-ipsec-secrets\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431038 kubelet[2119]: I0813 04:20:27.430368 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-etc-cni-netd\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431038 kubelet[2119]: I0813 04:20:27.430416 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-net\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431405 kubelet[2119]: I0813 04:20:27.430482 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-cgroup\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431405 kubelet[2119]: I0813 04:20:27.430520 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbxdk\" (UniqueName: \"kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-kube-api-access-wbxdk\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431405 kubelet[2119]: I0813 04:20:27.430658 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2deafeae-9104-40a2-9960-43c10a22b553-cilium-config-path\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.431405 kubelet[2119]: I0813 04:20:27.430694 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-run\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.433485 kubelet[2119]: I0813 04:20:27.432728 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-kernel\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.433485 kubelet[2119]: I0813 04:20:27.432768 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cni-path\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.433485 kubelet[2119]: I0813 04:20:27.432823 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-clustermesh-secrets\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.433485 kubelet[2119]: I0813 04:20:27.432852 2119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-lib-modules\") pod \"2deafeae-9104-40a2-9960-43c10a22b553\" (UID: \"2deafeae-9104-40a2-9960-43c10a22b553\") " Aug 13 04:20:27.433485 kubelet[2119]: I0813 04:20:27.430750 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-hostproc" (OuterVolumeSpecName: "hostproc") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.433485 kubelet[2119]: I0813 04:20:27.431721 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.433963 kubelet[2119]: I0813 04:20:27.432671 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.433963 kubelet[2119]: I0813 04:20:27.432924 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.433963 kubelet[2119]: I0813 04:20:27.432963 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.433963 kubelet[2119]: I0813 04:20:27.433069 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.433963 kubelet[2119]: I0813 04:20:27.433105 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cni-path" (OuterVolumeSpecName: "cni-path") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.435159 kubelet[2119]: I0813 04:20:27.435123 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.435257 kubelet[2119]: I0813 04:20:27.435188 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.435257 kubelet[2119]: I0813 04:20:27.435220 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 04:20:27.441906 systemd[1]: var-lib-kubelet-pods-2deafeae\x2d9104\x2d40a2\x2d9960\x2d43c10a22b553-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 04:20:27.452920 systemd[1]: var-lib-kubelet-pods-2deafeae\x2d9104\x2d40a2\x2d9960\x2d43c10a22b553-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbxdk.mount: Deactivated successfully. Aug 13 04:20:27.463010 kubelet[2119]: I0813 04:20:27.462956 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-kube-api-access-wbxdk" (OuterVolumeSpecName: "kube-api-access-wbxdk") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "kube-api-access-wbxdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:20:27.463398 kubelet[2119]: I0813 04:20:27.463366 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 04:20:27.463664 kubelet[2119]: I0813 04:20:27.463633 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 04:20:27.477160 kubelet[2119]: I0813 04:20:27.477027 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2deafeae-9104-40a2-9960-43c10a22b553-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 04:20:27.492378 kubelet[2119]: I0813 04:20:27.492316 2119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2deafeae-9104-40a2-9960-43c10a22b553" (UID: "2deafeae-9104-40a2-9960-43c10a22b553"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 04:20:27.533712 kubelet[2119]: I0813 04:20:27.533497 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-cgroup\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534072 kubelet[2119]: I0813 04:20:27.534028 2119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbxdk\" (UniqueName: \"kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-kube-api-access-wbxdk\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534212 kubelet[2119]: I0813 04:20:27.534186 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cilium-run\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534353 kubelet[2119]: I0813 04:20:27.534327 2119 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-kernel\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534527 kubelet[2119]: I0813 04:20:27.534490 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2deafeae-9104-40a2-9960-43c10a22b553-cilium-config-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534678 kubelet[2119]: I0813 04:20:27.534644 2119 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-cni-path\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534818 kubelet[2119]: I0813 04:20:27.534795 2119 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-clustermesh-secrets\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.534985 kubelet[2119]: I0813 04:20:27.534959 2119 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-lib-modules\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.535137 kubelet[2119]: I0813 04:20:27.535107 2119 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2deafeae-9104-40a2-9960-43c10a22b553-hubble-tls\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.535273 kubelet[2119]: I0813 04:20:27.535249 2119 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-bpf-maps\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.535409 kubelet[2119]: I0813 04:20:27.535384 2119 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-xtables-lock\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.535584 kubelet[2119]: I0813 04:20:27.535560 2119 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-hostproc\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.535743 kubelet[2119]: I0813 
04:20:27.535712 2119 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2deafeae-9104-40a2-9960-43c10a22b553-cilium-ipsec-secrets\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.535905 kubelet[2119]: I0813 04:20:27.535881 2119 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-etc-cni-netd\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.536065 kubelet[2119]: I0813 04:20:27.536027 2119 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2deafeae-9104-40a2-9960-43c10a22b553-host-proc-sys-net\") on node \"srv-h1d3j.gb1.brightbox.com\" DevicePath \"\"" Aug 13 04:20:27.917008 systemd[1]: var-lib-kubelet-pods-2deafeae\x2d9104\x2d40a2\x2d9960\x2d43c10a22b553-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 04:20:27.917335 systemd[1]: var-lib-kubelet-pods-2deafeae\x2d9104\x2d40a2\x2d9960\x2d43c10a22b553-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 04:20:28.259984 kubelet[2119]: I0813 04:20:28.259475 2119 scope.go:117] "RemoveContainer" containerID="3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be" Aug 13 04:20:28.269576 env[1293]: time="2025-08-13T04:20:28.269426279Z" level=info msg="RemoveContainer for \"3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be\"" Aug 13 04:20:28.276488 env[1293]: time="2025-08-13T04:20:28.275388674Z" level=info msg="RemoveContainer for \"3cd6f0a7718127289aecb8980fdfac4819d41ea4acdd764944c17d5c5cda36be\" returns successfully" Aug 13 04:20:28.276796 kubelet[2119]: I0813 04:20:28.276763 2119 scope.go:117] "RemoveContainer" containerID="5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a" Aug 13 04:20:28.278397 env[1293]: time="2025-08-13T04:20:28.278353459Z" level=info msg="RemoveContainer for \"5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a\"" Aug 13 04:20:28.282492 env[1293]: time="2025-08-13T04:20:28.282419532Z" level=info msg="RemoveContainer for \"5d4623de079e4626f27b7603048edae373b8f85195bc95c0de449a55f697094a\" returns successfully" Aug 13 04:20:28.282820 kubelet[2119]: I0813 04:20:28.282795 2119 scope.go:117] "RemoveContainer" containerID="c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756" Aug 13 04:20:28.284614 env[1293]: time="2025-08-13T04:20:28.284565491Z" level=info msg="RemoveContainer for \"c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756\"" Aug 13 04:20:28.294158 env[1293]: time="2025-08-13T04:20:28.292281965Z" level=info msg="RemoveContainer for \"c6be7460e8b4e251ace9a96673d879498d4966d5e4e63279e62252b820349756\" returns successfully" Aug 13 04:20:28.322550 kubelet[2119]: E0813 04:20:28.322489 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2deafeae-9104-40a2-9960-43c10a22b553" containerName="mount-cgroup" Aug 13 04:20:28.322550 kubelet[2119]: E0813 04:20:28.322535 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2deafeae-9104-40a2-9960-43c10a22b553" containerName="apply-sysctl-overwrites" Aug 13 04:20:28.322832 kubelet[2119]: E0813 04:20:28.322568 2119 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2deafeae-9104-40a2-9960-43c10a22b553" containerName="mount-bpf-fs" Aug 13 04:20:28.322832 kubelet[2119]: I0813 04:20:28.322649 
2119 memory_manager.go:354] "RemoveStaleState removing state" podUID="2deafeae-9104-40a2-9960-43c10a22b553" containerName="mount-bpf-fs" Aug 13 04:20:28.442085 kubelet[2119]: I0813 04:20:28.442007 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-bpf-maps\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.442965 kubelet[2119]: I0813 04:20:28.442934 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-cilium-cgroup\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.443171 kubelet[2119]: I0813 04:20:28.443142 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-xtables-lock\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.443357 kubelet[2119]: I0813 04:20:28.443322 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pqf\" (UniqueName: \"kubernetes.io/projected/ce68fdd6-4043-4b37-91eb-59cacb1ec568-kube-api-access-z9pqf\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.443584 kubelet[2119]: I0813 04:20:28.443548 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-etc-cni-netd\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.443777 kubelet[2119]: I0813 04:20:28.443734 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-lib-modules\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.443948 kubelet[2119]: I0813 04:20:28.443912 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce68fdd6-4043-4b37-91eb-59cacb1ec568-clustermesh-secrets\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.444142 kubelet[2119]: I0813 04:20:28.444104 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce68fdd6-4043-4b37-91eb-59cacb1ec568-cilium-ipsec-secrets\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.444328 kubelet[2119]: I0813 04:20:28.444294 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-hostproc\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.444529 kubelet[2119]: I0813 
04:20:28.444503 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-cni-path\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.444731 kubelet[2119]: I0813 04:20:28.444694 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce68fdd6-4043-4b37-91eb-59cacb1ec568-cilium-config-path\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.444929 kubelet[2119]: I0813 04:20:28.444889 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-host-proc-sys-net\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.445125 kubelet[2119]: I0813 04:20:28.445089 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-host-proc-sys-kernel\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.445301 kubelet[2119]: I0813 04:20:28.445266 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce68fdd6-4043-4b37-91eb-59cacb1ec568-hubble-tls\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.445485 kubelet[2119]: I0813 04:20:28.445446 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce68fdd6-4043-4b37-91eb-59cacb1ec568-cilium-run\") pod \"cilium-4zqtj\" (UID: \"ce68fdd6-4043-4b37-91eb-59cacb1ec568\") " pod="kube-system/cilium-4zqtj" Aug 13 04:20:28.642602 env[1293]: time="2025-08-13T04:20:28.642512694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4zqtj,Uid:ce68fdd6-4043-4b37-91eb-59cacb1ec568,Namespace:kube-system,Attempt:0,}" Aug 13 04:20:28.663544 env[1293]: time="2025-08-13T04:20:28.663342950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 04:20:28.663544 env[1293]: time="2025-08-13T04:20:28.663396099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 04:20:28.663544 env[1293]: time="2025-08-13T04:20:28.663413364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 04:20:28.664210 env[1293]: time="2025-08-13T04:20:28.664133852Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9 pid=4262 runtime=io.containerd.runc.v2 Aug 13 04:20:28.724586 env[1293]: time="2025-08-13T04:20:28.724505520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4zqtj,Uid:ce68fdd6-4043-4b37-91eb-59cacb1ec568,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\"" Aug 13 04:20:28.728743 env[1293]: time="2025-08-13T04:20:28.728695115Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 04:20:28.741623 env[1293]: time="2025-08-13T04:20:28.741564302Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e36819e576a1b6bc870999891eeda9980fc037f7ece63a3fbda2af9a3e324afa\"" Aug 13 04:20:28.742547 env[1293]: time="2025-08-13T04:20:28.742510672Z" level=info msg="StartContainer for \"e36819e576a1b6bc870999891eeda9980fc037f7ece63a3fbda2af9a3e324afa\"" Aug 13 04:20:28.829999 env[1293]: time="2025-08-13T04:20:28.829887609Z" level=info msg="StartContainer for \"e36819e576a1b6bc870999891eeda9980fc037f7ece63a3fbda2af9a3e324afa\" returns successfully" Aug 13 04:20:28.871172 env[1293]: time="2025-08-13T04:20:28.871106628Z" level=info msg="shim disconnected" id=e36819e576a1b6bc870999891eeda9980fc037f7ece63a3fbda2af9a3e324afa Aug 13 04:20:28.871172 env[1293]: time="2025-08-13T04:20:28.871172150Z" level=warning msg="cleaning up after shim disconnected" id=e36819e576a1b6bc870999891eeda9980fc037f7ece63a3fbda2af9a3e324afa namespace=k8s.io Aug 13 04:20:28.871523 env[1293]: time="2025-08-13T04:20:28.871190349Z" level=info msg="cleaning up dead shim" Aug 13 04:20:28.883910 env[1293]: time="2025-08-13T04:20:28.883839329Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4345 runtime=io.containerd.runc.v2\n" Aug 13 04:20:29.279976 env[1293]: time="2025-08-13T04:20:29.279801092Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 04:20:29.303399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901812466.mount: Deactivated successfully. Aug 13 04:20:29.316300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935227747.mount: Deactivated successfully. 
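The RunPodSandbox, CreateContainer and StartContainer messages above are containerd echoing the CRI requests it receives from the kubelet; the &PodSandboxMetadata{…} and &ContainerMetadata{…} fragments are the request metadata printed verbatim. The sketch below drives the same three calls directly against the CRI socket using the k8s.io/cri-api v1 Go bindings. It is a minimal illustration under assumptions: the sandbox config is stripped to its metadata, and the image and command are placeholders, so this is not how the kubelet itself builds the requests.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint on this node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: the metadata mirrors what containerd prints above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-4zqtj",
			Uid:       "ce68fdd6-4043-4b37-91eb-59cacb1ec568",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox (image and command are placeholders).
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:example"},
			Command:  []string{"sh", "-c", "echo init step"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s container %s started", sb.PodSandboxId, ctr.ContainerId)
}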
Aug 13 04:20:29.320409 env[1293]: time="2025-08-13T04:20:29.320353922Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33b3f1fb8c07f52bd2afb9b8507bb311c36be3c0df4e11e837fff29795bb0b53\"" Aug 13 04:20:29.322075 env[1293]: time="2025-08-13T04:20:29.322027410Z" level=info msg="StartContainer for \"33b3f1fb8c07f52bd2afb9b8507bb311c36be3c0df4e11e837fff29795bb0b53\"" Aug 13 04:20:29.422197 env[1293]: time="2025-08-13T04:20:29.422138474Z" level=info msg="StartContainer for \"33b3f1fb8c07f52bd2afb9b8507bb311c36be3c0df4e11e837fff29795bb0b53\" returns successfully" Aug 13 04:20:29.469341 env[1293]: time="2025-08-13T04:20:29.469253256Z" level=info msg="shim disconnected" id=33b3f1fb8c07f52bd2afb9b8507bb311c36be3c0df4e11e837fff29795bb0b53 Aug 13 04:20:29.469832 env[1293]: time="2025-08-13T04:20:29.469790554Z" level=warning msg="cleaning up after shim disconnected" id=33b3f1fb8c07f52bd2afb9b8507bb311c36be3c0df4e11e837fff29795bb0b53 namespace=k8s.io Aug 13 04:20:29.469986 env[1293]: time="2025-08-13T04:20:29.469948485Z" level=info msg="cleaning up dead shim" Aug 13 04:20:29.482168 env[1293]: time="2025-08-13T04:20:29.482100433Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4409 runtime=io.containerd.runc.v2\n" Aug 13 04:20:29.566120 kubelet[2119]: I0813 04:20:29.566072 2119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2deafeae-9104-40a2-9960-43c10a22b553" path="/var/lib/kubelet/pods/2deafeae-9104-40a2-9960-43c10a22b553/volumes" Aug 13 04:20:29.635991 kubelet[2119]: I0813 04:20:29.635912 2119 setters.go:600] "Node became not ready" node="srv-h1d3j.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T04:20:29Z","lastTransitionTime":"2025-08-13T04:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 04:20:30.280778 env[1293]: time="2025-08-13T04:20:30.280704313Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 04:20:30.303989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301795034.mount: Deactivated successfully. Aug 13 04:20:30.315007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694463514.mount: Deactivated successfully. 
Aug 13 04:20:30.319109 env[1293]: time="2025-08-13T04:20:30.319006209Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a1d27d2dd511b295bc88716651cefd295f4f830838a99fddbbe0fe623534d488\"" Aug 13 04:20:30.325021 env[1293]: time="2025-08-13T04:20:30.323739732Z" level=info msg="StartContainer for \"a1d27d2dd511b295bc88716651cefd295f4f830838a99fddbbe0fe623534d488\"" Aug 13 04:20:30.412793 env[1293]: time="2025-08-13T04:20:30.412738876Z" level=info msg="StartContainer for \"a1d27d2dd511b295bc88716651cefd295f4f830838a99fddbbe0fe623534d488\" returns successfully" Aug 13 04:20:30.455840 env[1293]: time="2025-08-13T04:20:30.455775307Z" level=info msg="shim disconnected" id=a1d27d2dd511b295bc88716651cefd295f4f830838a99fddbbe0fe623534d488 Aug 13 04:20:30.456240 env[1293]: time="2025-08-13T04:20:30.456196331Z" level=warning msg="cleaning up after shim disconnected" id=a1d27d2dd511b295bc88716651cefd295f4f830838a99fddbbe0fe623534d488 namespace=k8s.io Aug 13 04:20:30.456442 env[1293]: time="2025-08-13T04:20:30.456404195Z" level=info msg="cleaning up dead shim" Aug 13 04:20:30.478917 env[1293]: time="2025-08-13T04:20:30.478780646Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4465 runtime=io.containerd.runc.v2\n" Aug 13 04:20:30.762082 kubelet[2119]: E0813 04:20:30.761923 2119 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 04:20:31.293157 env[1293]: time="2025-08-13T04:20:31.293078334Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 04:20:31.322813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509252577.mount: Deactivated successfully. Aug 13 04:20:31.334601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502206419.mount: Deactivated successfully. 
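Both the "Node became not ready" condition and the repeated kubelet error above carry the same reason: the container runtime network is not ready because the CNI plugin (Cilium, still running its init containers at this point) has not initialized. Below is a minimal sketch, assuming the same kubeconfig as in the previous example, that reads that Ready condition from the node object; the node name is taken from the condition entry above.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig path, as in the previous sketch.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Node name as reported in the "Node became not ready" entry.
    node, err := cs.CoreV1().Nodes().Get(context.TODO(), "srv-h1d3j.gb1.brightbox.com", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    for _, c := range node.Status.Conditions {
        if c.Type == corev1.NodeReady {
            // While the CNI is uninitialized this prints reason=KubeletNotReady
            // with the NetworkPluginNotReady message shown in the log.
            fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
        }
    }
}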
Aug 13 04:20:31.337822 env[1293]: time="2025-08-13T04:20:31.337740967Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"293bbdcaef3d6bc9908ec48de509db468b7f528b9102305c0731da3030793ae0\"" Aug 13 04:20:31.339725 env[1293]: time="2025-08-13T04:20:31.339653624Z" level=info msg="StartContainer for \"293bbdcaef3d6bc9908ec48de509db468b7f528b9102305c0731da3030793ae0\"" Aug 13 04:20:31.413216 env[1293]: time="2025-08-13T04:20:31.413155377Z" level=info msg="StartContainer for \"293bbdcaef3d6bc9908ec48de509db468b7f528b9102305c0731da3030793ae0\" returns successfully" Aug 13 04:20:31.441717 env[1293]: time="2025-08-13T04:20:31.441652624Z" level=info msg="shim disconnected" id=293bbdcaef3d6bc9908ec48de509db468b7f528b9102305c0731da3030793ae0 Aug 13 04:20:31.441717 env[1293]: time="2025-08-13T04:20:31.441727988Z" level=warning msg="cleaning up after shim disconnected" id=293bbdcaef3d6bc9908ec48de509db468b7f528b9102305c0731da3030793ae0 namespace=k8s.io Aug 13 04:20:31.442120 env[1293]: time="2025-08-13T04:20:31.441747400Z" level=info msg="cleaning up dead shim" Aug 13 04:20:31.456253 env[1293]: time="2025-08-13T04:20:31.456181262Z" level=warning msg="cleanup warnings time=\"2025-08-13T04:20:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4521 runtime=io.containerd.runc.v2\n" Aug 13 04:20:32.295487 env[1293]: time="2025-08-13T04:20:32.295416766Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 04:20:32.311433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75981974.mount: Deactivated successfully. Aug 13 04:20:32.333420 env[1293]: time="2025-08-13T04:20:32.333347155Z" level=info msg="CreateContainer within sandbox \"3a5ed0dc684a2f290fe3fe6b2487ee23c379cd3cc84277450f258da255a3c3c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6\"" Aug 13 04:20:32.343709 env[1293]: time="2025-08-13T04:20:32.343643451Z" level=info msg="StartContainer for \"e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6\"" Aug 13 04:20:32.435957 env[1293]: time="2025-08-13T04:20:32.435571252Z" level=info msg="StartContainer for \"e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6\" returns successfully" Aug 13 04:20:33.218678 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 04:20:33.326091 kubelet[2119]: I0813 04:20:33.325970 2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4zqtj" podStartSLOduration=5.325925083 podStartE2EDuration="5.325925083s" podCreationTimestamp="2025-08-13 04:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 04:20:33.325477811 +0000 UTC m=+198.089774924" watchObservedRunningTime="2025-08-13 04:20:33.325925083 +0000 UTC m=+198.090222161" Aug 13 04:20:33.895285 systemd[1]: run-containerd-runc-k8s.io-e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6-runc.B40S1H.mount: Deactivated successfully. Aug 13 04:20:36.233964 systemd[1]: run-containerd-runc-k8s.io-e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6-runc.3weYK8.mount: Deactivated successfully. 
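Taken together, the CreateContainer/StartContainer pairs above show Cilium's init containers running to completion in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-lived cilium-agent container starts; init containers run sequentially, which is why each one's shim exits before the next is created. The skeleton below is not the actual Cilium manifest (that is rendered by the Cilium Helm chart); it only arranges the container and volume names seen in the log, and the host paths are assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    hostPath := func(name, path string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: path},
            },
        }
    }

    spec := corev1.PodSpec{
        // Order as observed in the containerd entries; images and all other
        // fields are omitted here.
        InitContainers: []corev1.Container{
            {Name: "mount-cgroup"},
            {Name: "apply-sysctl-overwrites"},
            {Name: "mount-bpf-fs"},
            {Name: "clean-cilium-state"},
        },
        Containers: []corev1.Container{
            {Name: "cilium-agent"},
        },
        // Volume names from the kubelet reconciler entries; the paths are
        // assumptions, not taken from the log.
        Volumes: []corev1.Volume{
            hostPath("cni-path", "/opt/cni/bin"),
            hostPath("cilium-run", "/var/run/cilium"),
            hostPath("host-proc-sys-net", "/proc/sys/net"),
            hostPath("host-proc-sys-kernel", "/proc/sys/kernel"),
        },
    }
    fmt.Printf("%d init containers, %d volumes\n", len(spec.InitContainers), len(spec.Volumes))
}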
Aug 13 04:20:36.840450 systemd-networkd[1069]: lxc_health: Link UP Aug 13 04:20:36.898075 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 04:20:36.902547 systemd-networkd[1069]: lxc_health: Gained carrier Aug 13 04:20:38.169944 systemd-networkd[1069]: lxc_health: Gained IPv6LL Aug 13 04:20:38.507905 systemd[1]: run-containerd-runc-k8s.io-e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6-runc.MQOm0S.mount: Deactivated successfully. Aug 13 04:20:40.778146 systemd[1]: run-containerd-runc-k8s.io-e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6-runc.6fQqiQ.mount: Deactivated successfully. Aug 13 04:20:43.028540 systemd[1]: run-containerd-runc-k8s.io-e07fccc03e92f0a17fcd3e5c46a46504374bb9aae37747e33369ff77a4c57ae6-runc.r6ZAVK.mount: Deactivated successfully. Aug 13 04:20:43.358666 sshd[4141]: pam_unix(sshd:session): session closed for user core Aug 13 04:20:43.366608 systemd[1]: sshd@27-10.244.14.178:22-139.178.89.65:53454.service: Deactivated successfully. Aug 13 04:20:43.369814 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 04:20:43.369833 systemd-logind[1282]: Session 25 logged out. Waiting for processes to exit. Aug 13 04:20:43.375204 systemd-logind[1282]: Removed session 25.
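The systemd-networkd entries above show the lxc_health interface, which Cilium sets up for its health endpoint, gaining carrier and then an IPv6 link-local address once the agent is running. A small sketch that inspects the same interface locally on the node; the interface name is taken from the log and nothing else is assumed.

package main

import (
    "fmt"
    "net"
)

func main() {
    // Interface name as reported by systemd-networkd; this only works when
    // run on the node itself while the interface exists.
    iface, err := net.InterfaceByName("lxc_health")
    if err != nil {
        panic(err)
    }
    addrs, err := iface.Addrs()
    if err != nil {
        panic(err)
    }
    fmt.Println("flags:", iface.Flags)
    for _, a := range addrs {
        // After "Gained IPv6LL" this list includes an fe80::/64 link-local address.
        fmt.Println("addr:", a)
    }
}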