Sep 6 01:54:55.928578 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 01:54:55.928620 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 01:54:55.928640 kernel: BIOS-provided physical RAM map:
Sep 6 01:54:55.928662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 01:54:55.928673 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 01:54:55.928683 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 01:54:55.928695 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 6 01:54:55.928706 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 6 01:54:55.928716 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 6 01:54:55.928726 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 6 01:54:55.928741 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 01:54:55.928751 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 01:54:55.928762 kernel: NX (Execute Disable) protection: active
Sep 6 01:54:55.928772 kernel: SMBIOS 2.8 present.
Sep 6 01:54:55.928785 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 6 01:54:55.928796 kernel: Hypervisor detected: KVM
Sep 6 01:54:55.928811 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 01:54:55.928823 kernel: kvm-clock: cpu 0, msr 4419f001, primary cpu clock
Sep 6 01:54:55.928834 kernel: kvm-clock: using sched offset of 4803026273 cycles
Sep 6 01:54:55.929973 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 01:54:55.929993 kernel: tsc: Detected 2499.998 MHz processor
Sep 6 01:54:55.930005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 01:54:55.930017 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 01:54:55.930028 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 6 01:54:55.930040 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 01:54:55.930058 kernel: Using GB pages for direct mapping
Sep 6 01:54:55.930069 kernel: ACPI: Early table checksum verification disabled
Sep 6 01:54:55.930080 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 6 01:54:55.930092 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930103 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930114 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930125 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 6 01:54:55.930137 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930148 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930163 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930175 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 01:54:55.930186 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 6 01:54:55.930197 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 6 01:54:55.930208 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 6 01:54:55.930219 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 6 01:54:55.930236 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 6 01:54:55.930252 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 6 01:54:55.930264 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 6 01:54:55.930276 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 01:54:55.930287 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 01:54:55.930299 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 6 01:54:55.930311 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Sep 6 01:54:55.930323 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 6 01:54:55.930338 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Sep 6 01:54:55.930350 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 6 01:54:55.930362 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Sep 6 01:54:55.930374 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 6 01:54:55.930386 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Sep 6 01:54:55.930397 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 6 01:54:55.930409 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Sep 6 01:54:55.930421 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 6 01:54:55.930432 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Sep 6 01:54:55.930444 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 6 01:54:55.930459 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Sep 6 01:54:55.930471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 6 01:54:55.930483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 6 01:54:55.930495 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 6 01:54:55.930507 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Sep 6 01:54:55.930519 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Sep 6 01:54:55.930531 kernel: Zone ranges:
Sep 6 01:54:55.930543 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 01:54:55.930556 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 6 01:54:55.930571 kernel: Normal empty
Sep 6 01:54:55.930583 kernel: Movable zone start for each node
Sep 6 01:54:55.930595 kernel: Early memory node ranges
Sep 6 01:54:55.930607 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 01:54:55.930619 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 6 01:54:55.930630 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 6 01:54:55.930642 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 01:54:55.930667 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 01:54:55.930679 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 6 01:54:55.930695 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 01:54:55.930708 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 01:54:55.930720 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 01:54:55.930732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 01:54:55.930744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 01:54:55.930756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 01:54:55.930767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 01:54:55.930779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 01:54:55.930791 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 01:54:55.930807 kernel: TSC deadline timer available
Sep 6 01:54:55.930819 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Sep 6 01:54:55.930831 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 6 01:54:55.930843 kernel: Booting paravirtualized kernel on KVM
Sep 6 01:54:55.930868 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 01:54:55.930881 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Sep 6 01:54:55.930893 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 6 01:54:55.930905 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 6 01:54:55.930916 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 6 01:54:55.930933 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Sep 6 01:54:55.930945 kernel: kvm-guest: PV spinlocks enabled
Sep 6 01:54:55.930957 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 6 01:54:55.930969 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Sep 6 01:54:55.930981 kernel: Policy zone: DMA32
Sep 6 01:54:55.930994 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 01:54:55.931006 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 01:54:55.931018 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 01:54:55.931034 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 01:54:55.931047 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 01:54:55.931059 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 192524K reserved, 0K cma-reserved)
Sep 6 01:54:55.931071 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 6 01:54:55.931083 kernel: Kernel/User page tables isolation: enabled
Sep 6 01:54:55.931095 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 01:54:55.931106 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 01:54:55.931118 kernel: rcu: Hierarchical RCU implementation.
Sep 6 01:54:55.931131 kernel: rcu: RCU event tracing is enabled.
Sep 6 01:54:55.931147 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 6 01:54:55.931159 kernel: Rude variant of Tasks RCU enabled.
Sep 6 01:54:55.931171 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 01:54:55.931183 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 01:54:55.931195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 6 01:54:55.931207 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 6 01:54:55.931219 kernel: random: crng init done
Sep 6 01:54:55.931242 kernel: Console: colour VGA+ 80x25
Sep 6 01:54:55.931255 kernel: printk: console [tty0] enabled
Sep 6 01:54:55.931267 kernel: printk: console [ttyS0] enabled
Sep 6 01:54:55.931280 kernel: ACPI: Core revision 20210730
Sep 6 01:54:55.931292 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 01:54:55.931308 kernel: x2apic enabled
Sep 6 01:54:55.931321 kernel: Switched APIC routing to physical x2apic.
Sep 6 01:54:55.931333 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 6 01:54:55.931346 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 6 01:54:55.931359 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 6 01:54:55.931376 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 6 01:54:55.931388 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 6 01:54:55.931401 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 01:54:55.931413 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 01:54:55.931425 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 01:54:55.931438 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 6 01:54:55.931450 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 01:54:55.931462 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 01:54:55.931475 kernel: MDS: Mitigation: Clear CPU buffers
Sep 6 01:54:55.931487 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 6 01:54:55.931499 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 6 01:54:55.931515 kernel: active return thunk: its_return_thunk
Sep 6 01:54:55.931527 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 01:54:55.931540 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 01:54:55.931552 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 01:54:55.931565 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 01:54:55.931577 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 01:54:55.931589 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 01:54:55.931602 kernel: Freeing SMP alternatives memory: 32K
Sep 6 01:54:55.931614 kernel: pid_max: default: 32768 minimum: 301
Sep 6 01:54:55.931626 kernel: LSM: Security Framework initializing
Sep 6 01:54:55.931638 kernel: SELinux: Initializing.
Sep 6 01:54:55.931664 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 01:54:55.931678 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 01:54:55.931690 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 6 01:54:55.931703 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 6 01:54:55.931716 kernel: signal: max sigframe size: 1776
Sep 6 01:54:55.931728 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 01:54:55.931741 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 01:54:55.931753 kernel: smp: Bringing up secondary CPUs ...
Sep 6 01:54:55.931765 kernel: x86: Booting SMP configuration:
Sep 6 01:54:55.931778 kernel: .... node #0, CPUs: #1
Sep 6 01:54:55.931794 kernel: kvm-clock: cpu 1, msr 4419f041, secondary cpu clock
Sep 6 01:54:55.931807 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 6 01:54:55.931819 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Sep 6 01:54:55.931832 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 01:54:55.931854 kernel: smpboot: Max logical packages: 16
Sep 6 01:54:55.931868 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 6 01:54:55.931881 kernel: devtmpfs: initialized
Sep 6 01:54:55.931893 kernel: x86/mm: Memory block size: 128MB
Sep 6 01:54:55.931906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 01:54:55.931924 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 6 01:54:55.931936 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 01:54:55.931949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 01:54:55.931962 kernel: audit: initializing netlink subsys (disabled)
Sep 6 01:54:55.931974 kernel: audit: type=2000 audit(1757123694.961:1): state=initialized audit_enabled=0 res=1
Sep 6 01:54:55.931986 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 01:54:55.931999 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 01:54:55.932011 kernel: cpuidle: using governor menu
Sep 6 01:54:55.932024 kernel: ACPI: bus type PCI registered
Sep 6 01:54:55.932040 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 01:54:55.932053 kernel: dca service started, version 1.12.1
Sep 6 01:54:55.932065 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 6 01:54:55.932078 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 6 01:54:55.932091 kernel: PCI: Using configuration type 1 for base access
Sep 6 01:54:55.932103 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 01:54:55.932116 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 01:54:55.932128 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 01:54:55.932141 kernel: ACPI: Added _OSI(Module Device)
Sep 6 01:54:55.932157 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 01:54:55.932169 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 01:54:55.932182 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 01:54:55.932194 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 01:54:55.932207 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 01:54:55.932219 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 01:54:55.932232 kernel: ACPI: Interpreter enabled
Sep 6 01:54:55.932244 kernel: ACPI: PM: (supports S0 S5)
Sep 6 01:54:55.932257 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 01:54:55.932273 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 01:54:55.932286 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 6 01:54:55.932299 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 01:54:55.932578 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 01:54:55.932774 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 01:54:55.935906 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 01:54:55.935931 kernel: PCI host bridge to bus 0000:00
Sep 6 01:54:55.936106 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 01:54:55.936257 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 01:54:55.936416 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 01:54:55.936574 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 6 01:54:55.936744 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 6 01:54:55.939982 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 6 01:54:55.940145 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 01:54:55.940347 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 6 01:54:55.940538 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Sep 6 01:54:55.940722 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Sep 6 01:54:55.940904 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Sep 6 01:54:55.941069 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Sep 6 01:54:55.941232 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 01:54:55.941415 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.941579 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Sep 6 01:54:55.941788 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.942012 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Sep 6 01:54:55.942198 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.942363 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Sep 6 01:54:55.942536 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.942721 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Sep 6 01:54:55.942912 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.943075 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Sep 6 01:54:55.943260 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.943425 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Sep 6 01:54:55.943596 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.943780 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Sep 6 01:54:55.943968 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 6 01:54:55.944131 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Sep 6 01:54:55.944303 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 6 01:54:55.944464 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 6 01:54:55.944626 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Sep 6 01:54:55.944810 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 6 01:54:55.945005 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Sep 6 01:54:55.945181 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 6 01:54:55.945345 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 6 01:54:55.945508 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Sep 6 01:54:55.945686 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 6 01:54:55.954913 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 6 01:54:55.955148 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 6 01:54:55.955341 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 6 01:54:55.955509 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Sep 6 01:54:55.955688 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Sep 6 01:54:55.955884 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 6 01:54:55.956065 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 6 01:54:55.956260 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Sep 6 01:54:55.956460 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Sep 6 01:54:55.956660 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 6 01:54:55.956872 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 6 01:54:55.957077 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 6 01:54:55.957291 kernel: pci_bus 0000:02: extended config space not accessible
Sep 6 01:54:55.957531 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Sep 6 01:54:55.957738 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Sep 6 01:54:55.957934 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 6 01:54:55.958105 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 6 01:54:55.958284 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 6 01:54:55.958457 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Sep 6 01:54:55.958622 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 6 01:54:55.958807 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 6 01:54:55.958993 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 6 01:54:55.959173 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 6 01:54:55.959342 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 6 01:54:55.959519 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 6 01:54:55.959695 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 6 01:54:55.959873 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 6 01:54:55.960037 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 6 01:54:55.960209 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 6 01:54:55.960371 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 6 01:54:55.960539 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 6 01:54:55.960716 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 6 01:54:55.960893 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 6 01:54:55.961060 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 6 01:54:55.961221 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 6 01:54:55.961379 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 6 01:54:55.961554 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 6 01:54:55.961731 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 6 01:54:55.974753 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 6 01:54:55.974978 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 6 01:54:55.975146 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 6 01:54:55.975310 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 6 01:54:55.975330 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 01:54:55.975345 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 01:54:55.975368 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 01:54:55.975381 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 01:54:55.975394 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 6 01:54:55.975407 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 6 01:54:55.975420 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 6 01:54:55.975433 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 6 01:54:55.975445 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 6 01:54:55.975458 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 6 01:54:55.975471 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 6 01:54:55.975488 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 6 01:54:55.975501 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 6 01:54:55.975514 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 6 01:54:55.975527 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 6 01:54:55.975540 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 6 01:54:55.975552 kernel: iommu: Default domain type: Translated
Sep 6 01:54:55.975565 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 01:54:55.975741 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 6 01:54:55.975919 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 01:54:55.976089 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 6 01:54:55.976109 kernel: vgaarb: loaded
Sep 6 01:54:55.976122 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 01:54:55.976135 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 01:54:55.976147 kernel: PTP clock support registered
Sep 6 01:54:55.976160 kernel: PCI: Using ACPI for IRQ routing
Sep 6 01:54:55.976173 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 01:54:55.976185 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 01:54:55.976204 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 6 01:54:55.976217 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 01:54:55.976230 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 01:54:55.976243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 01:54:55.976255 kernel: pnp: PnP ACPI init
Sep 6 01:54:55.976454 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 6 01:54:55.976476 kernel: pnp: PnP ACPI: found 5 devices
Sep 6 01:54:55.976490 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 01:54:55.976503 kernel: NET: Registered PF_INET protocol family
Sep 6 01:54:55.976522 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 01:54:55.976535 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 6 01:54:55.976548 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 01:54:55.976561 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 01:54:55.976574 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 6 01:54:55.976587 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 6 01:54:55.976600 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 01:54:55.976612 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 01:54:55.976629 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 01:54:55.976643 kernel: NET: Registered PF_XDP protocol family
Sep 6 01:54:55.976818 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 6 01:54:55.977000 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 6 01:54:55.977168 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 6 01:54:55.977332 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 6 01:54:55.977498 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 6 01:54:55.977687 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 6 01:54:55.977864 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 6 01:54:55.978031 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 6 01:54:55.978194 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 6 01:54:55.978355 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 6 01:54:55.978517 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 6 01:54:55.978694 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 6 01:54:55.978893 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 6 01:54:55.979069 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 6 01:54:55.979243 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 6 01:54:55.979408 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 6 01:54:55.979582 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 6 01:54:55.979766 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 6 01:54:55.979943 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 6 01:54:55.980107 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 6 01:54:55.980278 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 6 01:54:55.980449 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 6 01:54:55.980635 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 6 01:54:55.980821 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 6 01:54:55.981010 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 6 01:54:55.981185 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 6 01:54:55.981348 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 6 01:54:55.981512 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 6 01:54:55.981689 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 6 01:54:55.981879 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 6 01:54:55.982044 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 6 01:54:55.982205 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 6 01:54:55.982370 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 6 01:54:55.982534 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 6 01:54:55.982712 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 6 01:54:55.982906 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 6 01:54:55.983071 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 6 01:54:55.983235 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 6 01:54:55.983397 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 6 01:54:55.983558 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 6 01:54:55.983736 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 6 01:54:55.984011 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 6 01:54:55.984173 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 6 01:54:55.984340 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 6 01:54:55.984500 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 6 01:54:55.984671 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 6 01:54:55.984834 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 6 01:54:55.985014 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 6 01:54:55.985182 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 6 01:54:55.985342 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 6 01:54:55.985496 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 01:54:55.985644 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 01:54:55.985804 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 01:54:55.985966 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 6 01:54:55.986113 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 6 01:54:55.986257 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 6 01:54:55.986424 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 6 01:54:55.986587 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 6 01:54:55.986763 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 6 01:54:55.986944 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 6 01:54:55.987113 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 6 01:54:55.987270 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 6 01:54:55.987424 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 6 01:54:55.987597 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 6 01:54:55.987765 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 6 01:54:55.987941 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 6 01:54:55.988108 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 6 01:54:55.988264 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 6 01:54:55.988418 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 6 01:54:55.988592 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 6 01:54:55.988769 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 6 01:54:55.988948 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 6 01:54:55.989113 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 6 01:54:55.989278 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 6 01:54:55.989442 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 6 01:54:55.989618 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 6 01:54:55.989794 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 6 01:54:55.989971 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 6 01:54:55.990137 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Sep 6 01:54:55.990304 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 6 01:54:55.990467 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 6 01:54:55.990488 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 6 01:54:55.990502 kernel: PCI: CLS 0 bytes, default 64
Sep 6 01:54:55.990516 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 6 01:54:55.990536 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Sep 6 01:54:55.990550 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters,
10737418240 ms ovfl timer Sep 6 01:54:55.990564 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 6 01:54:55.990577 kernel: Initialise system trusted keyrings Sep 6 01:54:55.990591 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 6 01:54:55.990604 kernel: Key type asymmetric registered Sep 6 01:54:55.990617 kernel: Asymmetric key parser 'x509' registered Sep 6 01:54:55.990631 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 01:54:55.990644 kernel: io scheduler mq-deadline registered Sep 6 01:54:55.990675 kernel: io scheduler kyber registered Sep 6 01:54:55.990689 kernel: io scheduler bfq registered Sep 6 01:54:55.997430 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 6 01:54:55.997678 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 6 01:54:55.997867 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:55.998038 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 6 01:54:55.998212 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 6 01:54:55.998392 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:55.998557 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 6 01:54:55.998733 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 6 01:54:55.998910 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:55.999073 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 6 01:54:55.999234 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 6 01:54:55.999404 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ 
NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:55.999567 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 6 01:54:55.999745 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 6 01:54:55.999924 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:56.000088 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 6 01:54:56.000250 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 6 01:54:56.000420 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:56.000583 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 6 01:54:56.000760 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 6 01:54:56.000944 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:56.001112 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 6 01:54:56.001276 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 6 01:54:56.001447 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 6 01:54:56.001469 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 01:54:56.001484 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 6 01:54:56.001498 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 6 01:54:56.001512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 01:54:56.001525 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 01:54:56.001538 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 01:54:56.001558 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 01:54:56.001576 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 
01:54:56.001768 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 6 01:54:56.001791 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 01:54:56.007882 kernel: rtc_cmos 00:03: registered as rtc0 Sep 6 01:54:56.008048 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T01:54:55 UTC (1757123695) Sep 6 01:54:56.008201 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 6 01:54:56.008222 kernel: intel_pstate: CPU model not supported Sep 6 01:54:56.008236 kernel: NET: Registered PF_INET6 protocol family Sep 6 01:54:56.008258 kernel: Segment Routing with IPv6 Sep 6 01:54:56.008272 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 01:54:56.008285 kernel: NET: Registered PF_PACKET protocol family Sep 6 01:54:56.008299 kernel: Key type dns_resolver registered Sep 6 01:54:56.008312 kernel: IPI shorthand broadcast: enabled Sep 6 01:54:56.008326 kernel: sched_clock: Marking stable (1000002026, 226041129)->(1521788045, -295744890) Sep 6 01:54:56.008339 kernel: registered taskstats version 1 Sep 6 01:54:56.008352 kernel: Loading compiled-in X.509 certificates Sep 6 01:54:56.008366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 01:54:56.008384 kernel: Key type .fscrypt registered Sep 6 01:54:56.008397 kernel: Key type fscrypt-provisioning registered Sep 6 01:54:56.008411 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 6 01:54:56.008424 kernel: ima: Allocated hash algorithm: sha1 Sep 6 01:54:56.008437 kernel: ima: No architecture policies found Sep 6 01:54:56.008462 kernel: clk: Disabling unused clocks Sep 6 01:54:56.008475 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 01:54:56.008488 kernel: Write protecting the kernel read-only data: 28672k Sep 6 01:54:56.008506 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 01:54:56.008532 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 01:54:56.008545 kernel: Run /init as init process Sep 6 01:54:56.008559 kernel: with arguments: Sep 6 01:54:56.008573 kernel: /init Sep 6 01:54:56.008586 kernel: with environment: Sep 6 01:54:56.008599 kernel: HOME=/ Sep 6 01:54:56.008623 kernel: TERM=linux Sep 6 01:54:56.008636 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 01:54:56.008670 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:54:56.008707 systemd[1]: Detected virtualization kvm. Sep 6 01:54:56.008722 systemd[1]: Detected architecture x86-64. Sep 6 01:54:56.008735 systemd[1]: Running in initrd. Sep 6 01:54:56.008749 systemd[1]: No hostname configured, using default hostname. Sep 6 01:54:56.008762 systemd[1]: Hostname set to <localhost>. Sep 6 01:54:56.008776 systemd[1]: Initializing machine ID from VM UUID. Sep 6 01:54:56.008790 systemd[1]: Queued start job for default target initrd.target. Sep 6 01:54:56.008809 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:54:56.008822 systemd[1]: Reached target cryptsetup.target. Sep 6 01:54:56.008836 systemd[1]: Reached target paths.target. Sep 6 01:54:56.008850 systemd[1]: Reached target slices.target.
Sep 6 01:54:56.008877 systemd[1]: Reached target swap.target. Sep 6 01:54:56.008891 systemd[1]: Reached target timers.target. Sep 6 01:54:56.008906 systemd[1]: Listening on iscsid.socket. Sep 6 01:54:56.008925 systemd[1]: Listening on iscsiuio.socket. Sep 6 01:54:56.008939 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:54:56.008953 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:54:56.008968 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:54:56.008982 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:54:56.008995 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:54:56.009009 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:54:56.009024 systemd[1]: Reached target sockets.target. Sep 6 01:54:56.009038 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:54:56.009056 systemd[1]: Finished network-cleanup.service. Sep 6 01:54:56.009070 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 01:54:56.009084 systemd[1]: Starting systemd-journald.service... Sep 6 01:54:56.009102 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:54:56.009117 systemd[1]: Starting systemd-resolved.service... Sep 6 01:54:56.009130 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 01:54:56.009144 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:54:56.009158 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 01:54:56.009176 kernel: Bridge firewalling registered Sep 6 01:54:56.009205 systemd-journald[202]: Journal started Sep 6 01:54:56.009279 systemd-journald[202]: Runtime Journal (/run/log/journal/c9d878375c81426aa0e27c91077ef16c) is 4.7M, max 38.1M, 33.3M free. Sep 6 01:54:55.925890 systemd-modules-load[203]: Inserted module 'overlay' Sep 6 01:54:55.976604 systemd-resolved[204]: Positive Trust Anchors: Sep 6 01:54:56.039700 systemd[1]: Started systemd-resolved.service. 
Sep 6 01:54:56.039741 kernel: audit: type=1130 audit(1757123696.023:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.039775 systemd[1]: Started systemd-journald.service. Sep 6 01:54:56.039795 kernel: audit: type=1130 audit(1757123696.030:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.039813 kernel: SCSI subsystem initialized Sep 6 01:54:56.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:55.976627 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:54:56.045788 kernel: audit: type=1130 audit(1757123696.039:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:54:55.976685 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:54:56.057433 kernel: audit: type=1130 audit(1757123696.045:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.057463 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 01:54:56.057482 kernel: device-mapper: uevent: version 1.0.3 Sep 6 01:54:56.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:55.984745 systemd-resolved[204]: Defaulting to hostname 'linux'. Sep 6 01:54:56.066795 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 01:54:56.066823 kernel: audit: type=1130 audit(1757123696.058:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.009686 systemd-modules-load[203]: Inserted module 'br_netfilter' Sep 6 01:54:56.040659 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 6 01:54:56.046714 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 01:54:56.059173 systemd[1]: Reached target nss-lookup.target. Sep 6 01:54:56.069814 systemd-modules-load[203]: Inserted module 'dm_multipath' Sep 6 01:54:56.073078 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 01:54:56.074715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:54:56.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.077453 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:54:56.084498 kernel: audit: type=1130 audit(1757123696.077:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.084933 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:54:56.095731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:54:56.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.102886 kernel: audit: type=1130 audit(1757123696.096:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.103200 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:54:56.109424 kernel: audit: type=1130 audit(1757123696.102:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:54:56.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.109966 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 01:54:56.131475 kernel: audit: type=1130 audit(1757123696.109:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.120094 systemd[1]: Starting dracut-cmdline.service... Sep 6 01:54:56.144212 dracut-cmdline[224]: dracut-dracut-053 Sep 6 01:54:56.147389 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:54:56.234888 kernel: Loading iSCSI transport class v2.0-870. Sep 6 01:54:56.256884 kernel: iscsi: registered transport (tcp) Sep 6 01:54:56.286326 kernel: iscsi: registered transport (qla4xxx) Sep 6 01:54:56.286394 kernel: QLogic iSCSI HBA Driver Sep 6 01:54:56.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.335394 systemd[1]: Finished dracut-cmdline.service. Sep 6 01:54:56.337444 systemd[1]: Starting dracut-pre-udev.service... 
Sep 6 01:54:56.396914 kernel: raid6: sse2x4 gen() 13284 MB/s Sep 6 01:54:56.414899 kernel: raid6: sse2x4 xor() 7650 MB/s Sep 6 01:54:56.432897 kernel: raid6: sse2x2 gen() 9604 MB/s Sep 6 01:54:56.450911 kernel: raid6: sse2x2 xor() 8024 MB/s Sep 6 01:54:56.468922 kernel: raid6: sse2x1 gen() 9949 MB/s Sep 6 01:54:56.487570 kernel: raid6: sse2x1 xor() 7152 MB/s Sep 6 01:54:56.487678 kernel: raid6: using algorithm sse2x4 gen() 13284 MB/s Sep 6 01:54:56.487703 kernel: raid6: .... xor() 7650 MB/s, rmw enabled Sep 6 01:54:56.488862 kernel: raid6: using ssse3x2 recovery algorithm Sep 6 01:54:56.505895 kernel: xor: automatically using best checksumming function avx Sep 6 01:54:56.623893 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 01:54:56.636798 systemd[1]: Finished dracut-pre-udev.service. Sep 6 01:54:56.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.637000 audit: BPF prog-id=7 op=LOAD Sep 6 01:54:56.637000 audit: BPF prog-id=8 op=LOAD Sep 6 01:54:56.638841 systemd[1]: Starting systemd-udevd.service... Sep 6 01:54:56.656431 systemd-udevd[402]: Using default interface naming scheme 'v252'. Sep 6 01:54:56.665947 systemd[1]: Started systemd-udevd.service. Sep 6 01:54:56.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.671479 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 01:54:56.689233 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Sep 6 01:54:56.731294 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 01:54:56.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 01:54:56.733106 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:54:56.826388 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:54:56.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:56.915874 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 6 01:54:56.962767 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 01:54:56.962795 kernel: GPT:17805311 != 125829119 Sep 6 01:54:56.962813 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 01:54:56.962840 kernel: GPT:17805311 != 125829119 Sep 6 01:54:56.962883 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 01:54:56.962901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:54:56.962918 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 01:54:56.964869 kernel: AVX version of gcm_enc/dec engaged. Sep 6 01:54:56.964912 kernel: AES CTR mode by8 optimization enabled Sep 6 01:54:56.968596 kernel: ACPI: bus type USB registered Sep 6 01:54:56.968630 kernel: usbcore: registered new interface driver usbfs Sep 6 01:54:56.971370 kernel: usbcore: registered new interface driver hub Sep 6 01:54:56.972872 kernel: usbcore: registered new device driver usb Sep 6 01:54:57.007963 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 01:54:57.142175 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (455) Sep 6 01:54:57.142209 kernel: libata version 3.00 loaded. 
Sep 6 01:54:57.142227 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 6 01:54:57.142466 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 6 01:54:57.142663 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 6 01:54:57.142874 kernel: ahci 0000:00:1f.2: version 3.0 Sep 6 01:54:57.143070 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 6 01:54:57.143092 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 6 01:54:57.143262 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 6 01:54:57.143431 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 6 01:54:57.143664 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 6 01:54:57.143863 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 6 01:54:57.144056 kernel: hub 1-0:1.0: USB hub found Sep 6 01:54:57.144308 kernel: hub 1-0:1.0: 4 ports detected Sep 6 01:54:57.144504 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Sep 6 01:54:57.144800 kernel: hub 2-0:1.0: USB hub found Sep 6 01:54:57.145060 kernel: hub 2-0:1.0: 4 ports detected Sep 6 01:54:57.145290 kernel: scsi host0: ahci Sep 6 01:54:57.145494 kernel: scsi host1: ahci Sep 6 01:54:57.145714 kernel: scsi host2: ahci Sep 6 01:54:57.145922 kernel: scsi host3: ahci Sep 6 01:54:57.146122 kernel: scsi host4: ahci Sep 6 01:54:57.146310 kernel: scsi host5: ahci Sep 6 01:54:57.146496 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Sep 6 01:54:57.146516 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Sep 6 01:54:57.146541 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Sep 6 01:54:57.146559 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Sep 6 01:54:57.146576 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Sep 6 01:54:57.146594 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Sep 6 01:54:57.145349 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 01:54:57.146074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 01:54:57.162288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 01:54:57.167732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:54:57.170680 systemd[1]: Starting disk-uuid.service... Sep 6 01:54:57.177538 disk-uuid[530]: Primary Header is updated. Sep 6 01:54:57.177538 disk-uuid[530]: Secondary Entries is updated. Sep 6 01:54:57.177538 disk-uuid[530]: Secondary Header is updated. 
Sep 6 01:54:57.181877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:54:57.188883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:54:57.194903 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:54:57.301546 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 6 01:54:57.392890 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 01:54:57.392970 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 6 01:54:57.395251 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 01:54:57.396938 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 6 01:54:57.398609 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 6 01:54:57.400304 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 01:54:57.441888 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 01:54:57.448084 kernel: usbcore: registered new interface driver usbhid Sep 6 01:54:57.448149 kernel: usbhid: USB HID core driver Sep 6 01:54:57.457481 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 6 01:54:57.457547 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 6 01:54:58.193923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 01:54:58.194571 disk-uuid[531]: The operation has completed successfully. Sep 6 01:54:58.256792 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 01:54:58.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.256969 systemd[1]: Finished disk-uuid.service. 
Sep 6 01:54:58.263361 systemd[1]: Starting verity-setup.service... Sep 6 01:54:58.282884 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Sep 6 01:54:58.338771 systemd[1]: Found device dev-mapper-usr.device. Sep 6 01:54:58.340485 systemd[1]: Mounting sysusr-usr.mount... Sep 6 01:54:58.341663 systemd[1]: Finished verity-setup.service. Sep 6 01:54:58.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.434918 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 01:54:58.435516 systemd[1]: Mounted sysusr-usr.mount. Sep 6 01:54:58.436369 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 01:54:58.437383 systemd[1]: Starting ignition-setup.service... Sep 6 01:54:58.440272 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 01:54:58.455972 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:54:58.456040 kernel: BTRFS info (device vda6): using free space tree Sep 6 01:54:58.456061 kernel: BTRFS info (device vda6): has skinny extents Sep 6 01:54:58.475991 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 01:54:58.483065 systemd[1]: Finished ignition-setup.service. Sep 6 01:54:58.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.484942 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 01:54:58.606750 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 01:54:58.609795 systemd[1]: Starting systemd-networkd.service... 
Sep 6 01:54:58.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.608000 audit: BPF prog-id=9 op=LOAD Sep 6 01:54:58.643836 systemd-networkd[712]: lo: Link UP Sep 6 01:54:58.643866 systemd-networkd[712]: lo: Gained carrier Sep 6 01:54:58.645260 systemd-networkd[712]: Enumeration completed Sep 6 01:54:58.645393 systemd[1]: Started systemd-networkd.service. Sep 6 01:54:58.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.647204 systemd[1]: Reached target network.target. Sep 6 01:54:58.647301 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:54:58.649625 systemd[1]: Starting iscsiuio.service... Sep 6 01:54:58.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.660002 systemd-networkd[712]: eth0: Link UP Sep 6 01:54:58.669292 ignition[625]: Ignition 2.14.0 Sep 6 01:54:58.660010 systemd-networkd[712]: eth0: Gained carrier Sep 6 01:54:58.669310 ignition[625]: Stage: fetch-offline Sep 6 01:54:58.676641 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:54:58.676641 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 01:54:58.676641 iscsid[717]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 01:54:58.676641 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 01:54:58.676641 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 01:54:58.676641 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:54:58.676641 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 01:54:58.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.669221 systemd[1]: Started iscsiuio.service. Sep 6 01:54:58.669414 ignition[625]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:54:58.671092 systemd[1]: Starting iscsid.service... Sep 6 01:54:58.669455 ignition[625]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:54:58.678561 systemd[1]: Started iscsid.service. Sep 6 01:54:58.670944 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:54:58.680585 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 01:54:58.671102 ignition[625]: parsed url from cmdline: "" Sep 6 01:54:58.683953 systemd[1]: Starting dracut-initqueue.service... Sep 6 01:54:58.671110 ignition[625]: no config URL provided Sep 6 01:54:58.686789 systemd[1]: Starting ignition-fetch.service... 
Sep 6 01:54:58.671121 ignition[625]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:54:58.698472 systemd-networkd[712]: eth0: DHCPv4 address 10.244.19.198/30, gateway 10.244.19.197 acquired from 10.244.19.197 Sep 6 01:54:58.671138 ignition[625]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:54:58.671148 ignition[625]: failed to fetch config: resource requires networking Sep 6 01:54:58.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.709513 systemd[1]: Finished dracut-initqueue.service. Sep 6 01:54:58.671314 ignition[625]: Ignition finished successfully Sep 6 01:54:58.710450 systemd[1]: Reached target remote-fs-pre.target. Sep 6 01:54:58.700099 ignition[719]: Ignition 2.14.0 Sep 6 01:54:58.711117 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:54:58.700111 ignition[719]: Stage: fetch Sep 6 01:54:58.713672 systemd[1]: Reached target remote-fs.target. Sep 6 01:54:58.700324 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:54:58.715436 systemd[1]: Starting dracut-pre-mount.service... 
Sep 6 01:54:58.700363 ignition[719]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:54:58.704139 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:54:58.704319 ignition[719]: parsed url from cmdline: "" Sep 6 01:54:58.704327 ignition[719]: no config URL provided Sep 6 01:54:58.704338 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:54:58.704355 ignition[719]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:54:58.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.728754 systemd[1]: Finished dracut-pre-mount.service. Sep 6 01:54:58.713042 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 6 01:54:58.713078 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Sep 6 01:54:58.713938 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 6 01:54:58.733518 ignition[719]: GET result: OK Sep 6 01:54:58.733768 ignition[719]: parsing config with SHA512: d03915c578cd092881a29b8564f58c02b9720d6a9944bba3eeb215a78ba5beceaca75cbb12a5902ec22bad1db60ac750a119e74ff6dceb0c82c2433323006d49 Sep 6 01:54:58.744271 unknown[719]: fetched base config from "system" Sep 6 01:54:58.744313 unknown[719]: fetched base config from "system" Sep 6 01:54:58.744869 ignition[719]: fetch: fetch complete Sep 6 01:54:58.744323 unknown[719]: fetched user config from "openstack" Sep 6 01:54:58.744879 ignition[719]: fetch: fetch passed Sep 6 01:54:58.744938 ignition[719]: Ignition finished successfully Sep 6 01:54:58.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.750328 systemd[1]: Finished ignition-fetch.service. Sep 6 01:54:58.752183 systemd[1]: Starting ignition-kargs.service... Sep 6 01:54:58.764984 ignition[737]: Ignition 2.14.0 Sep 6 01:54:58.765005 ignition[737]: Stage: kargs Sep 6 01:54:58.765172 ignition[737]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:54:58.765209 ignition[737]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:54:58.766516 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:54:58.768165 ignition[737]: kargs: kargs passed Sep 6 01:54:58.769501 systemd[1]: Finished ignition-kargs.service. Sep 6 01:54:58.768232 ignition[737]: Ignition finished successfully Sep 6 01:54:58.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:54:58.771768 systemd[1]: Starting ignition-disks.service... Sep 6 01:54:58.784349 ignition[742]: Ignition 2.14.0 Sep 6 01:54:58.785454 ignition[742]: Stage: disks Sep 6 01:54:58.786341 ignition[742]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:54:58.787377 ignition[742]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:54:58.788830 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:54:58.791513 ignition[742]: disks: disks passed Sep 6 01:54:58.792342 ignition[742]: Ignition finished successfully Sep 6 01:54:58.794067 systemd[1]: Finished ignition-disks.service. Sep 6 01:54:58.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.794926 systemd[1]: Reached target initrd-root-device.target. Sep 6 01:54:58.796111 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:54:58.797382 systemd[1]: Reached target local-fs.target. Sep 6 01:54:58.798672 systemd[1]: Reached target sysinit.target. Sep 6 01:54:58.799883 systemd[1]: Reached target basic.target. Sep 6 01:54:58.802382 systemd[1]: Starting systemd-fsck-root.service... Sep 6 01:54:58.821394 systemd-fsck[749]: ROOT: clean, 629/1628000 files, 124065/1617920 blocks Sep 6 01:54:58.827033 systemd[1]: Finished systemd-fsck-root.service. Sep 6 01:54:58.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.828864 systemd[1]: Mounting sysroot.mount... Sep 6 01:54:58.842891 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Sep 6 01:54:58.843145 systemd[1]: Mounted sysroot.mount. Sep 6 01:54:58.843952 systemd[1]: Reached target initrd-root-fs.target. Sep 6 01:54:58.846711 systemd[1]: Mounting sysroot-usr.mount... Sep 6 01:54:58.847968 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 01:54:58.848965 systemd[1]: Starting flatcar-openstack-hostname.service... Sep 6 01:54:58.853106 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 01:54:58.853152 systemd[1]: Reached target ignition-diskful.target. Sep 6 01:54:58.856170 systemd[1]: Mounted sysroot-usr.mount. Sep 6 01:54:58.858463 systemd[1]: Starting initrd-setup-root.service... Sep 6 01:54:58.866473 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 01:54:58.882041 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Sep 6 01:54:58.891399 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 01:54:58.902460 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 01:54:58.970933 systemd[1]: Finished initrd-setup-root.service. Sep 6 01:54:58.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:58.972919 systemd[1]: Starting ignition-mount.service... Sep 6 01:54:58.974486 systemd[1]: Starting sysroot-boot.service... Sep 6 01:54:58.984651 bash[803]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 6 01:54:58.996694 ignition[804]: INFO : Ignition 2.14.0 Sep 6 01:54:58.996694 ignition[804]: INFO : Stage: mount Sep 6 01:54:58.998371 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:54:58.998371 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:54:58.998371 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:54:59.002259 ignition[804]: INFO : mount: mount passed Sep 6 01:54:59.002259 ignition[804]: INFO : Ignition finished successfully Sep 6 01:54:59.000958 systemd[1]: Finished ignition-mount.service. Sep 6 01:54:59.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:59.015136 coreos-metadata[755]: Sep 06 01:54:59.015 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 6 01:54:59.022379 systemd[1]: Finished sysroot-boot.service. Sep 6 01:54:59.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:59.032323 coreos-metadata[755]: Sep 06 01:54:59.032 INFO Fetch successful Sep 6 01:54:59.033396 coreos-metadata[755]: Sep 06 01:54:59.033 INFO wrote hostname srv-jrxph.gb1.brightbox.com to /sysroot/etc/hostname Sep 6 01:54:59.036519 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 6 01:54:59.036684 systemd[1]: Finished flatcar-openstack-hostname.service. Sep 6 01:54:59.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:54:59.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:54:59.361695 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:54:59.375257 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (813) Sep 6 01:54:59.379166 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:54:59.379212 kernel: BTRFS info (device vda6): using free space tree Sep 6 01:54:59.379232 kernel: BTRFS info (device vda6): has skinny extents Sep 6 01:54:59.386460 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 01:54:59.389197 systemd[1]: Starting ignition-files.service... Sep 6 01:54:59.410267 ignition[833]: INFO : Ignition 2.14.0 Sep 6 01:54:59.410267 ignition[833]: INFO : Stage: files Sep 6 01:54:59.412007 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:54:59.412007 ignition[833]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:54:59.412007 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:54:59.416089 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Sep 6 01:54:59.416089 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 01:54:59.416089 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 01:54:59.419119 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 01:54:59.420321 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 01:54:59.421683 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding 
ssh keys to user "core" Sep 6 01:54:59.421448 unknown[833]: wrote ssh authorized keys file for user: core Sep 6 01:54:59.425431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 6 01:54:59.425431 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 6 01:54:59.625253 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 6 01:54:59.864681 systemd-networkd[712]: eth0: Gained IPv6LL Sep 6 01:54:59.889428 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 6 01:54:59.891230 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 01:54:59.892418 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 01:55:00.245737 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 01:55:00.794246 systemd-networkd[712]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4f1:24:19ff:fef4:13c6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4f1:24:19ff:fef4:13c6/64 assigned by NDisc. Sep 6 01:55:00.794269 systemd-networkd[712]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Sep 6 01:55:00.924685 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:55:00.928687 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:55:00.940188 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:55:00.940188 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:55:00.940188 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 01:55:00.940188 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] 
writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 01:55:00.940188 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 01:55:00.940188 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 6 01:55:01.190269 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 01:55:03.407670 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 01:55:03.409610 ignition[833]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 01:55:03.409610 ignition[833]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 01:55:03.409610 ignition[833]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Sep 6 01:55:03.409610 ignition[833]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:55:03.413648 ignition[833]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:55:03.413648 ignition[833]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Sep 6 01:55:03.413648 ignition[833]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 01:55:03.413648 ignition[833]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 01:55:03.413648 ignition[833]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 6 01:55:03.413648 ignition[833]: INFO : 
files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 01:55:03.429465 kernel: kauditd_printk_skb: 28 callbacks suppressed Sep 6 01:55:03.429499 kernel: audit: type=1130 audit(1757123703.420:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.418462 systemd[1]: Finished ignition-files.service. Sep 6 01:55:03.431981 ignition[833]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:55:03.431981 ignition[833]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:55:03.431981 ignition[833]: INFO : files: files passed Sep 6 01:55:03.431981 ignition[833]: INFO : Ignition finished successfully Sep 6 01:55:03.449904 kernel: audit: type=1130 audit(1757123703.438:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.449939 kernel: audit: type=1131 audit(1757123703.439:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 01:55:03.423294 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 01:55:03.430749 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 01:55:03.452210 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 01:55:03.433361 systemd[1]: Starting ignition-quench.service... Sep 6 01:55:03.437799 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 01:55:03.437967 systemd[1]: Finished ignition-quench.service. Sep 6 01:55:03.440206 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 01:55:03.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.461253 systemd[1]: Reached target ignition-complete.target. Sep 6 01:55:03.463714 kernel: audit: type=1130 audit(1757123703.456:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.464108 systemd[1]: Starting initrd-parse-etc.service... Sep 6 01:55:03.482982 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 01:55:03.484049 systemd[1]: Finished initrd-parse-etc.service. Sep 6 01:55:03.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.488083 systemd[1]: Reached target initrd-fs.target. 
Sep 6 01:55:03.495922 kernel: audit: type=1130 audit(1757123703.484:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.495955 kernel: audit: type=1131 audit(1757123703.487:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.496508 systemd[1]: Reached target initrd.target. Sep 6 01:55:03.497297 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 01:55:03.498499 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 01:55:03.514570 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 01:55:03.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.516523 systemd[1]: Starting initrd-cleanup.service... Sep 6 01:55:03.535354 kernel: audit: type=1130 audit(1757123703.514:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.548378 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 01:55:03.549444 systemd[1]: Finished initrd-cleanup.service. Sep 6 01:55:03.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.551839 systemd[1]: Stopped target nss-lookup.target. 
Sep 6 01:55:03.561284 kernel: audit: type=1130 audit(1757123703.550:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.561318 kernel: audit: type=1131 audit(1757123703.550:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.562073 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 01:55:03.563479 systemd[1]: Stopped target timers.target. Sep 6 01:55:03.564838 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 01:55:03.565760 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 01:55:03.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.567229 systemd[1]: Stopped target initrd.target. Sep 6 01:55:03.573476 kernel: audit: type=1131 audit(1757123703.566:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.573570 systemd[1]: Stopped target basic.target. Sep 6 01:55:03.574918 systemd[1]: Stopped target ignition-complete.target. Sep 6 01:55:03.576312 systemd[1]: Stopped target ignition-diskful.target. Sep 6 01:55:03.577680 systemd[1]: Stopped target initrd-root-device.target. Sep 6 01:55:03.579125 systemd[1]: Stopped target remote-fs.target. Sep 6 01:55:03.580471 systemd[1]: Stopped target remote-fs-pre.target. 
Sep 6 01:55:03.581838 systemd[1]: Stopped target sysinit.target. Sep 6 01:55:03.582557 systemd[1]: Stopped target local-fs.target. Sep 6 01:55:03.583834 systemd[1]: Stopped target local-fs-pre.target. Sep 6 01:55:03.585112 systemd[1]: Stopped target swap.target. Sep 6 01:55:03.586337 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 01:55:03.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.586429 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 01:55:03.587638 systemd[1]: Stopped target cryptsetup.target. Sep 6 01:55:03.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.588893 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 01:55:03.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.588959 systemd[1]: Stopped dracut-initqueue.service. Sep 6 01:55:03.590221 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 01:55:03.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.590292 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 01:55:03.591604 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 01:55:03.591668 systemd[1]: Stopped ignition-files.service. Sep 6 01:55:03.594033 systemd[1]: Stopping ignition-mount.service... 
Sep 6 01:55:03.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.598653 systemd[1]: Stopping iscsid.service... Sep 6 01:55:03.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.608968 ignition[871]: INFO : Ignition 2.14.0 Sep 6 01:55:03.608968 ignition[871]: INFO : Stage: umount Sep 6 01:55:03.608968 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:55:03.608968 ignition[871]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Sep 6 01:55:03.608968 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 6 01:55:03.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.617695 iscsid[717]: iscsid shutting down. Sep 6 01:55:03.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:55:03.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.599253 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 01:55:03.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.622391 ignition[871]: INFO : umount: umount passed Sep 6 01:55:03.622391 ignition[871]: INFO : Ignition finished successfully Sep 6 01:55:03.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.599343 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 01:55:03.600830 systemd[1]: Stopping sysroot-boot.service... Sep 6 01:55:03.604140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 01:55:03.604266 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 01:55:03.605111 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 01:55:03.605174 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 01:55:03.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:55:03.607185 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 01:55:03.607326 systemd[1]: Stopped iscsid.service. Sep 6 01:55:03.610206 systemd[1]: Stopping iscsiuio.service... 
Sep 6 01:55:03.613199 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 01:55:03.613349 systemd[1]: Stopped iscsiuio.service.
Sep 6 01:55:03.618409 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 01:55:03.618555 systemd[1]: Stopped ignition-mount.service.
Sep 6 01:55:03.619669 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 01:55:03.619737 systemd[1]: Stopped ignition-disks.service.
Sep 6 01:55:03.620455 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 01:55:03.620515 systemd[1]: Stopped ignition-kargs.service.
Sep 6 01:55:03.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.621982 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 01:55:03.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.622042 systemd[1]: Stopped ignition-fetch.service.
Sep 6 01:55:03.623752 systemd[1]: Stopped target network.target.
Sep 6 01:55:03.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.648000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 01:55:03.624392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 01:55:03.624458 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 01:55:03.625142 systemd[1]: Stopped target paths.target.
Sep 6 01:55:03.625709 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 01:55:03.627337 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 01:55:03.628184 systemd[1]: Stopped target slices.target.
Sep 6 01:55:03.628757 systemd[1]: Stopped target sockets.target.
Sep 6 01:55:03.629452 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 01:55:03.629514 systemd[1]: Closed iscsid.socket.
Sep 6 01:55:03.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.633184 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 01:55:03.633245 systemd[1]: Closed iscsiuio.socket.
Sep 6 01:55:03.633827 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 01:55:03.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.633908 systemd[1]: Stopped ignition-setup.service.
Sep 6 01:55:03.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.635812 systemd[1]: Stopping systemd-networkd.service...
Sep 6 01:55:03.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.636773 systemd[1]: Stopping systemd-resolved.service...
Sep 6 01:55:03.639904 systemd-networkd[712]: eth0: DHCPv6 lease lost
Sep 6 01:55:03.668000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 01:55:03.643009 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 01:55:03.644113 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 01:55:03.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.644306 systemd[1]: Stopped systemd-resolved.service.
Sep 6 01:55:03.645990 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 01:55:03.646159 systemd[1]: Stopped systemd-networkd.service.
Sep 6 01:55:03.647808 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 01:55:03.648003 systemd[1]: Stopped sysroot-boot.service.
Sep 6 01:55:03.649225 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 01:55:03.649289 systemd[1]: Closed systemd-networkd.socket.
Sep 6 01:55:03.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.650317 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 01:55:03.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.650378 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 01:55:03.658749 systemd[1]: Stopping network-cleanup.service...
Sep 6 01:55:03.661729 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 01:55:03.661802 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 01:55:03.663149 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 01:55:03.663229 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 01:55:03.664855 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 01:55:03.664934 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 01:55:03.666011 systemd[1]: Stopping systemd-udevd.service...
Sep 6 01:55:03.668960 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 01:55:03.669718 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 01:55:03.669958 systemd[1]: Stopped systemd-udevd.service.
Sep 6 01:55:03.672465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 01:55:03.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.672555 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 01:55:03.675987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 01:55:03.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:03.676045 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 01:55:03.677147 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 01:55:03.677217 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 01:55:03.678742 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 01:55:03.678813 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 01:55:03.679520 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 01:55:03.679597 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 01:55:03.682017 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 01:55:03.704910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 01:55:03.705039 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 01:55:03.707269 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 01:55:03.707441 systemd[1]: Stopped network-cleanup.service.
Sep 6 01:55:03.709107 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 01:55:03.709243 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 01:55:03.710129 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 01:55:03.712459 systemd[1]: Starting initrd-switch-root.service...
Sep 6 01:55:03.729766 systemd[1]: Switching root.
Sep 6 01:55:03.750416 systemd-journald[202]: Journal stopped
Sep 6 01:55:07.977130 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Sep 6 01:55:07.977245 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 01:55:07.977273 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 01:55:07.977300 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 01:55:07.977320 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 01:55:07.977340 kernel: SELinux: policy capability open_perms=1
Sep 6 01:55:07.977360 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 01:55:07.977386 kernel: SELinux: policy capability always_check_network=0
Sep 6 01:55:07.977411 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 01:55:07.977432 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 01:55:07.977451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 01:55:07.977494 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 01:55:07.977523 systemd[1]: Successfully loaded SELinux policy in 58.392ms.
Sep 6 01:55:07.977554 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.475ms.
Sep 6 01:55:07.977577 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 01:55:07.977599 systemd[1]: Detected virtualization kvm.
Sep 6 01:55:07.977625 systemd[1]: Detected architecture x86-64.
Sep 6 01:55:07.977647 systemd[1]: Detected first boot.
Sep 6 01:55:07.977675 systemd[1]: Hostname set to .
Sep 6 01:55:07.977696 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 01:55:07.977718 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 01:55:07.977739 systemd[1]: Populated /etc with preset unit settings.
Sep 6 01:55:07.977761 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 01:55:07.977783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 01:55:07.977811 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 01:55:07.977834 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 01:55:07.977888 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 01:55:07.977913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 01:55:07.977935 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 01:55:07.977956 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 01:55:07.977978 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 01:55:07.978005 systemd[1]: Created slice system-getty.slice.
Sep 6 01:55:07.978027 systemd[1]: Created slice system-modprobe.slice.
Sep 6 01:55:07.978048 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 01:55:07.978069 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 01:55:07.978090 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 01:55:07.978120 systemd[1]: Created slice user.slice.
Sep 6 01:55:07.978143 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 01:55:07.978164 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 01:55:07.978186 systemd[1]: Set up automount boot.automount.
Sep 6 01:55:07.978213 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 01:55:07.978235 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 01:55:07.978257 systemd[1]: Stopped target initrd-fs.target.
Sep 6 01:55:07.978278 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 01:55:07.978298 systemd[1]: Reached target integritysetup.target.
Sep 6 01:55:07.978319 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 01:55:07.978345 systemd[1]: Reached target remote-fs.target.
Sep 6 01:55:07.978367 systemd[1]: Reached target slices.target.
Sep 6 01:55:07.978389 systemd[1]: Reached target swap.target.
Sep 6 01:55:07.978411 systemd[1]: Reached target torcx.target.
Sep 6 01:55:07.978433 systemd[1]: Reached target veritysetup.target.
Sep 6 01:55:07.978464 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 01:55:07.978490 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 01:55:07.978511 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 01:55:07.978540 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 01:55:07.978561 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 01:55:07.978588 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 01:55:07.978610 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 01:55:07.978631 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 01:55:07.978653 systemd[1]: Mounting media.mount...
Sep 6 01:55:07.978683 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 01:55:07.978707 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 01:55:07.978751 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 01:55:07.978774 systemd[1]: Mounting tmp.mount...
Sep 6 01:55:07.978795 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 01:55:07.978822 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 01:55:07.983812 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 01:55:07.983877 systemd[1]: Starting modprobe@configfs.service...
Sep 6 01:55:07.983904 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 01:55:07.983926 systemd[1]: Starting modprobe@drm.service...
Sep 6 01:55:07.983947 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 01:55:07.983968 systemd[1]: Starting modprobe@fuse.service...
Sep 6 01:55:07.983991 systemd[1]: Starting modprobe@loop.service...
Sep 6 01:55:07.984014 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 01:55:07.984043 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 01:55:07.984065 systemd[1]: Stopped systemd-fsck-root.service.
Sep 6 01:55:07.984086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 01:55:07.984107 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 01:55:07.984129 systemd[1]: Stopped systemd-journald.service.
Sep 6 01:55:07.984150 systemd[1]: Starting systemd-journald.service...
Sep 6 01:55:07.984183 systemd[1]: Starting systemd-modules-load.service...
Sep 6 01:55:07.984205 kernel: loop: module loaded
Sep 6 01:55:07.984226 kernel: fuse: init (API version 7.34)
Sep 6 01:55:07.984253 systemd[1]: Starting systemd-network-generator.service...
Sep 6 01:55:07.984274 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 01:55:07.984295 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 01:55:07.984324 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 01:55:07.984345 systemd[1]: Stopped verity-setup.service.
Sep 6 01:55:07.984367 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 01:55:07.984388 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 01:55:07.984410 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 01:55:07.984431 systemd[1]: Mounted media.mount.
Sep 6 01:55:07.984468 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 01:55:07.984493 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 01:55:07.984515 systemd[1]: Mounted tmp.mount.
Sep 6 01:55:07.984535 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 01:55:07.984557 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 01:55:07.984577 systemd[1]: Finished modprobe@configfs.service.
Sep 6 01:55:07.984598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 01:55:07.984619 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 01:55:07.984642 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 01:55:07.984669 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 01:55:07.984691 systemd[1]: Finished modprobe@drm.service.
Sep 6 01:55:07.984712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 01:55:07.984745 systemd-journald[982]: Journal started
Sep 6 01:55:07.984832 systemd-journald[982]: Runtime Journal (/run/log/journal/c9d878375c81426aa0e27c91077ef16c) is 4.7M, max 38.1M, 33.3M free.
Sep 6 01:55:07.987128 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 01:55:03.918000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 01:55:03.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 01:55:03.998000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 01:55:03.998000 audit: BPF prog-id=10 op=LOAD
Sep 6 01:55:03.998000 audit: BPF prog-id=10 op=UNLOAD
Sep 6 01:55:03.998000 audit: BPF prog-id=11 op=LOAD
Sep 6 01:55:03.998000 audit: BPF prog-id=11 op=UNLOAD
Sep 6 01:55:04.155000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 01:55:04.155000 audit[905]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178bc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=887 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:55:04.155000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 01:55:04.158000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 6 01:55:04.158000 audit[905]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000117995 a2=1ed a3=0 items=2 ppid=887 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:55:04.158000 audit: CWD cwd="/"
Sep 6 01:55:04.158000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:04.158000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:04.158000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 01:55:07.675000 audit: BPF prog-id=12 op=LOAD
Sep 6 01:55:07.675000 audit: BPF prog-id=3 op=UNLOAD
Sep 6 01:55:07.675000 audit: BPF prog-id=13 op=LOAD
Sep 6 01:55:07.676000 audit: BPF prog-id=14 op=LOAD
Sep 6 01:55:07.676000 audit: BPF prog-id=4 op=UNLOAD
Sep 6 01:55:07.676000 audit: BPF prog-id=5 op=UNLOAD
Sep 6 01:55:07.678000 audit: BPF prog-id=15 op=LOAD
Sep 6 01:55:07.678000 audit: BPF prog-id=12 op=UNLOAD
Sep 6 01:55:07.678000 audit: BPF prog-id=16 op=LOAD
Sep 6 01:55:07.678000 audit: BPF prog-id=17 op=LOAD
Sep 6 01:55:07.678000 audit: BPF prog-id=13 op=UNLOAD
Sep 6 01:55:07.678000 audit: BPF prog-id=14 op=UNLOAD
Sep 6 01:55:07.679000 audit: BPF prog-id=18 op=LOAD
Sep 6 01:55:07.679000 audit: BPF prog-id=15 op=UNLOAD
Sep 6 01:55:07.679000 audit: BPF prog-id=19 op=LOAD
Sep 6 01:55:07.679000 audit: BPF prog-id=20 op=LOAD
Sep 6 01:55:07.679000 audit: BPF prog-id=16 op=UNLOAD
Sep 6 01:55:07.679000 audit: BPF prog-id=17 op=UNLOAD
Sep 6 01:55:07.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.693000 audit: BPF prog-id=18 op=UNLOAD
Sep 6 01:55:07.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.990906 systemd[1]: Started systemd-journald.service.
Sep 6 01:55:07.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.866000 audit: BPF prog-id=21 op=LOAD
Sep 6 01:55:07.867000 audit: BPF prog-id=22 op=LOAD
Sep 6 01:55:07.867000 audit: BPF prog-id=23 op=LOAD
Sep 6 01:55:07.867000 audit: BPF prog-id=19 op=UNLOAD
Sep 6 01:55:07.867000 audit: BPF prog-id=20 op=UNLOAD
Sep 6 01:55:07.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.966000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 01:55:07.966000 audit[982]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd06ed2f90 a2=4000 a3=7ffd06ed302c items=0 ppid=1 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:55:07.966000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 01:55:07.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.673746 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 01:55:04.144814 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 01:55:07.673766 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 6 01:55:04.148028 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 01:55:07.681966 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 01:55:04.148083 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 01:55:07.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:07.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:04.148139 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 6 01:55:07.994156 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 01:55:07.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:04.148158 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 6 01:55:07.994387 systemd[1]: Finished modprobe@fuse.service.
Sep 6 01:55:04.148214 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 6 01:55:07.995414 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 01:55:08.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:04.148238 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 6 01:55:07.995627 systemd[1]: Finished modprobe@loop.service.
Sep 6 01:55:04.148643 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 6 01:55:07.996687 systemd[1]: Finished systemd-modules-load.service.
Sep 6 01:55:04.148712 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 01:55:07.997812 systemd[1]: Finished systemd-network-generator.service.
Sep 6 01:55:04.148737 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 01:55:07.999756 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 01:55:04.150912 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 6 01:55:08.002115 systemd[1]: Reached target network-pre.target.
Sep 6 01:55:04.150973 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 6 01:55:04.151006 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 6 01:55:04.151035 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 6 01:55:04.151068 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 6 01:55:04.151094 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 6 01:55:07.045319 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 01:55:07.046024 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 01:55:07.046259 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 01:55:07.046648 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 01:55:07.046773 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 6 01:55:07.046936 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-09-06T01:55:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 6 01:55:08.007817 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 01:55:08.010218 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 01:55:08.010906 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 01:55:08.014473 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 01:55:08.018157 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 01:55:08.020410 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 01:55:08.055009 systemd-journald[982]: Time spent on flushing to /var/log/journal/c9d878375c81426aa0e27c91077ef16c is 62.533ms for 1296 entries.
Sep 6 01:55:08.055009 systemd-journald[982]: System Journal (/var/log/journal/c9d878375c81426aa0e27c91077ef16c) is 8.0M, max 584.8M, 576.8M free.
Sep 6 01:55:08.135191 systemd-journald[982]: Received client request to flush runtime journal.
Sep 6 01:55:08.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.024172 systemd[1]: Starting systemd-random-seed.service...
Sep 6 01:55:08.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.025090 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 01:55:08.027272 systemd[1]: Starting systemd-sysctl.service...
Sep 6 01:55:08.030974 systemd[1]: Starting systemd-sysusers.service...
Sep 6 01:55:08.039580 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 01:55:08.138552 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 01:55:08.040449 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 01:55:08.059760 systemd[1]: Finished systemd-random-seed.service.
Sep 6 01:55:08.060620 systemd[1]: Reached target first-boot-complete.target.
Sep 6 01:55:08.082094 systemd[1]: Finished systemd-sysctl.service.
Sep 6 01:55:08.106006 systemd[1]: Finished systemd-sysusers.service.
Sep 6 01:55:08.113996 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 01:55:08.116560 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 01:55:08.136355 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 01:55:08.700452 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 01:55:08.706532 kernel: kauditd_printk_skb: 106 callbacks suppressed
Sep 6 01:55:08.706604 kernel: audit: type=1130 audit(1757123708.700:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.704164 systemd[1]: Starting systemd-udevd.service...
Sep 6 01:55:08.713594 kernel: audit: type=1334 audit(1757123708.701:147): prog-id=24 op=LOAD
Sep 6 01:55:08.713654 kernel: audit: type=1334 audit(1757123708.702:148): prog-id=25 op=LOAD
Sep 6 01:55:08.713691 kernel: audit: type=1334 audit(1757123708.702:149): prog-id=7 op=UNLOAD
Sep 6 01:55:08.701000 audit: BPF prog-id=24 op=LOAD
Sep 6 01:55:08.702000 audit: BPF prog-id=25 op=LOAD
Sep 6 01:55:08.702000 audit: BPF prog-id=7 op=UNLOAD
Sep 6 01:55:08.715882 kernel: audit: type=1334 audit(1757123708.702:150): prog-id=8 op=UNLOAD
Sep 6 01:55:08.702000 audit: BPF prog-id=8 op=UNLOAD
Sep 6 01:55:08.742833 systemd-udevd[1015]: Using default interface naming scheme 'v252'.
Sep 6 01:55:08.775180 systemd[1]: Started systemd-udevd.service.
Sep 6 01:55:08.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.781873 kernel: audit: type=1130 audit(1757123708.775:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.780000 audit: BPF prog-id=26 op=LOAD
Sep 6 01:55:08.784873 kernel: audit: type=1334 audit(1757123708.780:152): prog-id=26 op=LOAD
Sep 6 01:55:08.784939 systemd[1]: Starting systemd-networkd.service...
Sep 6 01:55:08.795000 audit: BPF prog-id=27 op=LOAD
Sep 6 01:55:08.802216 kernel: audit: type=1334 audit(1757123708.795:153): prog-id=27 op=LOAD
Sep 6 01:55:08.802263 kernel: audit: type=1334 audit(1757123708.798:154): prog-id=28 op=LOAD
Sep 6 01:55:08.802296 kernel: audit: type=1334 audit(1757123708.800:155): prog-id=29 op=LOAD
Sep 6 01:55:08.798000 audit: BPF prog-id=28 op=LOAD
Sep 6 01:55:08.800000 audit: BPF prog-id=29 op=LOAD
Sep 6 01:55:08.803257 systemd[1]: Starting systemd-userdbd.service...
Sep 6 01:55:08.854649 systemd[1]: Started systemd-userdbd.service.
Sep 6 01:55:08.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.881575 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 6 01:55:08.960760 systemd-networkd[1019]: lo: Link UP
Sep 6 01:55:08.960775 systemd-networkd[1019]: lo: Gained carrier
Sep 6 01:55:08.961658 systemd-networkd[1019]: Enumeration completed
Sep 6 01:55:08.961799 systemd[1]: Started systemd-networkd.service.
Sep 6 01:55:08.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:08.964189 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 01:55:08.966789 systemd-networkd[1019]: eth0: Link UP
Sep 6 01:55:08.966803 systemd-networkd[1019]: eth0: Gained carrier
Sep 6 01:55:08.999902 kernel: mousedev: PS/2 mouse device common for all mice
Sep 6 01:55:09.005065 systemd-networkd[1019]: eth0: DHCPv4 address 10.244.19.198/30, gateway 10.244.19.197 acquired from 10.244.19.197
Sep 6 01:55:09.011879 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 6 01:55:09.019449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 01:55:09.026878 kernel: ACPI: button: Power Button [PWRF] Sep 6 01:55:09.062000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:55:09.062000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ae78d08500 a1=338ec a2=7fe3fc9d1bc5 a3=5 items=110 ppid=1015 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:55:09.062000 audit: CWD cwd="/" Sep 6 01:55:09.062000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=1 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=2 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=3 name=(null) inode=15250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=4 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=5 name=(null) inode=15251 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 
01:55:09.062000 audit: PATH item=6 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=7 name=(null) inode=15252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=8 name=(null) inode=15252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=9 name=(null) inode=15253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=10 name=(null) inode=15252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=11 name=(null) inode=15254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=12 name=(null) inode=15252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=13 name=(null) inode=15255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=14 name=(null) inode=15252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=15 name=(null) 
inode=15256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=16 name=(null) inode=15252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=17 name=(null) inode=15257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=18 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=19 name=(null) inode=15258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=20 name=(null) inode=15258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=21 name=(null) inode=15259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=22 name=(null) inode=15258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=23 name=(null) inode=15260 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=24 name=(null) inode=15258 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=25 name=(null) inode=15261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=26 name=(null) inode=15258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=27 name=(null) inode=15262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=28 name=(null) inode=15258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=29 name=(null) inode=15263 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=30 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=31 name=(null) inode=15264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=32 name=(null) inode=15264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=33 name=(null) inode=15265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=34 name=(null) inode=15264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=35 name=(null) inode=15266 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=36 name=(null) inode=15264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=37 name=(null) inode=15267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=38 name=(null) inode=15264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=39 name=(null) inode=15268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=40 name=(null) inode=15264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=41 name=(null) inode=15269 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=42 name=(null) inode=15249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=43 name=(null) inode=15270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=44 name=(null) inode=15270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=45 name=(null) inode=15271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=46 name=(null) inode=15270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=47 name=(null) inode=15272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=48 name=(null) inode=15270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=49 name=(null) inode=15273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=50 name=(null) inode=15270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=51 name=(null) inode=15274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 
audit: PATH item=52 name=(null) inode=15270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=53 name=(null) inode=15275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=55 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=56 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=57 name=(null) inode=15277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=58 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=59 name=(null) inode=15278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=60 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=61 name=(null) inode=15279 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=62 name=(null) inode=15279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=63 name=(null) inode=15280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=64 name=(null) inode=15279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=65 name=(null) inode=15281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=66 name=(null) inode=15279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=67 name=(null) inode=15282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=68 name=(null) inode=15279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=69 name=(null) inode=15283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=70 name=(null) inode=15279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=71 name=(null) inode=15284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=72 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=73 name=(null) inode=15285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=74 name=(null) inode=15285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=75 name=(null) inode=15286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=76 name=(null) inode=15285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=77 name=(null) inode=15287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=78 name=(null) inode=15285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=79 name=(null) inode=15288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=80 name=(null) inode=15285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=81 name=(null) inode=15289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=82 name=(null) inode=15285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=83 name=(null) inode=15290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=84 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=85 name=(null) inode=15291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=86 name=(null) inode=15291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=87 name=(null) inode=15292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:55:09.062000 audit: PATH item=88 name=(null) inode=15291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 01:55:09.062000 audit: PATH item=89 name=(null) inode=15293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=90 name=(null) inode=15291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=91 name=(null) inode=15294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=92 name=(null) inode=15291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=93 name=(null) inode=15295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=94 name=(null) inode=15291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=95 name=(null) inode=15296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=96 name=(null) inode=15276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=97 name=(null) inode=15297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=98 name=(null) inode=15297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=99 name=(null) inode=15298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=100 name=(null) inode=15297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=101 name=(null) inode=15299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=102 name=(null) inode=15297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=103 name=(null) inode=15300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=104 name=(null) inode=15297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=105 name=(null) inode=15301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=106 name=(null) inode=15297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=107 name=(null) inode=15302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PATH item=109 name=(null) inode=15303 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 01:55:09.062000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 6 01:55:09.086869 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 6 01:55:09.121635 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 6 01:55:09.121936 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 6 01:55:09.122138 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Sep 6 01:55:09.294551 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 01:55:09.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.297315 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 01:55:09.320127 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 01:55:09.351493 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 01:55:09.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.352471 systemd[1]: Reached target cryptsetup.target.
Sep 6 01:55:09.354980 systemd[1]: Starting lvm2-activation.service...
Sep 6 01:55:09.360617 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 01:55:09.387384 systemd[1]: Finished lvm2-activation.service.
Sep 6 01:55:09.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.389735 systemd[1]: Reached target local-fs-pre.target.
Sep 6 01:55:09.390414 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 01:55:09.390471 systemd[1]: Reached target local-fs.target.
Sep 6 01:55:09.391166 systemd[1]: Reached target machines.target.
Sep 6 01:55:09.394060 systemd[1]: Starting ldconfig.service...
Sep 6 01:55:09.395346 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 01:55:09.395403 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:09.398109 systemd[1]: Starting systemd-boot-update.service...
Sep 6 01:55:09.401859 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 01:55:09.406586 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 01:55:09.410226 systemd[1]: Starting systemd-sysext.service...
Sep 6 01:55:09.422147 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl)
Sep 6 01:55:09.423977 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 01:55:09.430142 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 01:55:09.457746 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 01:55:09.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.551529 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 01:55:09.551806 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 01:55:09.581037 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 01:55:09.583282 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 01:55:09.585451 kernel: loop0: detected capacity change from 0 to 224512
Sep 6 01:55:09.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.613879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 01:55:09.631911 kernel: loop1: detected capacity change from 0 to 224512
Sep 6 01:55:09.656656 (sd-sysext)[1059]: Using extensions 'kubernetes'.
Sep 6 01:55:09.658047 (sd-sysext)[1059]: Merged extensions into '/usr'.
Sep 6 01:55:09.673505 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31)
Sep 6 01:55:09.673505 systemd-fsck[1056]: /dev/vda1: 790 files, 120761/258078 clusters
Sep 6 01:55:09.687495 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 01:55:09.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.690746 systemd[1]: Mounting boot.mount...
Sep 6 01:55:09.694294 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 01:55:09.696567 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 01:55:09.697504 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 01:55:09.699439 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 01:55:09.704533 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 01:55:09.706961 systemd[1]: Starting modprobe@loop.service...
Sep 6 01:55:09.707810 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 01:55:09.708009 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:09.708186 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 01:55:09.712685 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 01:55:09.714117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 01:55:09.714398 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 01:55:09.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.722352 systemd[1]: Mounted boot.mount.
Sep 6 01:55:09.724184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 01:55:09.724493 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 01:55:09.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.727738 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 01:55:09.728138 systemd[1]: Finished modprobe@loop.service.
Sep 6 01:55:09.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.732370 systemd[1]: Finished systemd-sysext.service.
Sep 6 01:55:09.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:09.742465 systemd[1]: Starting ensure-sysext.service...
Sep 6 01:55:09.744038 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 01:55:09.744128 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 01:55:09.750270 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 01:55:09.760415 systemd[1]: Reloading.
Sep 6 01:55:09.788375 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 01:55:09.804367 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 01:55:09.821993 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 01:55:09.874134 /usr/lib/systemd/system-generators/torcx-generator[1086]: time="2025-09-06T01:55:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 01:55:09.874187 /usr/lib/systemd/system-generators/torcx-generator[1086]: time="2025-09-06T01:55:09Z" level=info msg="torcx already run"
Sep 6 01:55:09.991613 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 01:55:09.991996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 01:55:10.006824 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 01:55:10.022431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 01:55:10.101000 audit: BPF prog-id=30 op=LOAD
Sep 6 01:55:10.101000 audit: BPF prog-id=26 op=UNLOAD
Sep 6 01:55:10.104000 audit: BPF prog-id=31 op=LOAD
Sep 6 01:55:10.104000 audit: BPF prog-id=27 op=UNLOAD
Sep 6 01:55:10.104000 audit: BPF prog-id=32 op=LOAD
Sep 6 01:55:10.104000 audit: BPF prog-id=33 op=LOAD
Sep 6 01:55:10.104000 audit: BPF prog-id=28 op=UNLOAD
Sep 6 01:55:10.104000 audit: BPF prog-id=29 op=UNLOAD
Sep 6 01:55:10.105000 audit: BPF prog-id=34 op=LOAD
Sep 6 01:55:10.105000 audit: BPF prog-id=35 op=LOAD
Sep 6 01:55:10.105000 audit: BPF prog-id=24 op=UNLOAD
Sep 6 01:55:10.105000 audit: BPF prog-id=25 op=UNLOAD
Sep 6 01:55:10.111000 audit: BPF prog-id=36 op=LOAD
Sep 6 01:55:10.111000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 01:55:10.111000 audit: BPF prog-id=37 op=LOAD
Sep 6 01:55:10.111000 audit: BPF prog-id=38 op=LOAD
Sep 6 01:55:10.111000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 01:55:10.111000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 01:55:10.116130 systemd[1]: Finished ldconfig.service.
Sep 6 01:55:10.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.117264 systemd[1]: Finished systemd-boot-update.service.
Sep 6 01:55:10.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.130094 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.132337 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 01:55:10.135580 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 01:55:10.138652 systemd[1]: Starting modprobe@loop.service...
Sep 6 01:55:10.140960 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.141199 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:10.142711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 01:55:10.142976 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 01:55:10.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.144344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 01:55:10.144552 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 01:55:10.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.146739 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 01:55:10.146971 systemd[1]: Finished modprobe@loop.service.
Sep 6 01:55:10.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.150002 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.151816 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 01:55:10.155687 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 01:55:10.157994 systemd[1]: Starting modprobe@loop.service...
Sep 6 01:55:10.158725 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.158909 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:10.160313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 01:55:10.160545 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 01:55:10.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.161792 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 01:55:10.162861 systemd[1]: Finished modprobe@loop.service.
Sep 6 01:55:10.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.164236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 01:55:10.164454 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 01:55:10.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.169364 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.171555 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 01:55:10.175944 systemd[1]: Starting modprobe@drm.service...
Sep 6 01:55:10.178214 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 01:55:10.180591 systemd[1]: Starting modprobe@loop.service...
Sep 6 01:55:10.183043 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.183233 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:10.185223 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 01:55:10.188404 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 01:55:10.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.189801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 01:55:10.190030 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 01:55:10.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.191316 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 01:55:10.191518 systemd[1]: Finished modprobe@drm.service.
Sep 6 01:55:10.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.192937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 01:55:10.193128 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 01:55:10.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.194446 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 01:55:10.194628 systemd[1]: Finished modprobe@loop.service.
Sep 6 01:55:10.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.197973 systemd[1]: Finished ensure-sysext.service.
Sep 6 01:55:10.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.200666 systemd[1]: Starting audit-rules.service...
Sep 6 01:55:10.203750 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 01:55:10.210000 audit: BPF prog-id=39 op=LOAD
Sep 6 01:55:10.209075 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 01:55:10.209865 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 01:55:10.209989 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 01:55:10.212678 systemd[1]: Starting systemd-resolved.service...
Sep 6 01:55:10.215000 audit: BPF prog-id=40 op=LOAD
Sep 6 01:55:10.218122 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 01:55:10.221828 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 01:55:10.235914 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 01:55:10.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.236889 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 01:55:10.241000 audit[1152]: SYSTEM_BOOT pid=1152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.244971 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 01:55:10.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.277392 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 01:55:10.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.280157 systemd[1]: Starting systemd-update-done.service...
Sep 6 01:55:10.293357 systemd[1]: Finished systemd-update-done.service.
Sep 6 01:55:10.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:55:10.328000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 01:55:10.328000 audit[1166]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffceff4f320 a2=420 a3=0 items=0 ppid=1145 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:55:10.328000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 01:55:10.330715 augenrules[1166]: No rules
Sep 6 01:55:10.331074 systemd[1]: Finished audit-rules.service.
Sep 6 01:55:10.336251 systemd-resolved[1149]: Positive Trust Anchors:
Sep 6 01:55:10.337038 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 01:55:10.337207 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 01:55:10.344542 systemd[1]: Started systemd-timesyncd.service.
Sep 6 01:55:10.345388 systemd[1]: Reached target time-set.target.
Sep 6 01:55:10.345624 systemd-resolved[1149]: Using system hostname 'srv-jrxph.gb1.brightbox.com'.
Sep 6 01:55:10.350188 systemd[1]: Started systemd-resolved.service.
Sep 6 01:55:10.350961 systemd[1]: Reached target network.target.
Sep 6 01:55:10.351579 systemd[1]: Reached target nss-lookup.target.
Sep 6 01:55:10.352237 systemd[1]: Reached target sysinit.target.
Sep 6 01:55:10.352977 systemd[1]: Started motdgen.path.
Sep 6 01:55:10.353606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 01:55:10.354636 systemd[1]: Started logrotate.timer.
Sep 6 01:55:10.355376 systemd[1]: Started mdadm.timer.
Sep 6 01:55:10.355973 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 01:55:10.356624 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 01:55:10.356663 systemd[1]: Reached target paths.target.
Sep 6 01:55:10.357284 systemd[1]: Reached target timers.target.
Sep 6 01:55:10.358383 systemd[1]: Listening on dbus.socket.
Sep 6 01:55:11.236516 systemd-timesyncd[1150]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Sep 6 01:55:11.236635 systemd-timesyncd[1150]: Initial clock synchronization to Sat 2025-09-06 01:55:11.236279 UTC.
Sep 6 01:55:11.237441 systemd-resolved[1149]: Clock change detected. Flushing caches.
Sep 6 01:55:11.238780 systemd[1]: Starting docker.socket...
Sep 6 01:55:11.243284 systemd[1]: Listening on sshd.socket.
Sep 6 01:55:11.244049 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:11.244786 systemd[1]: Listening on docker.socket.
Sep 6 01:55:11.245713 systemd[1]: Reached target sockets.target.
Sep 6 01:55:11.246351 systemd[1]: Reached target basic.target.
Sep 6 01:55:11.247008 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 01:55:11.247061 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 01:55:11.248603 systemd[1]: Starting containerd.service...
Sep 6 01:55:11.251756 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 01:55:11.257558 systemd[1]: Starting dbus.service...
Sep 6 01:55:11.260876 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 01:55:11.269808 systemd[1]: Starting extend-filesystems.service...
Sep 6 01:55:11.280314 jq[1177]: false
Sep 6 01:55:11.270866 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 01:55:11.273539 systemd[1]: Starting motdgen.service...
Sep 6 01:55:11.280788 systemd[1]: Starting prepare-helm.service...
Sep 6 01:55:11.287431 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 01:55:11.294630 systemd[1]: Starting sshd-keygen.service...
Sep 6 01:55:11.302951 systemd[1]: Starting systemd-logind.service...
Sep 6 01:55:11.303955 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 01:55:11.304208 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 01:55:11.305024 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 01:55:11.308271 systemd[1]: Starting update-engine.service...
Sep 6 01:55:11.310432 dbus-daemon[1176]: [system] SELinux support is enabled
Sep 6 01:55:11.311344 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 01:55:11.315187 systemd[1]: Started dbus.service.
Sep 6 01:55:11.316939 jq[1193]: true
Sep 6 01:55:11.324034 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 01:55:11.327225 dbus-daemon[1176]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1019 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 6 01:55:11.327600 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 01:55:11.332065 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 01:55:11.332623 systemd[1]: Reached target system-config.target.
Sep 6 01:55:11.333484 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 01:55:11.333534 systemd[1]: Reached target user-config.target.
Sep 6 01:55:11.337828 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 6 01:55:11.341765 tar[1197]: linux-amd64/LICENSE
Sep 6 01:55:11.342435 tar[1197]: linux-amd64/helm
Sep 6 01:55:11.345074 systemd[1]: Starting systemd-hostnamed.service...
Sep 6 01:55:11.354582 extend-filesystems[1180]: Found loop1
Sep 6 01:55:11.366583 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 01:55:11.366869 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 01:55:11.368123 extend-filesystems[1180]: Found vda
Sep 6 01:55:11.368933 extend-filesystems[1180]: Found vda1
Sep 6 01:55:11.369749 extend-filesystems[1180]: Found vda2
Sep 6 01:55:11.374223 extend-filesystems[1180]: Found vda3
Sep 6 01:55:11.375506 extend-filesystems[1180]: Found usr
Sep 6 01:55:11.375506 extend-filesystems[1180]: Found vda4
Sep 6 01:55:11.375506 extend-filesystems[1180]: Found vda6
Sep 6 01:55:11.375506 extend-filesystems[1180]: Found vda7
Sep 6 01:55:11.375506 extend-filesystems[1180]: Found vda9
Sep 6 01:55:11.375506 extend-filesystems[1180]: Checking size of /dev/vda9
Sep 6 01:55:11.375447 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 01:55:11.390473 jq[1198]: true
Sep 6 01:55:11.375721 systemd[1]: Finished motdgen.service.
Sep 6 01:55:11.406403 extend-filesystems[1180]: Resized partition /dev/vda9
Sep 6 01:55:11.419844 extend-filesystems[1216]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 01:55:11.430311 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 01:55:11.430364 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 01:55:11.431123 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Sep 6 01:55:11.500586 update_engine[1192]: I0906 01:55:11.500015 1192 main.cc:92] Flatcar Update Engine starting
Sep 6 01:55:11.505439 systemd[1]: Started update-engine.service.
Sep 6 01:55:11.509054 systemd[1]: Started locksmithd.service.
Sep 6 01:55:11.510462 update_engine[1192]: I0906 01:55:11.510411 1192 update_check_scheduler.cc:74] Next update check in 4m30s
Sep 6 01:55:11.521808 env[1200]: time="2025-09-06T01:55:11.521530511Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 01:55:11.557692 systemd-networkd[1019]: eth0: Gained IPv6LL
Sep 6 01:55:11.561753 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 01:55:11.562694 systemd[1]: Reached target network-online.target.
Sep 6 01:55:11.571645 systemd[1]: Starting kubelet.service...
Sep 6 01:55:11.597119 bash[1233]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 01:55:11.598134 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 01:55:11.611925 env[1200]: time="2025-09-06T01:55:11.611469401Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 01:55:11.613037 env[1200]: time="2025-09-06T01:55:11.612865721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:55:11.618122 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.621652655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.621732160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.633696361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.633742180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.633766838Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.633786830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.633942246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.634412020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.634596913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 01:55:11.634992 env[1200]: time="2025-09-06T01:55:11.634628438Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 01:55:11.635670 env[1200]: time="2025-09-06T01:55:11.634734351Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 01:55:11.635670 env[1200]: time="2025-09-06T01:55:11.634758116Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 01:55:11.638838 extend-filesystems[1216]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 01:55:11.638838 extend-filesystems[1216]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 6 01:55:11.638838 extend-filesystems[1216]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 6 01:55:11.644054 extend-filesystems[1180]: Resized filesystem in /dev/vda9
Sep 6 01:55:11.639338 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 01:55:11.639569 systemd[1]: Finished extend-filesystems.service.
Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648432843Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648505620Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648530237Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"...
type=io.containerd.gc.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648638693Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648761314Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648790303Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648832764Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648859540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648883047Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648924811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648947887Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.648996052Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.649234461Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 01:55:11.651702 env[1200]: time="2025-09-06T01:55:11.649417584Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.649931571Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650014527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650057146Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650179817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650212963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650327654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650360155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650384087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650403969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650422713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650442896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650465075Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650712645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650741729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.652430 env[1200]: time="2025-09-06T01:55:11.650762535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 01:55:11.653036 env[1200]: time="2025-09-06T01:55:11.650781424Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 01:55:11.653036 env[1200]: time="2025-09-06T01:55:11.650805620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 01:55:11.653036 env[1200]: time="2025-09-06T01:55:11.650825297Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 01:55:11.653036 env[1200]: time="2025-09-06T01:55:11.650877614Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 01:55:11.653036 env[1200]: time="2025-09-06T01:55:11.650944129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 01:55:11.653289 env[1200]: time="2025-09-06T01:55:11.651228196Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 01:55:11.653289 env[1200]: time="2025-09-06T01:55:11.651329410Z" level=info msg="Connect containerd service" Sep 6 01:55:11.653289 env[1200]: time="2025-09-06T01:55:11.651414121Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.654338063Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.654803734Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.654877532Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.654979044Z" level=info msg="containerd successfully booted in 0.148083s" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.655440413Z" level=info msg="Start subscribing containerd event" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.655525365Z" level=info msg="Start recovering state" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.655651586Z" level=info msg="Start event monitor" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.655730306Z" level=info msg="Start snapshots syncer" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.655752588Z" level=info msg="Start cni network conf syncer for default" Sep 6 01:55:11.666500 env[1200]: time="2025-09-06T01:55:11.655771216Z" level=info msg="Start streaming server" Sep 6 01:55:11.655091 systemd[1]: Started containerd.service. 
Sep 6 01:55:11.656778 systemd-logind[1189]: Watching system buttons on /dev/input/event2 (Power Button) Sep 6 01:55:11.656827 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 01:55:11.671178 systemd-logind[1189]: New seat seat0. Sep 6 01:55:11.686951 systemd[1]: Started systemd-logind.service. Sep 6 01:55:11.722757 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 01:55:11.723015 systemd[1]: Started systemd-hostnamed.service. Sep 6 01:55:11.723293 dbus-daemon[1176]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1201 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 01:55:11.728197 systemd[1]: Starting polkit.service... Sep 6 01:55:11.755952 polkitd[1240]: Started polkitd version 121 Sep 6 01:55:11.779040 polkitd[1240]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 01:55:11.779394 polkitd[1240]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 01:55:11.782136 polkitd[1240]: Finished loading, compiling and executing 2 rules Sep 6 01:55:11.784151 dbus-daemon[1176]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 01:55:11.784449 systemd[1]: Started polkit.service. Sep 6 01:55:11.785983 polkitd[1240]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 01:55:11.813547 systemd-hostnamed[1201]: Hostname set to (static) Sep 6 01:55:12.236266 tar[1197]: linux-amd64/README.md Sep 6 01:55:12.247374 systemd[1]: Finished prepare-helm.service. Sep 6 01:55:12.440917 locksmithd[1230]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 01:55:13.052421 systemd[1]: Started kubelet.service. 
Sep 6 01:55:13.069288 systemd-networkd[1019]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4f1:24:19ff:fef4:13c6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4f1:24:19ff:fef4:13c6/64 assigned by NDisc. Sep 6 01:55:13.069304 systemd-networkd[1019]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 6 01:55:13.684842 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 01:55:13.694245 kubelet[1256]: E0906 01:55:13.694182 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:55:13.697183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:55:13.697414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:55:13.697957 systemd[1]: kubelet.service: Consumed 1.139s CPU time. Sep 6 01:55:13.717811 systemd[1]: Finished sshd-keygen.service. Sep 6 01:55:13.721481 systemd[1]: Starting issuegen.service... Sep 6 01:55:13.729317 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 01:55:13.729569 systemd[1]: Finished issuegen.service. Sep 6 01:55:13.732601 systemd[1]: Starting systemd-user-sessions.service... Sep 6 01:55:13.743184 systemd[1]: Finished systemd-user-sessions.service. Sep 6 01:55:13.746670 systemd[1]: Started getty@tty1.service. Sep 6 01:55:13.749950 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 01:55:13.751077 systemd[1]: Reached target getty.target. 
Sep 6 01:55:18.713857 coreos-metadata[1175]: Sep 06 01:55:18.713 WARN failed to locate config-drive, using the metadata service API instead Sep 6 01:55:18.778934 coreos-metadata[1175]: Sep 06 01:55:18.778 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 6 01:55:18.809172 coreos-metadata[1175]: Sep 06 01:55:18.809 INFO Fetch successful Sep 6 01:55:18.809360 coreos-metadata[1175]: Sep 06 01:55:18.809 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 01:55:18.841934 coreos-metadata[1175]: Sep 06 01:55:18.841 INFO Fetch successful Sep 6 01:55:18.844247 unknown[1175]: wrote ssh authorized keys file for user: core Sep 6 01:55:18.874719 update-ssh-keys[1279]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:55:18.875811 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 01:55:18.876594 systemd[1]: Reached target multi-user.target. Sep 6 01:55:18.879466 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 01:55:18.890700 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 01:55:18.890959 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 01:55:18.891478 systemd[1]: Startup finished in 1.158s (kernel) + 8.175s (initrd) + 14.165s (userspace) = 23.499s. Sep 6 01:55:21.340746 systemd[1]: Created slice system-sshd.slice. Sep 6 01:55:21.342803 systemd[1]: Started sshd@0-10.244.19.198:22-139.178.89.65:45624.service. Sep 6 01:55:22.374441 sshd[1282]: Accepted publickey for core from 139.178.89.65 port 45624 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:55:22.376791 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:55:22.393599 systemd[1]: Created slice user-500.slice. Sep 6 01:55:22.395586 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 01:55:22.404211 systemd-logind[1189]: New session 1 of user core. 
Sep 6 01:55:22.411392 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 01:55:22.414641 systemd[1]: Starting user@500.service... Sep 6 01:55:22.420322 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:55:22.526007 systemd[1285]: Queued start job for default target default.target. Sep 6 01:55:22.527511 systemd[1285]: Reached target paths.target. Sep 6 01:55:22.527723 systemd[1285]: Reached target sockets.target. Sep 6 01:55:22.528169 systemd[1285]: Reached target timers.target. Sep 6 01:55:22.528364 systemd[1285]: Reached target basic.target. Sep 6 01:55:22.528604 systemd[1285]: Reached target default.target. Sep 6 01:55:22.528708 systemd[1]: Started user@500.service. Sep 6 01:55:22.528947 systemd[1285]: Startup finished in 98ms. Sep 6 01:55:22.532648 systemd[1]: Started session-1.scope. Sep 6 01:55:23.183172 systemd[1]: Started sshd@1-10.244.19.198:22-139.178.89.65:45636.service. Sep 6 01:55:23.923831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 01:55:23.924344 systemd[1]: Stopped kubelet.service. Sep 6 01:55:23.924507 systemd[1]: kubelet.service: Consumed 1.139s CPU time. Sep 6 01:55:23.927700 systemd[1]: Starting kubelet.service... Sep 6 01:55:24.079661 sshd[1294]: Accepted publickey for core from 139.178.89.65 port 45636 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:55:24.082015 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:55:24.092438 systemd-logind[1189]: New session 2 of user core. Sep 6 01:55:24.095050 systemd[1]: Started session-2.scope. Sep 6 01:55:24.102426 systemd[1]: Started kubelet.service. 
Sep 6 01:55:24.210555 kubelet[1300]: E0906 01:55:24.209847 1300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:55:24.214799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:55:24.215080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:55:24.702024 sshd[1294]: pam_unix(sshd:session): session closed for user core Sep 6 01:55:24.706235 systemd[1]: sshd@1-10.244.19.198:22-139.178.89.65:45636.service: Deactivated successfully. Sep 6 01:55:24.707372 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 01:55:24.708287 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit. Sep 6 01:55:24.709986 systemd-logind[1189]: Removed session 2. Sep 6 01:55:24.850164 systemd[1]: Started sshd@2-10.244.19.198:22-139.178.89.65:45648.service. Sep 6 01:55:25.745598 sshd[1310]: Accepted publickey for core from 139.178.89.65 port 45648 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:55:25.747685 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:55:25.755035 systemd[1]: Started session-3.scope. Sep 6 01:55:25.755509 systemd-logind[1189]: New session 3 of user core. Sep 6 01:55:26.363506 sshd[1310]: pam_unix(sshd:session): session closed for user core Sep 6 01:55:26.367999 systemd[1]: sshd@2-10.244.19.198:22-139.178.89.65:45648.service: Deactivated successfully. Sep 6 01:55:26.369251 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 01:55:26.370159 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit. Sep 6 01:55:26.371502 systemd-logind[1189]: Removed session 3. 
Sep 6 01:55:26.511583 systemd[1]: Started sshd@3-10.244.19.198:22-139.178.89.65:45656.service. Sep 6 01:55:27.420885 sshd[1316]: Accepted publickey for core from 139.178.89.65 port 45656 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:55:27.422999 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:55:27.430958 systemd[1]: Started session-4.scope. Sep 6 01:55:27.431187 systemd-logind[1189]: New session 4 of user core. Sep 6 01:55:28.045170 sshd[1316]: pam_unix(sshd:session): session closed for user core Sep 6 01:55:28.049505 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit. Sep 6 01:55:28.049863 systemd[1]: sshd@3-10.244.19.198:22-139.178.89.65:45656.service: Deactivated successfully. Sep 6 01:55:28.050833 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 01:55:28.051855 systemd-logind[1189]: Removed session 4. Sep 6 01:55:28.201791 systemd[1]: Started sshd@4-10.244.19.198:22-139.178.89.65:45670.service. Sep 6 01:55:29.153945 sshd[1322]: Accepted publickey for core from 139.178.89.65 port 45670 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:55:29.156575 sshd[1322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:55:29.163894 systemd[1]: Started session-5.scope. Sep 6 01:55:29.164515 systemd-logind[1189]: New session 5 of user core. Sep 6 01:55:29.674785 sudo[1325]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:55:29.675202 sudo[1325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:55:29.720419 systemd[1]: Starting docker.service... 
Sep 6 01:55:29.784163 env[1335]: time="2025-09-06T01:55:29.784014049Z" level=info msg="Starting up" Sep 6 01:55:29.786416 env[1335]: time="2025-09-06T01:55:29.786381081Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:55:29.786596 env[1335]: time="2025-09-06T01:55:29.786565370Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:55:29.786726 env[1335]: time="2025-09-06T01:55:29.786693596Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:55:29.786859 env[1335]: time="2025-09-06T01:55:29.786830288Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:55:29.791200 env[1335]: time="2025-09-06T01:55:29.791155690Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:55:29.791340 env[1335]: time="2025-09-06T01:55:29.791311132Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:55:29.791497 env[1335]: time="2025-09-06T01:55:29.791464860Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:55:29.791604 env[1335]: time="2025-09-06T01:55:29.791576861Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:55:29.847041 env[1335]: time="2025-09-06T01:55:29.846983575Z" level=info msg="Loading containers: start." Sep 6 01:55:30.016155 kernel: Initializing XFRM netlink socket Sep 6 01:55:30.073158 env[1335]: time="2025-09-06T01:55:30.073079772Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 01:55:30.171891 systemd-networkd[1019]: docker0: Link UP Sep 6 01:55:30.208253 env[1335]: time="2025-09-06T01:55:30.208203446Z" level=info msg="Loading containers: done." 
Sep 6 01:55:30.229344 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1376559845-merged.mount: Deactivated successfully. Sep 6 01:55:30.233277 env[1335]: time="2025-09-06T01:55:30.233214920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 01:55:30.233578 env[1335]: time="2025-09-06T01:55:30.233539116Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 01:55:30.233759 env[1335]: time="2025-09-06T01:55:30.233725359Z" level=info msg="Daemon has completed initialization" Sep 6 01:55:30.253132 systemd[1]: Started docker.service. Sep 6 01:55:30.265121 env[1335]: time="2025-09-06T01:55:30.265023722Z" level=info msg="API listen on /run/docker.sock" Sep 6 01:55:31.537325 env[1200]: time="2025-09-06T01:55:31.537138414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 6 01:55:32.538275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787627378.mount: Deactivated successfully. Sep 6 01:55:34.319523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:55:34.319959 systemd[1]: Stopped kubelet.service. Sep 6 01:55:34.323419 systemd[1]: Starting kubelet.service... Sep 6 01:55:34.508578 systemd[1]: Started kubelet.service. Sep 6 01:55:34.619593 kubelet[1466]: E0906 01:55:34.619347 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:55:34.621815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:55:34.622057 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 01:55:34.855449 env[1200]: time="2025-09-06T01:55:34.855226125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:34.857986 env[1200]: time="2025-09-06T01:55:34.857950365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:34.862533 env[1200]: time="2025-09-06T01:55:34.861240989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:34.864258 env[1200]: time="2025-09-06T01:55:34.864211016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:34.865909 env[1200]: time="2025-09-06T01:55:34.865807975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 6 01:55:34.867750 env[1200]: time="2025-09-06T01:55:34.867712468Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 6 01:55:40.487821 env[1200]: time="2025-09-06T01:55:40.487706185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:40.491285 env[1200]: time="2025-09-06T01:55:40.491238599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Sep 6 01:55:40.496537 env[1200]: time="2025-09-06T01:55:40.496487925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:40.500275 env[1200]: time="2025-09-06T01:55:40.500232335Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:40.501716 env[1200]: time="2025-09-06T01:55:40.501668047Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 6 01:55:40.507122 env[1200]: time="2025-09-06T01:55:40.507025989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 6 01:55:42.719827 env[1200]: time="2025-09-06T01:55:42.719687449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:42.722982 env[1200]: time="2025-09-06T01:55:42.722939551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:42.725864 env[1200]: time="2025-09-06T01:55:42.725812770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:42.728404 env[1200]: time="2025-09-06T01:55:42.728368327Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:42.729633 env[1200]: time="2025-09-06T01:55:42.729574998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 6 01:55:42.731283 env[1200]: time="2025-09-06T01:55:42.731219607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 01:55:43.087355 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 6 01:55:44.443358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082983065.mount: Deactivated successfully. Sep 6 01:55:44.819233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 01:55:44.819549 systemd[1]: Stopped kubelet.service. Sep 6 01:55:44.823032 systemd[1]: Starting kubelet.service... Sep 6 01:55:45.033270 systemd[1]: Started kubelet.service. Sep 6 01:55:45.122564 kubelet[1478]: E0906 01:55:45.121965 1478 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:55:45.123709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:55:45.123958 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 01:55:45.543972 env[1200]: time="2025-09-06T01:55:45.543269026Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:45.547624 env[1200]: time="2025-09-06T01:55:45.547587819Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:45.550188 env[1200]: time="2025-09-06T01:55:45.550141101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:45.551959 env[1200]: time="2025-09-06T01:55:45.551923248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:45.552511 env[1200]: time="2025-09-06T01:55:45.552423880Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 6 01:55:45.554735 env[1200]: time="2025-09-06T01:55:45.554701226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 01:55:46.256377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708400809.mount: Deactivated successfully. 
Sep 6 01:55:47.951303 env[1200]: time="2025-09-06T01:55:47.951087244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:47.958766 env[1200]: time="2025-09-06T01:55:47.958700091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:47.976141 env[1200]: time="2025-09-06T01:55:47.976063462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:47.978676 env[1200]: time="2025-09-06T01:55:47.978632867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:47.980244 env[1200]: time="2025-09-06T01:55:47.980170567Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 01:55:47.983344 env[1200]: time="2025-09-06T01:55:47.983175685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 01:55:48.723391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331509243.mount: Deactivated successfully. 
Sep 6 01:55:48.749377 env[1200]: time="2025-09-06T01:55:48.749309151Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:48.751308 env[1200]: time="2025-09-06T01:55:48.751272279Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:48.756548 env[1200]: time="2025-09-06T01:55:48.756492843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:48.760244 env[1200]: time="2025-09-06T01:55:48.760159113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:48.761612 env[1200]: time="2025-09-06T01:55:48.761565250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 01:55:48.762959 env[1200]: time="2025-09-06T01:55:48.762921861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 6 01:55:49.523374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119173431.mount: Deactivated successfully. 
Sep 6 01:55:53.436664 env[1200]: time="2025-09-06T01:55:53.436435537Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:53.440154 env[1200]: time="2025-09-06T01:55:53.440093870Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:53.443250 env[1200]: time="2025-09-06T01:55:53.443213436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:53.446182 env[1200]: time="2025-09-06T01:55:53.446126436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:55:53.448093 env[1200]: time="2025-09-06T01:55:53.448028314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 6 01:55:55.319729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 6 01:55:55.320109 systemd[1]: Stopped kubelet.service. Sep 6 01:55:55.323440 systemd[1]: Starting kubelet.service... Sep 6 01:55:55.919463 systemd[1]: Started kubelet.service. 
Sep 6 01:55:55.985244 kubelet[1505]: E0906 01:55:55.985137 1505 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:55:55.986874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:55:55.987158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:55:56.406110 update_engine[1192]: I0906 01:55:56.405987 1192 update_attempter.cc:509] Updating boot flags... Sep 6 01:55:58.097613 systemd[1]: Stopped kubelet.service. Sep 6 01:55:58.101806 systemd[1]: Starting kubelet.service... Sep 6 01:55:58.140965 systemd[1]: Reloading. Sep 6 01:55:58.283872 /usr/lib/systemd/system-generators/torcx-generator[1554]: time="2025-09-06T01:55:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:55:58.284742 /usr/lib/systemd/system-generators/torcx-generator[1554]: time="2025-09-06T01:55:58Z" level=info msg="torcx already run" Sep 6 01:55:58.399216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:55:58.399889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:55:58.429186 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 6 01:55:58.585937 systemd[1]: Started kubelet.service. Sep 6 01:55:58.588140 systemd[1]: Stopping kubelet.service... Sep 6 01:55:58.588958 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:55:58.589291 systemd[1]: Stopped kubelet.service. Sep 6 01:55:58.591939 systemd[1]: Starting kubelet.service... Sep 6 01:55:58.860277 systemd[1]: Started kubelet.service. Sep 6 01:55:58.973536 kubelet[1605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:55:58.973536 kubelet[1605]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 01:55:58.973536 kubelet[1605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 01:55:58.974348 kubelet[1605]: I0906 01:55:58.973659 1605 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:55:59.451760 kubelet[1605]: I0906 01:55:59.451709 1605 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 01:55:59.451997 kubelet[1605]: I0906 01:55:59.451972 1605 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:55:59.452559 kubelet[1605]: I0906 01:55:59.452531 1605 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 01:55:59.511714 kubelet[1605]: E0906 01:55:59.511657 1605 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.19.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:55:59.513419 kubelet[1605]: I0906 01:55:59.513388 1605 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:55:59.528506 kubelet[1605]: E0906 01:55:59.528435 1605 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:55:59.528506 kubelet[1605]: I0906 01:55:59.528493 1605 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:55:59.535845 kubelet[1605]: I0906 01:55:59.535810 1605 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:55:59.537408 kubelet[1605]: I0906 01:55:59.537346 1605 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:55:59.537701 kubelet[1605]: I0906 01:55:59.537404 1605 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jrxph.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 01:55:59.537975 kubelet[1605]: I0906 01:55:59.537726 1605 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 6 01:55:59.537975 kubelet[1605]: I0906 01:55:59.537754 1605 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 01:55:59.538145 kubelet[1605]: I0906 01:55:59.537991 1605 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:55:59.543134 kubelet[1605]: I0906 01:55:59.543063 1605 kubelet.go:446] "Attempting to sync node with API server" Sep 6 01:55:59.543134 kubelet[1605]: I0906 01:55:59.543135 1605 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:55:59.543328 kubelet[1605]: I0906 01:55:59.543176 1605 kubelet.go:352] "Adding apiserver pod source" Sep 6 01:55:59.543328 kubelet[1605]: I0906 01:55:59.543200 1605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:55:59.548761 kubelet[1605]: I0906 01:55:59.548728 1605 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:55:59.549385 kubelet[1605]: I0906 01:55:59.549354 1605 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:55:59.551908 kubelet[1605]: W0906 01:55:59.551848 1605 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 6 01:55:59.554721 kubelet[1605]: I0906 01:55:59.554684 1605 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 01:55:59.554811 kubelet[1605]: I0906 01:55:59.554747 1605 server.go:1287] "Started kubelet" Sep 6 01:55:59.555565 kubelet[1605]: W0906 01:55:59.554964 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.19.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:55:59.555565 kubelet[1605]: E0906 01:55:59.555063 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.19.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:55:59.555565 kubelet[1605]: W0906 01:55:59.555199 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.19.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrxph.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:55:59.555565 kubelet[1605]: E0906 01:55:59.555247 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.19.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrxph.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:55:59.568496 kubelet[1605]: I0906 01:55:59.568422 1605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:55:59.569389 kubelet[1605]: I0906 01:55:59.569362 1605 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:55:59.571201 kubelet[1605]: E0906 01:55:59.569852 1605 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.19.198:6443/api/v1/namespaces/default/events\": dial tcp 10.244.19.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jrxph.gb1.brightbox.com.18628ebfeaa861a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jrxph.gb1.brightbox.com,UID:srv-jrxph.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jrxph.gb1.brightbox.com,},FirstTimestamp:2025-09-06 01:55:59.554716072 +0000 UTC m=+0.683238456,LastTimestamp:2025-09-06 01:55:59.554716072 +0000 UTC m=+0.683238456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jrxph.gb1.brightbox.com,}" Sep 6 01:55:59.571384 kubelet[1605]: I0906 01:55:59.571239 1605 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:55:59.576034 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 01:55:59.576190 kubelet[1605]: I0906 01:55:59.573657 1605 server.go:479] "Adding debug handlers to kubelet server" Sep 6 01:55:59.576484 kubelet[1605]: I0906 01:55:59.576445 1605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:55:59.580138 kubelet[1605]: I0906 01:55:59.580077 1605 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:55:59.583042 kubelet[1605]: I0906 01:55:59.583008 1605 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 01:55:59.583549 kubelet[1605]: E0906 01:55:59.583515 1605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jrxph.gb1.brightbox.com\" not found" Sep 6 01:55:59.584584 kubelet[1605]: I0906 01:55:59.584558 1605 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 01:55:59.584826 kubelet[1605]: I0906 01:55:59.584803 1605 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:55:59.586179 kubelet[1605]: W0906 01:55:59.586091 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.19.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:55:59.586442 kubelet[1605]: E0906 01:55:59.586399 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.19.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:55:59.586611 kubelet[1605]: E0906 01:55:59.586576 1605 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:55:59.586739 kubelet[1605]: E0906 01:55:59.586704 1605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrxph.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.198:6443: connect: connection refused" interval="200ms" Sep 6 01:55:59.587294 kubelet[1605]: I0906 01:55:59.587238 1605 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:55:59.587853 kubelet[1605]: I0906 01:55:59.587820 1605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:55:59.590009 kubelet[1605]: I0906 01:55:59.589981 1605 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:55:59.620912 kubelet[1605]: I0906 01:55:59.620871 1605 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 01:55:59.620912 kubelet[1605]: I0906 01:55:59.620900 1605 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 01:55:59.621246 kubelet[1605]: I0906 01:55:59.620956 1605 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:55:59.622879 kubelet[1605]: I0906 01:55:59.622823 1605 policy_none.go:49] "None policy: Start" Sep 6 01:55:59.622879 kubelet[1605]: I0906 01:55:59.622863 1605 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 01:55:59.623026 kubelet[1605]: I0906 01:55:59.622896 1605 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:55:59.631627 systemd[1]: Created slice kubepods.slice. Sep 6 01:55:59.638579 kubelet[1605]: I0906 01:55:59.638536 1605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:55:59.639709 systemd[1]: Created slice kubepods-burstable.slice. 
Sep 6 01:55:59.643999 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 01:55:59.645816 kubelet[1605]: I0906 01:55:59.645780 1605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 01:55:59.645938 kubelet[1605]: I0906 01:55:59.645836 1605 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 01:55:59.645938 kubelet[1605]: I0906 01:55:59.645872 1605 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 01:55:59.645938 kubelet[1605]: I0906 01:55:59.645889 1605 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 01:55:59.646180 kubelet[1605]: E0906 01:55:59.645984 1605 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:55:59.648766 kubelet[1605]: W0906 01:55:59.648727 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.19.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:55:59.648955 kubelet[1605]: E0906 01:55:59.648918 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.19.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:55:59.651598 kubelet[1605]: I0906 01:55:59.651569 1605 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:55:59.651945 kubelet[1605]: I0906 01:55:59.651920 1605 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:55:59.652125 kubelet[1605]: I0906 01:55:59.652058 1605 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:55:59.653127 kubelet[1605]: I0906 01:55:59.653089 1605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:55:59.656559 kubelet[1605]: E0906 01:55:59.656532 1605 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 01:55:59.656787 kubelet[1605]: E0906 01:55:59.656761 1605 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-jrxph.gb1.brightbox.com\" not found" Sep 6 01:55:59.757711 kubelet[1605]: I0906 01:55:59.757580 1605 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.760323 kubelet[1605]: E0906 01:55:59.760276 1605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.198:6443/api/v1/nodes\": dial tcp 10.244.19.198:6443: connect: connection refused" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.763421 systemd[1]: Created slice kubepods-burstable-podf7681bd196c82e88e753c2be93d0ee7b.slice. Sep 6 01:55:59.776295 kubelet[1605]: E0906 01:55:59.776083 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.779900 systemd[1]: Created slice kubepods-burstable-pod4c5d57f09af8aeeba4f6e1c52b1579a4.slice. 
Sep 6 01:55:59.784447 kubelet[1605]: E0906 01:55:59.784420 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.787832 kubelet[1605]: I0906 01:55:59.786684 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-flexvolume-dir\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.787832 kubelet[1605]: I0906 01:55:59.786732 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-k8s-certs\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.787832 kubelet[1605]: I0906 01:55:59.786766 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c5d57f09af8aeeba4f6e1c52b1579a4-kubeconfig\") pod \"kube-scheduler-srv-jrxph.gb1.brightbox.com\" (UID: \"4c5d57f09af8aeeba4f6e1c52b1579a4\") " pod="kube-system/kube-scheduler-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.787832 kubelet[1605]: I0906 01:55:59.786791 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-kubeconfig\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 
01:55:59.787832 kubelet[1605]: I0906 01:55:59.786822 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.787555 systemd[1]: Created slice kubepods-burstable-podd6cc5f22e5c5504a51b93994cb29d7e0.slice. Sep 6 01:55:59.788355 kubelet[1605]: I0906 01:55:59.786851 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7681bd196c82e88e753c2be93d0ee7b-ca-certs\") pod \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" (UID: \"f7681bd196c82e88e753c2be93d0ee7b\") " pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.788355 kubelet[1605]: I0906 01:55:59.786875 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7681bd196c82e88e753c2be93d0ee7b-k8s-certs\") pod \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" (UID: \"f7681bd196c82e88e753c2be93d0ee7b\") " pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.788355 kubelet[1605]: I0906 01:55:59.786901 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7681bd196c82e88e753c2be93d0ee7b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" (UID: \"f7681bd196c82e88e753c2be93d0ee7b\") " pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.788355 kubelet[1605]: I0906 01:55:59.786931 1605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-ca-certs\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.788355 kubelet[1605]: E0906 01:55:59.787418 1605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrxph.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.198:6443: connect: connection refused" interval="400ms" Sep 6 01:55:59.790952 kubelet[1605]: E0906 01:55:59.790662 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.965059 kubelet[1605]: I0906 01:55:59.964996 1605 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:55:59.965950 kubelet[1605]: E0906 01:55:59.965887 1605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.198:6443/api/v1/nodes\": dial tcp 10.244.19.198:6443: connect: connection refused" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:00.079136 env[1200]: time="2025-09-06T01:56:00.078977209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jrxph.gb1.brightbox.com,Uid:f7681bd196c82e88e753c2be93d0ee7b,Namespace:kube-system,Attempt:0,}" Sep 6 01:56:00.085776 env[1200]: time="2025-09-06T01:56:00.085513585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jrxph.gb1.brightbox.com,Uid:4c5d57f09af8aeeba4f6e1c52b1579a4,Namespace:kube-system,Attempt:0,}" Sep 6 01:56:00.092443 env[1200]: time="2025-09-06T01:56:00.092311559Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-jrxph.gb1.brightbox.com,Uid:d6cc5f22e5c5504a51b93994cb29d7e0,Namespace:kube-system,Attempt:0,}" Sep 6 01:56:00.188960 kubelet[1605]: E0906 01:56:00.188887 1605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrxph.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.198:6443: connect: connection refused" interval="800ms" Sep 6 01:56:00.369908 kubelet[1605]: I0906 01:56:00.369549 1605 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:00.370645 kubelet[1605]: E0906 01:56:00.370599 1605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.198:6443/api/v1/nodes\": dial tcp 10.244.19.198:6443: connect: connection refused" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:00.808458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338528313.mount: Deactivated successfully. 
Sep 6 01:56:00.826622 env[1200]: time="2025-09-06T01:56:00.826544087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.828458 env[1200]: time="2025-09-06T01:56:00.828418278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.831270 env[1200]: time="2025-09-06T01:56:00.831230794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.833376 env[1200]: time="2025-09-06T01:56:00.833340137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.835478 env[1200]: time="2025-09-06T01:56:00.835441950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.836590 env[1200]: time="2025-09-06T01:56:00.836554564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.838530 env[1200]: time="2025-09-06T01:56:00.838470028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.840584 env[1200]: time="2025-09-06T01:56:00.840548748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 
01:56:00.842517 kubelet[1605]: W0906 01:56:00.842433 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.19.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrxph.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:56:00.842671 kubelet[1605]: E0906 01:56:00.842539 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.19.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrxph.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:56:00.842973 env[1200]: time="2025-09-06T01:56:00.842931729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.847008 env[1200]: time="2025-09-06T01:56:00.846968473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.853871 env[1200]: time="2025-09-06T01:56:00.853829572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.855277 env[1200]: time="2025-09-06T01:56:00.855242315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:56:00.878447 kubelet[1605]: W0906 01:56:00.878369 1605 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.19.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:56:00.878619 kubelet[1605]: E0906 01:56:00.878458 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.19.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:56:00.901932 env[1200]: time="2025-09-06T01:56:00.901785652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:56:00.902120 env[1200]: time="2025-09-06T01:56:00.902017469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:56:00.902234 env[1200]: time="2025-09-06T01:56:00.902162404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:56:00.902322 env[1200]: time="2025-09-06T01:56:00.902240136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:56:00.902488 env[1200]: time="2025-09-06T01:56:00.901897914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:56:00.902488 env[1200]: time="2025-09-06T01:56:00.902453021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:56:00.903077 env[1200]: time="2025-09-06T01:56:00.903006004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cf83e642169992050e6657f4c6811288c8c2813f955fa7e30a71397fb534a0f pid=1653 runtime=io.containerd.runc.v2 Sep 6 01:56:00.903291 env[1200]: time="2025-09-06T01:56:00.903219410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6a5fbcd63cd6f7ce439033dee034c08bc7abda723b0ac772faf146058dbccc3 pid=1654 runtime=io.containerd.runc.v2 Sep 6 01:56:00.912150 kubelet[1605]: W0906 01:56:00.911072 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.19.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:56:00.912150 kubelet[1605]: E0906 01:56:00.911152 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.19.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:56:00.931225 env[1200]: time="2025-09-06T01:56:00.931062480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:56:00.931225 env[1200]: time="2025-09-06T01:56:00.931170402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:56:00.931607 env[1200]: time="2025-09-06T01:56:00.931190137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:56:00.931869 env[1200]: time="2025-09-06T01:56:00.931811292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9a88b645d5b919f23bc35ac707b926d88624ca18e185a548935c9f15e8c662a pid=1683 runtime=io.containerd.runc.v2 Sep 6 01:56:00.940498 systemd[1]: Started cri-containerd-e6a5fbcd63cd6f7ce439033dee034c08bc7abda723b0ac772faf146058dbccc3.scope. Sep 6 01:56:00.962452 systemd[1]: Started cri-containerd-5cf83e642169992050e6657f4c6811288c8c2813f955fa7e30a71397fb534a0f.scope. Sep 6 01:56:00.984410 systemd[1]: Started cri-containerd-e9a88b645d5b919f23bc35ac707b926d88624ca18e185a548935c9f15e8c662a.scope. Sep 6 01:56:00.990225 kubelet[1605]: E0906 01:56:00.990168 1605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrxph.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.198:6443: connect: connection refused" interval="1.6s" Sep 6 01:56:01.110613 env[1200]: time="2025-09-06T01:56:01.110492298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jrxph.gb1.brightbox.com,Uid:d6cc5f22e5c5504a51b93994cb29d7e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6a5fbcd63cd6f7ce439033dee034c08bc7abda723b0ac772faf146058dbccc3\"" Sep 6 01:56:01.120700 env[1200]: time="2025-09-06T01:56:01.120216639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jrxph.gb1.brightbox.com,Uid:f7681bd196c82e88e753c2be93d0ee7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cf83e642169992050e6657f4c6811288c8c2813f955fa7e30a71397fb534a0f\"" Sep 6 01:56:01.122449 env[1200]: time="2025-09-06T01:56:01.122399195Z" level=info msg="CreateContainer within sandbox \"e6a5fbcd63cd6f7ce439033dee034c08bc7abda723b0ac772faf146058dbccc3\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 01:56:01.128016 env[1200]: time="2025-09-06T01:56:01.127972520Z" level=info msg="CreateContainer within sandbox \"5cf83e642169992050e6657f4c6811288c8c2813f955fa7e30a71397fb534a0f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 01:56:01.151490 env[1200]: time="2025-09-06T01:56:01.151443199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jrxph.gb1.brightbox.com,Uid:4c5d57f09af8aeeba4f6e1c52b1579a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9a88b645d5b919f23bc35ac707b926d88624ca18e185a548935c9f15e8c662a\"" Sep 6 01:56:01.152280 env[1200]: time="2025-09-06T01:56:01.150651021Z" level=info msg="CreateContainer within sandbox \"e6a5fbcd63cd6f7ce439033dee034c08bc7abda723b0ac772faf146058dbccc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a255494973c1e88af124c44853e89269be9f638d17c66f7e8d9a81b4748ec4cb\"" Sep 6 01:56:01.154603 env[1200]: time="2025-09-06T01:56:01.154561686Z" level=info msg="StartContainer for \"a255494973c1e88af124c44853e89269be9f638d17c66f7e8d9a81b4748ec4cb\"" Sep 6 01:56:01.156394 env[1200]: time="2025-09-06T01:56:01.156346906Z" level=info msg="CreateContainer within sandbox \"e9a88b645d5b919f23bc35ac707b926d88624ca18e185a548935c9f15e8c662a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 01:56:01.159444 env[1200]: time="2025-09-06T01:56:01.159374010Z" level=info msg="CreateContainer within sandbox \"5cf83e642169992050e6657f4c6811288c8c2813f955fa7e30a71397fb534a0f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf08706fcd815cb4d5b50a677461fa0115e15d2ddc79e0b41a24088174de8cba\"" Sep 6 01:56:01.161179 kubelet[1605]: W0906 01:56:01.160992 1605 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.244.19.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.198:6443: connect: connection refused Sep 6 01:56:01.161179 kubelet[1605]: E0906 01:56:01.161129 1605 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.19.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:56:01.161371 env[1200]: time="2025-09-06T01:56:01.161197498Z" level=info msg="StartContainer for \"cf08706fcd815cb4d5b50a677461fa0115e15d2ddc79e0b41a24088174de8cba\"" Sep 6 01:56:01.175236 kubelet[1605]: I0906 01:56:01.174622 1605 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:01.175236 kubelet[1605]: E0906 01:56:01.175172 1605 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.198:6443/api/v1/nodes\": dial tcp 10.244.19.198:6443: connect: connection refused" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:01.177412 env[1200]: time="2025-09-06T01:56:01.177363256Z" level=info msg="CreateContainer within sandbox \"e9a88b645d5b919f23bc35ac707b926d88624ca18e185a548935c9f15e8c662a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5553edd48bde6b6e50b3997f4ecac4253eff7f66b90d78c679e44b19c2c5bde1\"" Sep 6 01:56:01.179337 env[1200]: time="2025-09-06T01:56:01.179284432Z" level=info msg="StartContainer for \"5553edd48bde6b6e50b3997f4ecac4253eff7f66b90d78c679e44b19c2c5bde1\"" Sep 6 01:56:01.193116 systemd[1]: Started cri-containerd-a255494973c1e88af124c44853e89269be9f638d17c66f7e8d9a81b4748ec4cb.scope. Sep 6 01:56:01.201559 systemd[1]: Started cri-containerd-cf08706fcd815cb4d5b50a677461fa0115e15d2ddc79e0b41a24088174de8cba.scope. 
Sep 6 01:56:01.236163 systemd[1]: Started cri-containerd-5553edd48bde6b6e50b3997f4ecac4253eff7f66b90d78c679e44b19c2c5bde1.scope. Sep 6 01:56:01.334176 env[1200]: time="2025-09-06T01:56:01.334081114Z" level=info msg="StartContainer for \"5553edd48bde6b6e50b3997f4ecac4253eff7f66b90d78c679e44b19c2c5bde1\" returns successfully" Sep 6 01:56:01.342636 env[1200]: time="2025-09-06T01:56:01.342579958Z" level=info msg="StartContainer for \"cf08706fcd815cb4d5b50a677461fa0115e15d2ddc79e0b41a24088174de8cba\" returns successfully" Sep 6 01:56:01.360310 env[1200]: time="2025-09-06T01:56:01.360233761Z" level=info msg="StartContainer for \"a255494973c1e88af124c44853e89269be9f638d17c66f7e8d9a81b4748ec4cb\" returns successfully" Sep 6 01:56:01.556319 kubelet[1605]: E0906 01:56:01.556073 1605 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.19.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.19.198:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:56:01.662864 kubelet[1605]: E0906 01:56:01.662815 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:01.668258 kubelet[1605]: E0906 01:56:01.668091 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:01.679120 kubelet[1605]: E0906 01:56:01.677696 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:02.210179 kubelet[1605]: E0906 01:56:02.209926 1605 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.244.19.198:6443/api/v1/namespaces/default/events\": dial tcp 10.244.19.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jrxph.gb1.brightbox.com.18628ebfeaa861a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jrxph.gb1.brightbox.com,UID:srv-jrxph.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jrxph.gb1.brightbox.com,},FirstTimestamp:2025-09-06 01:55:59.554716072 +0000 UTC m=+0.683238456,LastTimestamp:2025-09-06 01:55:59.554716072 +0000 UTC m=+0.683238456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jrxph.gb1.brightbox.com,}" Sep 6 01:56:02.679495 kubelet[1605]: E0906 01:56:02.679447 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:02.680376 kubelet[1605]: E0906 01:56:02.679749 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:02.680678 kubelet[1605]: E0906 01:56:02.680124 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:02.778168 kubelet[1605]: I0906 01:56:02.778131 1605 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:03.681275 kubelet[1605]: E0906 01:56:03.681236 1605 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.320353 kubelet[1605]: E0906 01:56:04.320258 1605 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-jrxph.gb1.brightbox.com\" not found" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.464769 kubelet[1605]: I0906 01:56:04.464688 1605 kubelet_node_status.go:78] "Successfully registered node" node="srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.464769 kubelet[1605]: E0906 01:56:04.464753 1605 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-jrxph.gb1.brightbox.com\": node \"srv-jrxph.gb1.brightbox.com\" not found" Sep 6 01:56:04.485255 kubelet[1605]: I0906 01:56:04.485209 1605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.505562 kubelet[1605]: E0906 01:56:04.505499 1605 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.505864 kubelet[1605]: I0906 01:56:04.505838 1605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.509639 kubelet[1605]: E0906 01:56:04.509567 1605 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.509639 kubelet[1605]: I0906 01:56:04.509601 1605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.512934 kubelet[1605]: E0906 01:56:04.512882 1605 kubelet.go:3196] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-srv-jrxph.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jrxph.gb1.brightbox.com" Sep 6 01:56:04.549578 kubelet[1605]: I0906 01:56:04.549530 1605 apiserver.go:52] "Watching apiserver" Sep 6 01:56:04.585531 kubelet[1605]: I0906 01:56:04.585321 1605 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 01:56:06.382018 systemd[1]: Reloading. Sep 6 01:56:06.499652 /usr/lib/systemd/system-generators/torcx-generator[1897]: time="2025-09-06T01:56:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:56:06.499716 /usr/lib/systemd/system-generators/torcx-generator[1897]: time="2025-09-06T01:56:06Z" level=info msg="torcx already run" Sep 6 01:56:06.637348 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:56:06.638054 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:56:06.668076 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:56:06.843339 systemd[1]: Stopping kubelet.service... Sep 6 01:56:06.863389 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:56:06.864170 systemd[1]: Stopped kubelet.service. Sep 6 01:56:06.864308 systemd[1]: kubelet.service: Consumed 1.219s CPU time. Sep 6 01:56:06.867550 systemd[1]: Starting kubelet.service... Sep 6 01:56:08.242989 systemd[1]: Started kubelet.service. 
Sep 6 01:56:08.401329 kubelet[1948]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:56:08.402093 kubelet[1948]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 01:56:08.402246 kubelet[1948]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:56:08.402618 kubelet[1948]: I0906 01:56:08.402559 1948 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:56:08.437567 kubelet[1948]: I0906 01:56:08.436623 1948 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 01:56:08.438782 kubelet[1948]: I0906 01:56:08.437994 1948 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:56:08.441453 kubelet[1948]: I0906 01:56:08.440336 1948 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 01:56:08.450293 kubelet[1948]: I0906 01:56:08.450235 1948 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 01:56:08.452137 sudo[1963]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 01:56:08.452671 sudo[1963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 01:56:08.461334 kubelet[1948]: I0906 01:56:08.461272 1948 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:56:08.499046 kubelet[1948]: E0906 01:56:08.498891 1948 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:56:08.499328 kubelet[1948]: I0906 01:56:08.499287 1948 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:56:08.511934 kubelet[1948]: I0906 01:56:08.511885 1948 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:56:08.512737 kubelet[1948]: I0906 01:56:08.512668 1948 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:56:08.513274 kubelet[1948]: I0906 01:56:08.512859 1948 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jrxph.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 01:56:08.514631 kubelet[1948]: I0906 01:56:08.514601 1948 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 6 01:56:08.514814 kubelet[1948]: I0906 01:56:08.514789 1948 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 01:56:08.515089 kubelet[1948]: I0906 01:56:08.515064 1948 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:56:08.519085 kubelet[1948]: I0906 01:56:08.519049 1948 kubelet.go:446] "Attempting to sync node with API server" Sep 6 01:56:08.520245 kubelet[1948]: I0906 01:56:08.520219 1948 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:56:08.520594 kubelet[1948]: I0906 01:56:08.520561 1948 kubelet.go:352] "Adding apiserver pod source" Sep 6 01:56:08.528784 kubelet[1948]: I0906 01:56:08.528729 1948 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:56:08.541206 kubelet[1948]: I0906 01:56:08.541160 1948 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:56:08.542243 kubelet[1948]: I0906 01:56:08.542206 1948 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:56:08.543499 kubelet[1948]: I0906 01:56:08.543475 1948 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 01:56:08.543713 kubelet[1948]: I0906 01:56:08.543689 1948 server.go:1287] "Started kubelet" Sep 6 01:56:08.564699 kubelet[1948]: I0906 01:56:08.564655 1948 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:56:08.577237 kubelet[1948]: I0906 01:56:08.577155 1948 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:56:08.580307 kubelet[1948]: I0906 01:56:08.580278 1948 server.go:479] "Adding debug handlers to kubelet server" Sep 6 01:56:08.582056 kubelet[1948]: I0906 01:56:08.581978 1948 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:56:08.607363 kubelet[1948]: I0906 01:56:08.607323 1948 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:56:08.607646 kubelet[1948]: I0906 01:56:08.584651 1948 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:56:08.613072 kubelet[1948]: I0906 01:56:08.613038 1948 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 01:56:08.616945 kubelet[1948]: I0906 01:56:08.614214 1948 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 01:56:08.618509 kubelet[1948]: I0906 01:56:08.617378 1948 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:56:08.618509 kubelet[1948]: I0906 01:56:08.618196 1948 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:56:08.618509 kubelet[1948]: I0906 01:56:08.618327 1948 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:56:08.624392 kubelet[1948]: E0906 01:56:08.624354 1948 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:56:08.630248 kubelet[1948]: I0906 01:56:08.630216 1948 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:56:08.653523 kubelet[1948]: I0906 01:56:08.653455 1948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:56:08.663414 kubelet[1948]: I0906 01:56:08.660190 1948 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 01:56:08.663414 kubelet[1948]: I0906 01:56:08.660264 1948 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 01:56:08.663414 kubelet[1948]: I0906 01:56:08.660308 1948 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 01:56:08.663414 kubelet[1948]: I0906 01:56:08.660330 1948 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 01:56:08.663414 kubelet[1948]: E0906 01:56:08.660419 1948 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:56:08.752074 kubelet[1948]: I0906 01:56:08.751958 1948 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 01:56:08.752423 kubelet[1948]: I0906 01:56:08.752394 1948 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 01:56:08.752705 kubelet[1948]: I0906 01:56:08.752680 1948 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:56:08.753171 kubelet[1948]: I0906 01:56:08.753144 1948 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 01:56:08.753361 kubelet[1948]: I0906 01:56:08.753307 1948 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 01:56:08.753506 kubelet[1948]: I0906 01:56:08.753483 1948 policy_none.go:49] "None policy: Start" Sep 6 01:56:08.753662 kubelet[1948]: I0906 01:56:08.753638 1948 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 01:56:08.753816 kubelet[1948]: I0906 01:56:08.753793 1948 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:56:08.754174 kubelet[1948]: I0906 01:56:08.754150 1948 state_mem.go:75] "Updated machine memory state" Sep 6 01:56:08.762512 kubelet[1948]: E0906 01:56:08.762339 1948 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 01:56:08.767614 kubelet[1948]: I0906 01:56:08.767586 1948 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 01:56:08.768736 kubelet[1948]: I0906 01:56:08.768710 1948 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 01:56:08.768923 kubelet[1948]: I0906 01:56:08.768848 1948 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 01:56:08.771041 kubelet[1948]: I0906 01:56:08.771016 1948 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 01:56:08.776274 kubelet[1948]: E0906 01:56:08.776231 1948 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 01:56:08.917678 kubelet[1948]: I0906 01:56:08.917631 1948 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:08.927400 kubelet[1948]: I0906 01:56:08.927352 1948 kubelet_node_status.go:124] "Node was previously registered" node="srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:08.927592 kubelet[1948]: I0906 01:56:08.927467 1948 kubelet_node_status.go:78] "Successfully registered node" node="srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:08.965010 kubelet[1948]: I0906 01:56:08.964919 1948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:08.965489 kubelet[1948]: I0906 01:56:08.965461 1948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:08.965830 kubelet[1948]: I0906 01:56:08.965601 1948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:08.972511 kubelet[1948]: W0906 01:56:08.972474 1948 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:56:08.976715 kubelet[1948]: W0906 01:56:08.976684 1948 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:56:08.978030 kubelet[1948]: W0906 01:56:08.977997 1948 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:56:09.124052 kubelet[1948]: I0906 01:56:09.123972 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-k8s-certs\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.124455 kubelet[1948]: I0906 01:56:09.124393 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.124775 kubelet[1948]: I0906 01:56:09.124724 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c5d57f09af8aeeba4f6e1c52b1579a4-kubeconfig\") pod \"kube-scheduler-srv-jrxph.gb1.brightbox.com\" (UID: \"4c5d57f09af8aeeba4f6e1c52b1579a4\") " pod="kube-system/kube-scheduler-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.124977 kubelet[1948]: I0906 01:56:09.124951 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7681bd196c82e88e753c2be93d0ee7b-ca-certs\") pod \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" (UID: \"f7681bd196c82e88e753c2be93d0ee7b\") " pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.125173 kubelet[1948]: I0906 01:56:09.125125 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7681bd196c82e88e753c2be93d0ee7b-k8s-certs\") pod \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" (UID: \"f7681bd196c82e88e753c2be93d0ee7b\") " pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.125384 kubelet[1948]: I0906 01:56:09.125345 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7681bd196c82e88e753c2be93d0ee7b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jrxph.gb1.brightbox.com\" (UID: \"f7681bd196c82e88e753c2be93d0ee7b\") " pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.125562 kubelet[1948]: I0906 01:56:09.125524 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-flexvolume-dir\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.125746 kubelet[1948]: I0906 01:56:09.125708 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-kubeconfig\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.125949 kubelet[1948]: I0906 01:56:09.125911 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6cc5f22e5c5504a51b93994cb29d7e0-ca-certs\") pod \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" (UID: \"d6cc5f22e5c5504a51b93994cb29d7e0\") " pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.327957 sudo[1963]: pam_unix(sudo:session): session closed for user root
Sep 6 01:56:09.536460 kubelet[1948]: I0906 01:56:09.536277 1948 apiserver.go:52] "Watching apiserver"
Sep 6 01:56:09.618507 kubelet[1948]: I0906 01:56:09.618434 1948 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 6 01:56:09.681003 kubelet[1948]: I0906 01:56:09.680949 1948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.702108 kubelet[1948]: W0906 01:56:09.702046 1948 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:56:09.702370 kubelet[1948]: E0906 01:56:09.702158 1948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jrxph.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com"
Sep 6 01:56:09.729954 kubelet[1948]: I0906 01:56:09.729823 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-jrxph.gb1.brightbox.com" podStartSLOduration=1.729788501 podStartE2EDuration="1.729788501s" podCreationTimestamp="2025-09-06 01:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:56:09.715401436 +0000 UTC m=+1.444767134" watchObservedRunningTime="2025-09-06 01:56:09.729788501 +0000 UTC m=+1.459154204"
Sep 6 01:56:09.745016 kubelet[1948]: I0906 01:56:09.744936 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-jrxph.gb1.brightbox.com" podStartSLOduration=1.7448925659999999 podStartE2EDuration="1.744892566s" podCreationTimestamp="2025-09-06 01:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:56:09.731301221 +0000 UTC m=+1.460666935" watchObservedRunningTime="2025-09-06 01:56:09.744892566 +0000 UTC m=+1.474258285"
Sep 6 01:56:09.774540 kubelet[1948]: I0906 01:56:09.774457 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-jrxph.gb1.brightbox.com" podStartSLOduration=1.77441603 podStartE2EDuration="1.77441603s" podCreationTimestamp="2025-09-06 01:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:56:09.748801495 +0000 UTC m=+1.478167205" watchObservedRunningTime="2025-09-06 01:56:09.77441603 +0000 UTC m=+1.503781749"
Sep 6 01:56:11.484947 sudo[1325]: pam_unix(sudo:session): session closed for user root
Sep 6 01:56:11.643699 sshd[1322]: pam_unix(sshd:session): session closed for user core
Sep 6 01:56:11.649465 systemd[1]: sshd@4-10.244.19.198:22-139.178.89.65:45670.service: Deactivated successfully.
Sep 6 01:56:11.651893 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 01:56:11.652336 systemd[1]: session-5.scope: Consumed 6.806s CPU time.
Sep 6 01:56:11.653494 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit.
Sep 6 01:56:11.656357 systemd-logind[1189]: Removed session 5.
Sep 6 01:56:12.914886 kubelet[1948]: I0906 01:56:12.914842 1948 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 6 01:56:12.916889 env[1200]: time="2025-09-06T01:56:12.916786263Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 6 01:56:12.917809 kubelet[1948]: I0906 01:56:12.917776 1948 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 6 01:56:13.525753 systemd[1]: Created slice kubepods-burstable-pod7a4eefa1_acd7_400f_b803_d1074eec62ec.slice.
Sep 6 01:56:13.543766 systemd[1]: Created slice kubepods-besteffort-pod17a5e41e_f8a1_40c4_bfc7_98a3d2b09a05.slice.
Sep 6 01:56:13.555647 kubelet[1948]: I0906 01:56:13.555572 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-config-path\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.555959 kubelet[1948]: I0906 01:56:13.555920 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-run\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.556171 kubelet[1948]: I0906 01:56:13.556128 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-etc-cni-netd\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.556385 kubelet[1948]: I0906 01:56:13.556348 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-lib-modules\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.556550 kubelet[1948]: I0906 01:56:13.556522 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-kernel\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.556732 kubelet[1948]: I0906 01:56:13.556704 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-bpf-maps\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.556940 kubelet[1948]: I0906 01:56:13.556867 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-hostproc\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.557123 kubelet[1948]: I0906 01:56:13.557080 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05-kube-proxy\") pod \"kube-proxy-kkbsg\" (UID: \"17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05\") " pod="kube-system/kube-proxy-kkbsg"
Sep 6 01:56:13.557301 kubelet[1948]: I0906 01:56:13.557273 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05-lib-modules\") pod \"kube-proxy-kkbsg\" (UID: \"17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05\") " pod="kube-system/kube-proxy-kkbsg"
Sep 6 01:56:13.557525 kubelet[1948]: I0906 01:56:13.557497 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-xtables-lock\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.557694 kubelet[1948]: I0906 01:56:13.557664 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlb5s\" (UniqueName: \"kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-kube-api-access-mlb5s\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.557869 kubelet[1948]: I0906 01:56:13.557839 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-hubble-tls\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.558078 kubelet[1948]: I0906 01:56:13.558040 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-cgroup\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.558300 kubelet[1948]: I0906 01:56:13.558254 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a4eefa1-acd7-400f-b803-d1074eec62ec-clustermesh-secrets\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.558501 kubelet[1948]: I0906 01:56:13.558457 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wq4\" (UniqueName: \"kubernetes.io/projected/17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05-kube-api-access-p4wq4\") pod \"kube-proxy-kkbsg\" (UID: \"17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05\") " pod="kube-system/kube-proxy-kkbsg"
Sep 6 01:56:13.558692 kubelet[1948]: I0906 01:56:13.558655 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cni-path\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.558864 kubelet[1948]: I0906 01:56:13.558826 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-net\") pod \"cilium-vfpks\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") " pod="kube-system/cilium-vfpks"
Sep 6 01:56:13.559043 kubelet[1948]: I0906 01:56:13.559008 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05-xtables-lock\") pod \"kube-proxy-kkbsg\" (UID: \"17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05\") " pod="kube-system/kube-proxy-kkbsg"
Sep 6 01:56:13.663960 kubelet[1948]: I0906 01:56:13.663883 1948 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 01:56:13.836802 env[1200]: time="2025-09-06T01:56:13.836706819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfpks,Uid:7a4eefa1-acd7-400f-b803-d1074eec62ec,Namespace:kube-system,Attempt:0,}"
Sep 6 01:56:13.856005 env[1200]: time="2025-09-06T01:56:13.855944887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkbsg,Uid:17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05,Namespace:kube-system,Attempt:0,}"
Sep 6 01:56:13.880078 env[1200]: time="2025-09-06T01:56:13.878869393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:56:13.881228 env[1200]: time="2025-09-06T01:56:13.880478404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:56:13.881228 env[1200]: time="2025-09-06T01:56:13.880566229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:56:13.881228 env[1200]: time="2025-09-06T01:56:13.881117218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455 pid=2030 runtime=io.containerd.runc.v2
Sep 6 01:56:13.894071 env[1200]: time="2025-09-06T01:56:13.893956344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:56:13.894375 env[1200]: time="2025-09-06T01:56:13.894126435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:56:13.894375 env[1200]: time="2025-09-06T01:56:13.894206032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:56:13.894769 env[1200]: time="2025-09-06T01:56:13.894662283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b8e14f62f6255453f62cfa6de7c8e2aba3f6279060250d4a63446f8b589a8f0 pid=2048 runtime=io.containerd.runc.v2
Sep 6 01:56:13.925137 systemd[1]: Started cri-containerd-f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455.scope.
Sep 6 01:56:13.937737 systemd[1]: Started cri-containerd-2b8e14f62f6255453f62cfa6de7c8e2aba3f6279060250d4a63446f8b589a8f0.scope.
Sep 6 01:56:14.013616 env[1200]: time="2025-09-06T01:56:14.013538788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfpks,Uid:7a4eefa1-acd7-400f-b803-d1074eec62ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\""
Sep 6 01:56:14.018543 env[1200]: time="2025-09-06T01:56:14.018353017Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 01:56:14.027204 env[1200]: time="2025-09-06T01:56:14.027135560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkbsg,Uid:17a5e41e-f8a1-40c4-bfc7-98a3d2b09a05,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b8e14f62f6255453f62cfa6de7c8e2aba3f6279060250d4a63446f8b589a8f0\""
Sep 6 01:56:14.034835 systemd[1]: Created slice kubepods-besteffort-pod11bbf4f4_d10e_4105_8763_afb77a3c5fc0.slice.
Sep 6 01:56:14.040686 env[1200]: time="2025-09-06T01:56:14.038899571Z" level=info msg="CreateContainer within sandbox \"2b8e14f62f6255453f62cfa6de7c8e2aba3f6279060250d4a63446f8b589a8f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 01:56:14.055758 kubelet[1948]: I0906 01:56:14.055672 1948 status_manager.go:890] "Failed to get status for pod" podUID="11bbf4f4-d10e-4105-8763-afb77a3c5fc0" pod="kube-system/cilium-operator-6c4d7847fc-vxbkr" err="pods \"cilium-operator-6c4d7847fc-vxbkr\" is forbidden: User \"system:node:srv-jrxph.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jrxph.gb1.brightbox.com' and this object"
Sep 6 01:56:14.062675 kubelet[1948]: I0906 01:56:14.062052 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7hjx\" (UniqueName: \"kubernetes.io/projected/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-kube-api-access-h7hjx\") pod \"cilium-operator-6c4d7847fc-vxbkr\" (UID: \"11bbf4f4-d10e-4105-8763-afb77a3c5fc0\") " pod="kube-system/cilium-operator-6c4d7847fc-vxbkr"
Sep 6 01:56:14.062675 kubelet[1948]: I0906 01:56:14.062204 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vxbkr\" (UID: \"11bbf4f4-d10e-4105-8763-afb77a3c5fc0\") " pod="kube-system/cilium-operator-6c4d7847fc-vxbkr"
Sep 6 01:56:14.077655 env[1200]: time="2025-09-06T01:56:14.077562412Z" level=info msg="CreateContainer within sandbox \"2b8e14f62f6255453f62cfa6de7c8e2aba3f6279060250d4a63446f8b589a8f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2457fd4b12be6fddd1751e34fe3a2e6f8c486d8b5732b013be3aa17ada76f3c0\""
Sep 6 01:56:14.078875 env[1200]: time="2025-09-06T01:56:14.078819312Z" level=info msg="StartContainer for \"2457fd4b12be6fddd1751e34fe3a2e6f8c486d8b5732b013be3aa17ada76f3c0\""
Sep 6 01:56:14.122827 systemd[1]: Started cri-containerd-2457fd4b12be6fddd1751e34fe3a2e6f8c486d8b5732b013be3aa17ada76f3c0.scope.
Sep 6 01:56:14.253828 env[1200]: time="2025-09-06T01:56:14.252230941Z" level=info msg="StartContainer for \"2457fd4b12be6fddd1751e34fe3a2e6f8c486d8b5732b013be3aa17ada76f3c0\" returns successfully"
Sep 6 01:56:14.343605 env[1200]: time="2025-09-06T01:56:14.343501019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vxbkr,Uid:11bbf4f4-d10e-4105-8763-afb77a3c5fc0,Namespace:kube-system,Attempt:0,}"
Sep 6 01:56:14.383502 env[1200]: time="2025-09-06T01:56:14.382731286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:56:14.383502 env[1200]: time="2025-09-06T01:56:14.382788496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:56:14.383502 env[1200]: time="2025-09-06T01:56:14.382806406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:56:14.383792 env[1200]: time="2025-09-06T01:56:14.383086033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f pid=2147 runtime=io.containerd.runc.v2
Sep 6 01:56:14.418458 systemd[1]: Started cri-containerd-38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f.scope.
Sep 6 01:56:14.505394 env[1200]: time="2025-09-06T01:56:14.505332349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vxbkr,Uid:11bbf4f4-d10e-4105-8763-afb77a3c5fc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f\""
Sep 6 01:56:14.745623 kubelet[1948]: I0906 01:56:14.745412 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkbsg" podStartSLOduration=1.745378708 podStartE2EDuration="1.745378708s" podCreationTimestamp="2025-09-06 01:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:56:14.72621943 +0000 UTC m=+6.455585143" watchObservedRunningTime="2025-09-06 01:56:14.745378708 +0000 UTC m=+6.474744434"
Sep 6 01:56:21.832494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212584924.mount: Deactivated successfully.
Sep 6 01:56:26.618226 env[1200]: time="2025-09-06T01:56:26.618053094Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:56:26.621128 env[1200]: time="2025-09-06T01:56:26.621074827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:56:26.623891 env[1200]: time="2025-09-06T01:56:26.623853432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:56:26.625187 env[1200]: time="2025-09-06T01:56:26.625131379Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 6 01:56:26.629456 env[1200]: time="2025-09-06T01:56:26.629407817Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 01:56:26.631366 env[1200]: time="2025-09-06T01:56:26.631317064Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 01:56:26.657658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729126597.mount: Deactivated successfully.
Sep 6 01:56:26.666513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287268570.mount: Deactivated successfully.
Sep 6 01:56:26.670167 env[1200]: time="2025-09-06T01:56:26.670071515Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\""
Sep 6 01:56:26.673118 env[1200]: time="2025-09-06T01:56:26.673040150Z" level=info msg="StartContainer for \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\""
Sep 6 01:56:26.725880 systemd[1]: Started cri-containerd-238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79.scope.
Sep 6 01:56:26.836487 env[1200]: time="2025-09-06T01:56:26.836413747Z" level=info msg="StartContainer for \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\" returns successfully"
Sep 6 01:56:26.851885 systemd[1]: cri-containerd-238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79.scope: Deactivated successfully.
Sep 6 01:56:26.922936 env[1200]: time="2025-09-06T01:56:26.922623984Z" level=info msg="shim disconnected" id=238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79
Sep 6 01:56:26.923612 env[1200]: time="2025-09-06T01:56:26.923576426Z" level=warning msg="cleaning up after shim disconnected" id=238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79 namespace=k8s.io
Sep 6 01:56:26.923751 env[1200]: time="2025-09-06T01:56:26.923721171Z" level=info msg="cleaning up dead shim"
Sep 6 01:56:26.935463 env[1200]: time="2025-09-06T01:56:26.935409904Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:56:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2360 runtime=io.containerd.runc.v2\n"
Sep 6 01:56:27.651983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79-rootfs.mount: Deactivated successfully.
Sep 6 01:56:27.758270 env[1200]: time="2025-09-06T01:56:27.754396503Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 01:56:27.779636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654068869.mount: Deactivated successfully.
Sep 6 01:56:27.790732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021840432.mount: Deactivated successfully.
Sep 6 01:56:27.798997 env[1200]: time="2025-09-06T01:56:27.798923601Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\""
Sep 6 01:56:27.800582 env[1200]: time="2025-09-06T01:56:27.800543884Z" level=info msg="StartContainer for \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\""
Sep 6 01:56:27.834569 systemd[1]: Started cri-containerd-aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c.scope.
Sep 6 01:56:27.886853 env[1200]: time="2025-09-06T01:56:27.885160839Z" level=info msg="StartContainer for \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\" returns successfully"
Sep 6 01:56:27.910056 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 01:56:27.911774 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 01:56:27.912575 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 01:56:27.917621 systemd[1]: Starting systemd-sysctl.service...
Sep 6 01:56:27.918842 systemd[1]: cri-containerd-aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c.scope: Deactivated successfully.
Sep 6 01:56:27.943768 systemd[1]: Finished systemd-sysctl.service.
Sep 6 01:56:27.961316 env[1200]: time="2025-09-06T01:56:27.961252979Z" level=info msg="shim disconnected" id=aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c
Sep 6 01:56:27.961316 env[1200]: time="2025-09-06T01:56:27.961315988Z" level=warning msg="cleaning up after shim disconnected" id=aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c namespace=k8s.io
Sep 6 01:56:27.961627 env[1200]: time="2025-09-06T01:56:27.961334415Z" level=info msg="cleaning up dead shim"
Sep 6 01:56:27.971319 env[1200]: time="2025-09-06T01:56:27.971261352Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:56:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2424 runtime=io.containerd.runc.v2\n"
Sep 6 01:56:28.767792 env[1200]: time="2025-09-06T01:56:28.767502529Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 01:56:28.803846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429458762.mount: Deactivated successfully.
Sep 6 01:56:28.813402 env[1200]: time="2025-09-06T01:56:28.813223319Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\""
Sep 6 01:56:28.815472 env[1200]: time="2025-09-06T01:56:28.815433883Z" level=info msg="StartContainer for \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\""
Sep 6 01:56:28.865340 systemd[1]: Started cri-containerd-fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8.scope.
Sep 6 01:56:28.955531 env[1200]: time="2025-09-06T01:56:28.955429042Z" level=info msg="StartContainer for \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\" returns successfully"
Sep 6 01:56:28.964646 systemd[1]: cri-containerd-fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8.scope: Deactivated successfully.
Sep 6 01:56:29.068332 env[1200]: time="2025-09-06T01:56:29.067755879Z" level=info msg="shim disconnected" id=fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8
Sep 6 01:56:29.068332 env[1200]: time="2025-09-06T01:56:29.067833258Z" level=warning msg="cleaning up after shim disconnected" id=fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8 namespace=k8s.io
Sep 6 01:56:29.068332 env[1200]: time="2025-09-06T01:56:29.067856075Z" level=info msg="cleaning up dead shim"
Sep 6 01:56:29.087063 env[1200]: time="2025-09-06T01:56:29.086159500Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:56:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2483 runtime=io.containerd.runc.v2\n"
Sep 6 01:56:29.644339 env[1200]: time="2025-09-06T01:56:29.644277030Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:56:29.652329 env[1200]: time="2025-09-06T01:56:29.652087420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:56:29.656049 env[1200]: time="2025-09-06T01:56:29.655186515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:56:29.656369 env[1200]: time="2025-09-06T01:56:29.656322266Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 6 01:56:29.660482 env[1200]: time="2025-09-06T01:56:29.660429577Z" level=info msg="CreateContainer within sandbox \"38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 01:56:29.676418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351772663.mount: Deactivated successfully.
Sep 6 01:56:29.690628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354650383.mount: Deactivated successfully.
Sep 6 01:56:29.710202 env[1200]: time="2025-09-06T01:56:29.710011312Z" level=info msg="CreateContainer within sandbox \"38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\""
Sep 6 01:56:29.710864 env[1200]: time="2025-09-06T01:56:29.710817511Z" level=info msg="StartContainer for \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\""
Sep 6 01:56:29.738032 systemd[1]: Started cri-containerd-96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218.scope.
Sep 6 01:56:29.819500 env[1200]: time="2025-09-06T01:56:29.819363035Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 01:56:29.857520 env[1200]: time="2025-09-06T01:56:29.857462200Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\""
Sep 6 01:56:29.859476 env[1200]: time="2025-09-06T01:56:29.859410291Z" level=info msg="StartContainer for \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\""
Sep 6 01:56:29.874043 env[1200]: time="2025-09-06T01:56:29.873952367Z" level=info msg="StartContainer for \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\" returns successfully"
Sep 6 01:56:29.901838 systemd[1]: Started cri-containerd-34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8.scope.
Sep 6 01:56:29.953036 env[1200]: time="2025-09-06T01:56:29.952970307Z" level=info msg="StartContainer for \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\" returns successfully"
Sep 6 01:56:29.957918 systemd[1]: cri-containerd-34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8.scope: Deactivated successfully.
Sep 6 01:56:30.013336 env[1200]: time="2025-09-06T01:56:30.013271078Z" level=info msg="shim disconnected" id=34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8 Sep 6 01:56:30.013664 env[1200]: time="2025-09-06T01:56:30.013620534Z" level=warning msg="cleaning up after shim disconnected" id=34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8 namespace=k8s.io Sep 6 01:56:30.013819 env[1200]: time="2025-09-06T01:56:30.013789505Z" level=info msg="cleaning up dead shim" Sep 6 01:56:30.028440 env[1200]: time="2025-09-06T01:56:30.028372774Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:56:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Sep 6 01:56:30.792384 env[1200]: time="2025-09-06T01:56:30.792320132Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:56:30.817133 env[1200]: time="2025-09-06T01:56:30.817056500Z" level=info msg="CreateContainer within sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\"" Sep 6 01:56:30.818202 env[1200]: time="2025-09-06T01:56:30.817738484Z" level=info msg="StartContainer for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\"" Sep 6 01:56:30.859334 systemd[1]: Started cri-containerd-d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4.scope. 
Sep 6 01:56:31.073764 env[1200]: time="2025-09-06T01:56:31.073702831Z" level=info msg="StartContainer for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" returns successfully" Sep 6 01:56:31.144686 kubelet[1948]: I0906 01:56:31.144573 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vxbkr" podStartSLOduration=2.994167565 podStartE2EDuration="18.14450149s" podCreationTimestamp="2025-09-06 01:56:13 +0000 UTC" firstStartedPulling="2025-09-06 01:56:14.507371548 +0000 UTC m=+6.236737256" lastFinishedPulling="2025-09-06 01:56:29.657705479 +0000 UTC m=+21.387071181" observedRunningTime="2025-09-06 01:56:31.039219015 +0000 UTC m=+22.768584725" watchObservedRunningTime="2025-09-06 01:56:31.14450149 +0000 UTC m=+22.873867212" Sep 6 01:56:31.457697 kubelet[1948]: I0906 01:56:31.455960 1948 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 01:56:31.554665 systemd[1]: Created slice kubepods-burstable-podf53d1447_a3a6_44d1_a305_f3c60c4830e0.slice. 
Sep 6 01:56:31.557396 kubelet[1948]: I0906 01:56:31.557351 1948 status_manager.go:890] "Failed to get status for pod" podUID="f53d1447-a3a6-44d1-a305-f3c60c4830e0" pod="kube-system/coredns-668d6bf9bc-fklcx" err="pods \"coredns-668d6bf9bc-fklcx\" is forbidden: User \"system:node:srv-jrxph.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jrxph.gb1.brightbox.com' and this object" Sep 6 01:56:31.557931 kubelet[1948]: W0906 01:56:31.557852 1948 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-jrxph.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-jrxph.gb1.brightbox.com' and this object Sep 6 01:56:31.558340 kubelet[1948]: E0906 01:56:31.558164 1948 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-jrxph.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jrxph.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 6 01:56:31.568872 systemd[1]: Created slice kubepods-burstable-poda23a852f_4454_4382_9e48_15c67cdcf8f8.slice. 
Sep 6 01:56:31.723675 kubelet[1948]: I0906 01:56:31.723505 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh75\" (UniqueName: \"kubernetes.io/projected/f53d1447-a3a6-44d1-a305-f3c60c4830e0-kube-api-access-sbh75\") pod \"coredns-668d6bf9bc-fklcx\" (UID: \"f53d1447-a3a6-44d1-a305-f3c60c4830e0\") " pod="kube-system/coredns-668d6bf9bc-fklcx" Sep 6 01:56:31.723675 kubelet[1948]: I0906 01:56:31.723574 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhtp\" (UniqueName: \"kubernetes.io/projected/a23a852f-4454-4382-9e48-15c67cdcf8f8-kube-api-access-vnhtp\") pod \"coredns-668d6bf9bc-jfn68\" (UID: \"a23a852f-4454-4382-9e48-15c67cdcf8f8\") " pod="kube-system/coredns-668d6bf9bc-jfn68" Sep 6 01:56:31.723675 kubelet[1948]: I0906 01:56:31.723611 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53d1447-a3a6-44d1-a305-f3c60c4830e0-config-volume\") pod \"coredns-668d6bf9bc-fklcx\" (UID: \"f53d1447-a3a6-44d1-a305-f3c60c4830e0\") " pod="kube-system/coredns-668d6bf9bc-fklcx" Sep 6 01:56:31.723675 kubelet[1948]: I0906 01:56:31.723641 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23a852f-4454-4382-9e48-15c67cdcf8f8-config-volume\") pod \"coredns-668d6bf9bc-jfn68\" (UID: \"a23a852f-4454-4382-9e48-15c67cdcf8f8\") " pod="kube-system/coredns-668d6bf9bc-jfn68" Sep 6 01:56:31.817032 kubelet[1948]: I0906 01:56:31.816941 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vfpks" podStartSLOduration=6.20682234 podStartE2EDuration="18.816917535s" podCreationTimestamp="2025-09-06 01:56:13 +0000 UTC" firstStartedPulling="2025-09-06 01:56:14.017558251 +0000 UTC m=+5.746923953" 
lastFinishedPulling="2025-09-06 01:56:26.627653438 +0000 UTC m=+18.357019148" observedRunningTime="2025-09-06 01:56:31.815461753 +0000 UTC m=+23.544827498" watchObservedRunningTime="2025-09-06 01:56:31.816917535 +0000 UTC m=+23.546283239" Sep 6 01:56:32.781571 env[1200]: time="2025-09-06T01:56:32.781382707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fklcx,Uid:f53d1447-a3a6-44d1-a305-f3c60c4830e0,Namespace:kube-system,Attempt:0,}" Sep 6 01:56:32.781571 env[1200]: time="2025-09-06T01:56:32.781439449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfn68,Uid:a23a852f-4454-4382-9e48-15c67cdcf8f8,Namespace:kube-system,Attempt:0,}" Sep 6 01:56:34.104579 systemd-networkd[1019]: cilium_host: Link UP Sep 6 01:56:34.104817 systemd-networkd[1019]: cilium_net: Link UP Sep 6 01:56:34.111361 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 01:56:34.111891 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 01:56:34.105512 systemd-networkd[1019]: cilium_net: Gained carrier Sep 6 01:56:34.110522 systemd-networkd[1019]: cilium_host: Gained carrier Sep 6 01:56:34.282604 systemd-networkd[1019]: cilium_vxlan: Link UP Sep 6 01:56:34.282619 systemd-networkd[1019]: cilium_vxlan: Gained carrier Sep 6 01:56:34.864152 kernel: NET: Registered PF_ALG protocol family Sep 6 01:56:34.886335 systemd-networkd[1019]: cilium_host: Gained IPv6LL Sep 6 01:56:35.077372 systemd-networkd[1019]: cilium_net: Gained IPv6LL Sep 6 01:56:36.014363 systemd-networkd[1019]: lxc_health: Link UP Sep 6 01:56:36.050824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:56:36.049729 systemd-networkd[1019]: lxc_health: Gained carrier Sep 6 01:56:36.112257 systemd-networkd[1019]: cilium_vxlan: Gained IPv6LL Sep 6 01:56:36.389252 systemd-networkd[1019]: lxc3a2c464a009f: Link UP Sep 6 01:56:36.400137 kernel: eth0: renamed from tmp4937c Sep 6 01:56:36.406287 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc3a2c464a009f: link becomes ready Sep 6 01:56:36.405999 systemd-networkd[1019]: lxc3a2c464a009f: Gained carrier Sep 6 01:56:36.409892 systemd-networkd[1019]: lxc040e7bda4be2: Link UP Sep 6 01:56:36.420745 kernel: eth0: renamed from tmp81311 Sep 6 01:56:36.427649 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc040e7bda4be2: link becomes ready Sep 6 01:56:36.425850 systemd-networkd[1019]: lxc040e7bda4be2: Gained carrier Sep 6 01:56:37.765453 systemd-networkd[1019]: lxc040e7bda4be2: Gained IPv6LL Sep 6 01:56:37.957472 systemd-networkd[1019]: lxc_health: Gained IPv6LL Sep 6 01:56:38.021446 systemd-networkd[1019]: lxc3a2c464a009f: Gained IPv6LL Sep 6 01:56:42.120888 env[1200]: time="2025-09-06T01:56:42.120609079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:56:42.120888 env[1200]: time="2025-09-06T01:56:42.120789466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:56:42.121992 env[1200]: time="2025-09-06T01:56:42.120881878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:56:42.122439 env[1200]: time="2025-09-06T01:56:42.122314017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/813110a4e96cd2aad6330892361e8ae83a3c3a9d1d747b739294e8fbce19f566 pid=3126 runtime=io.containerd.runc.v2 Sep 6 01:56:42.158197 systemd[1]: Started cri-containerd-813110a4e96cd2aad6330892361e8ae83a3c3a9d1d747b739294e8fbce19f566.scope. Sep 6 01:56:42.168588 systemd[1]: run-containerd-runc-k8s.io-813110a4e96cd2aad6330892361e8ae83a3c3a9d1d747b739294e8fbce19f566-runc.FdUKyV.mount: Deactivated successfully. 
Sep 6 01:56:42.175885 env[1200]: time="2025-09-06T01:56:42.175753163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:56:42.176030 env[1200]: time="2025-09-06T01:56:42.175906235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:56:42.176030 env[1200]: time="2025-09-06T01:56:42.175981915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:56:42.176869 env[1200]: time="2025-09-06T01:56:42.176300425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4937c055b97d0ab962c729b99c0b7aa46a5abfad73f6d61fdcc90a2f3b191890 pid=3151 runtime=io.containerd.runc.v2 Sep 6 01:56:42.236803 systemd[1]: Started cri-containerd-4937c055b97d0ab962c729b99c0b7aa46a5abfad73f6d61fdcc90a2f3b191890.scope. 
Sep 6 01:56:42.333755 env[1200]: time="2025-09-06T01:56:42.333674949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fklcx,Uid:f53d1447-a3a6-44d1-a305-f3c60c4830e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"813110a4e96cd2aad6330892361e8ae83a3c3a9d1d747b739294e8fbce19f566\"" Sep 6 01:56:42.351542 env[1200]: time="2025-09-06T01:56:42.351478064Z" level=info msg="CreateContainer within sandbox \"813110a4e96cd2aad6330892361e8ae83a3c3a9d1d747b739294e8fbce19f566\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:56:42.375477 env[1200]: time="2025-09-06T01:56:42.374536361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfn68,Uid:a23a852f-4454-4382-9e48-15c67cdcf8f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4937c055b97d0ab962c729b99c0b7aa46a5abfad73f6d61fdcc90a2f3b191890\"" Sep 6 01:56:42.381888 env[1200]: time="2025-09-06T01:56:42.381813088Z" level=info msg="CreateContainer within sandbox \"813110a4e96cd2aad6330892361e8ae83a3c3a9d1d747b739294e8fbce19f566\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4651a0b46358edd5e6d6118e32fd358882e8b7f4168b0fbde6ecf708dcbcf95f\"" Sep 6 01:56:42.384383 env[1200]: time="2025-09-06T01:56:42.384345933Z" level=info msg="StartContainer for \"4651a0b46358edd5e6d6118e32fd358882e8b7f4168b0fbde6ecf708dcbcf95f\"" Sep 6 01:56:42.411521 env[1200]: time="2025-09-06T01:56:42.410591127Z" level=info msg="CreateContainer within sandbox \"4937c055b97d0ab962c729b99c0b7aa46a5abfad73f6d61fdcc90a2f3b191890\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:56:42.431313 env[1200]: time="2025-09-06T01:56:42.431254583Z" level=info msg="CreateContainer within sandbox \"4937c055b97d0ab962c729b99c0b7aa46a5abfad73f6d61fdcc90a2f3b191890\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f96bbfbae985407c013ff9576420ab55d6c2ad8525305b7fd03fb580c34327a4\"" Sep 6 01:56:42.436551 env[1200]: 
time="2025-09-06T01:56:42.434221222Z" level=info msg="StartContainer for \"f96bbfbae985407c013ff9576420ab55d6c2ad8525305b7fd03fb580c34327a4\"" Sep 6 01:56:42.440929 systemd[1]: Started cri-containerd-4651a0b46358edd5e6d6118e32fd358882e8b7f4168b0fbde6ecf708dcbcf95f.scope. Sep 6 01:56:42.483820 systemd[1]: Started cri-containerd-f96bbfbae985407c013ff9576420ab55d6c2ad8525305b7fd03fb580c34327a4.scope. Sep 6 01:56:42.535511 env[1200]: time="2025-09-06T01:56:42.535378456Z" level=info msg="StartContainer for \"4651a0b46358edd5e6d6118e32fd358882e8b7f4168b0fbde6ecf708dcbcf95f\" returns successfully" Sep 6 01:56:42.549643 env[1200]: time="2025-09-06T01:56:42.549591596Z" level=info msg="StartContainer for \"f96bbfbae985407c013ff9576420ab55d6c2ad8525305b7fd03fb580c34327a4\" returns successfully" Sep 6 01:56:42.865206 kubelet[1948]: I0906 01:56:42.865062 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fklcx" podStartSLOduration=28.864991552 podStartE2EDuration="28.864991552s" podCreationTimestamp="2025-09-06 01:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:56:42.845327536 +0000 UTC m=+34.574693245" watchObservedRunningTime="2025-09-06 01:56:42.864991552 +0000 UTC m=+34.594357272" Sep 6 01:56:43.136615 systemd[1]: run-containerd-runc-k8s.io-4937c055b97d0ab962c729b99c0b7aa46a5abfad73f6d61fdcc90a2f3b191890-runc.nH35CT.mount: Deactivated successfully. Sep 6 01:57:18.878767 systemd[1]: Started sshd@5-10.244.19.198:22-139.178.89.65:46982.service. Sep 6 01:57:19.812039 sshd[3287]: Accepted publickey for core from 139.178.89.65 port 46982 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:19.816968 sshd[3287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:19.827522 systemd[1]: Started session-6.scope. 
Sep 6 01:57:19.828400 systemd-logind[1189]: New session 6 of user core. Sep 6 01:57:20.624242 sshd[3287]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:20.628760 systemd[1]: sshd@5-10.244.19.198:22-139.178.89.65:46982.service: Deactivated successfully. Sep 6 01:57:20.629937 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:57:20.630882 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit. Sep 6 01:57:20.632294 systemd-logind[1189]: Removed session 6. Sep 6 01:57:25.775132 systemd[1]: Started sshd@6-10.244.19.198:22-139.178.89.65:45566.service. Sep 6 01:57:26.680878 sshd[3299]: Accepted publickey for core from 139.178.89.65 port 45566 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:26.683364 sshd[3299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:26.691662 systemd[1]: Started session-7.scope. Sep 6 01:57:26.692309 systemd-logind[1189]: New session 7 of user core. Sep 6 01:57:27.426927 sshd[3299]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:27.431214 systemd[1]: sshd@6-10.244.19.198:22-139.178.89.65:45566.service: Deactivated successfully. Sep 6 01:57:27.432637 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 01:57:27.433532 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:57:27.434829 systemd-logind[1189]: Removed session 7. Sep 6 01:57:32.585928 systemd[1]: Started sshd@7-10.244.19.198:22-139.178.89.65:55364.service. Sep 6 01:57:33.547364 sshd[3311]: Accepted publickey for core from 139.178.89.65 port 55364 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:33.549784 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:33.558780 systemd[1]: Started session-8.scope. Sep 6 01:57:33.560198 systemd-logind[1189]: New session 8 of user core. 
Sep 6 01:57:34.304376 sshd[3311]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:34.308834 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit. Sep 6 01:57:34.310497 systemd[1]: sshd@7-10.244.19.198:22-139.178.89.65:55364.service: Deactivated successfully. Sep 6 01:57:34.311488 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 01:57:34.312139 systemd-logind[1189]: Removed session 8. Sep 6 01:57:39.458015 systemd[1]: Started sshd@8-10.244.19.198:22-139.178.89.65:55374.service. Sep 6 01:57:40.363806 sshd[3324]: Accepted publickey for core from 139.178.89.65 port 55374 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:40.364753 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:40.373412 systemd[1]: Started session-9.scope. Sep 6 01:57:40.374669 systemd-logind[1189]: New session 9 of user core. Sep 6 01:57:41.139806 sshd[3324]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:41.144500 systemd[1]: sshd@8-10.244.19.198:22-139.178.89.65:55374.service: Deactivated successfully. Sep 6 01:57:41.145652 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 01:57:41.146809 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit. Sep 6 01:57:41.148277 systemd-logind[1189]: Removed session 9. Sep 6 01:57:41.290025 systemd[1]: Started sshd@9-10.244.19.198:22-139.178.89.65:55792.service. Sep 6 01:57:42.195751 sshd[3337]: Accepted publickey for core from 139.178.89.65 port 55792 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:42.197577 sshd[3337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:42.204569 systemd-logind[1189]: New session 10 of user core. Sep 6 01:57:42.205567 systemd[1]: Started session-10.scope. 
Sep 6 01:57:43.044693 sshd[3337]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:43.051254 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit. Sep 6 01:57:43.051608 systemd[1]: sshd@9-10.244.19.198:22-139.178.89.65:55792.service: Deactivated successfully. Sep 6 01:57:43.052634 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 01:57:43.054251 systemd-logind[1189]: Removed session 10. Sep 6 01:57:43.192712 systemd[1]: Started sshd@10-10.244.19.198:22-139.178.89.65:55808.service. Sep 6 01:57:44.101549 sshd[3349]: Accepted publickey for core from 139.178.89.65 port 55808 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:44.103749 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:44.110991 systemd-logind[1189]: New session 11 of user core. Sep 6 01:57:44.115695 systemd[1]: Started session-11.scope. Sep 6 01:57:44.860540 sshd[3349]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:44.864567 systemd[1]: sshd@10-10.244.19.198:22-139.178.89.65:55808.service: Deactivated successfully. Sep 6 01:57:44.865647 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 01:57:44.866845 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit. Sep 6 01:57:44.868687 systemd-logind[1189]: Removed session 11. Sep 6 01:57:50.012222 systemd[1]: Started sshd@11-10.244.19.198:22-139.178.89.65:55822.service. Sep 6 01:57:50.931950 sshd[3363]: Accepted publickey for core from 139.178.89.65 port 55822 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:50.934308 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:50.941961 systemd-logind[1189]: New session 12 of user core. Sep 6 01:57:50.943276 systemd[1]: Started session-12.scope. 
Sep 6 01:57:51.672722 sshd[3363]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:51.676768 systemd[1]: sshd@11-10.244.19.198:22-139.178.89.65:55822.service: Deactivated successfully. Sep 6 01:57:51.677849 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 01:57:51.678679 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit. Sep 6 01:57:51.679795 systemd-logind[1189]: Removed session 12. Sep 6 01:57:56.822761 systemd[1]: Started sshd@12-10.244.19.198:22-139.178.89.65:49440.service. Sep 6 01:57:57.734881 sshd[3376]: Accepted publickey for core from 139.178.89.65 port 49440 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:57.737382 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:57.747871 systemd[1]: Started session-13.scope. Sep 6 01:57:57.749011 systemd-logind[1189]: New session 13 of user core. Sep 6 01:57:58.458914 sshd[3376]: pam_unix(sshd:session): session closed for user core Sep 6 01:57:58.462952 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit. Sep 6 01:57:58.463324 systemd[1]: sshd@12-10.244.19.198:22-139.178.89.65:49440.service: Deactivated successfully. Sep 6 01:57:58.464318 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 01:57:58.465415 systemd-logind[1189]: Removed session 13. Sep 6 01:57:58.608137 systemd[1]: Started sshd@13-10.244.19.198:22-139.178.89.65:49448.service. Sep 6 01:57:59.504175 sshd[3388]: Accepted publickey for core from 139.178.89.65 port 49448 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:57:59.506621 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:57:59.515013 systemd-logind[1189]: New session 14 of user core. Sep 6 01:57:59.516364 systemd[1]: Started session-14.scope. 
Sep 6 01:58:00.968858 sshd[3388]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:00.973798 systemd[1]: sshd@13-10.244.19.198:22-139.178.89.65:49448.service: Deactivated successfully. Sep 6 01:58:00.975172 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 01:58:00.975989 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit. Sep 6 01:58:00.977276 systemd-logind[1189]: Removed session 14. Sep 6 01:58:01.116998 systemd[1]: Started sshd@14-10.244.19.198:22-139.178.89.65:48764.service. Sep 6 01:58:02.017327 sshd[3398]: Accepted publickey for core from 139.178.89.65 port 48764 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:02.020459 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:02.028044 systemd[1]: Started session-15.scope. Sep 6 01:58:02.028614 systemd-logind[1189]: New session 15 of user core. Sep 6 01:58:03.465323 sshd[3398]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:03.480848 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit. Sep 6 01:58:03.481221 systemd[1]: sshd@14-10.244.19.198:22-139.178.89.65:48764.service: Deactivated successfully. Sep 6 01:58:03.482302 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 01:58:03.483571 systemd-logind[1189]: Removed session 15. Sep 6 01:58:03.612488 systemd[1]: Started sshd@15-10.244.19.198:22-139.178.89.65:48766.service. Sep 6 01:58:04.527146 sshd[3417]: Accepted publickey for core from 139.178.89.65 port 48766 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:04.529344 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:04.537148 systemd-logind[1189]: New session 16 of user core. Sep 6 01:58:04.538228 systemd[1]: Started session-16.scope. 
Sep 6 01:58:05.534367 sshd[3417]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:05.540338 systemd[1]: sshd@15-10.244.19.198:22-139.178.89.65:48766.service: Deactivated successfully. Sep 6 01:58:05.541636 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 01:58:05.544451 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit. Sep 6 01:58:05.547532 systemd-logind[1189]: Removed session 16. Sep 6 01:58:05.691875 systemd[1]: Started sshd@16-10.244.19.198:22-139.178.89.65:48768.service. Sep 6 01:58:06.597466 sshd[3426]: Accepted publickey for core from 139.178.89.65 port 48768 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:06.600210 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:06.607009 systemd-logind[1189]: New session 17 of user core. Sep 6 01:58:06.608163 systemd[1]: Started session-17.scope. Sep 6 01:58:07.316013 sshd[3426]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:07.320199 systemd[1]: sshd@16-10.244.19.198:22-139.178.89.65:48768.service: Deactivated successfully. Sep 6 01:58:07.321693 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 01:58:07.323164 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit. Sep 6 01:58:07.324734 systemd-logind[1189]: Removed session 17. Sep 6 01:58:12.484928 systemd[1]: Started sshd@17-10.244.19.198:22-139.178.89.65:55326.service. Sep 6 01:58:13.441833 sshd[3441]: Accepted publickey for core from 139.178.89.65 port 55326 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:13.444277 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:13.452871 systemd-logind[1189]: New session 18 of user core. Sep 6 01:58:13.454207 systemd[1]: Started session-18.scope. 
Sep 6 01:58:14.200687 sshd[3441]: pam_unix(sshd:session): session closed for user core
Sep 6 01:58:14.205475 systemd[1]: sshd@17-10.244.19.198:22-139.178.89.65:55326.service: Deactivated successfully.
Sep 6 01:58:14.206845 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 01:58:14.207911 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit.
Sep 6 01:58:14.210169 systemd-logind[1189]: Removed session 18.
Sep 6 01:58:19.341475 systemd[1]: Started sshd@18-10.244.19.198:22-139.178.89.65:55330.service.
Sep 6 01:58:20.246831 sshd[3455]: Accepted publickey for core from 139.178.89.65 port 55330 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ
Sep 6 01:58:20.249476 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:58:20.257680 systemd-logind[1189]: New session 19 of user core.
Sep 6 01:58:20.259903 systemd[1]: Started session-19.scope.
Sep 6 01:58:20.963202 sshd[3455]: pam_unix(sshd:session): session closed for user core
Sep 6 01:58:20.967263 systemd[1]: sshd@18-10.244.19.198:22-139.178.89.65:55330.service: Deactivated successfully.
Sep 6 01:58:20.968344 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 01:58:20.969053 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit.
Sep 6 01:58:20.970289 systemd-logind[1189]: Removed session 19.
Sep 6 01:58:26.113270 systemd[1]: Started sshd@19-10.244.19.198:22-139.178.89.65:51126.service.
Sep 6 01:58:27.014989 sshd[3467]: Accepted publickey for core from 139.178.89.65 port 51126 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ
Sep 6 01:58:27.017151 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:58:27.026835 systemd-logind[1189]: New session 20 of user core.
Sep 6 01:58:27.027755 systemd[1]: Started session-20.scope.
Sep 6 01:58:27.732271 sshd[3467]: pam_unix(sshd:session): session closed for user core
Sep 6 01:58:27.736145 systemd[1]: sshd@19-10.244.19.198:22-139.178.89.65:51126.service: Deactivated successfully.
Sep 6 01:58:27.737194 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 01:58:27.738869 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit.
Sep 6 01:58:27.740348 systemd-logind[1189]: Removed session 20.
Sep 6 01:58:27.882007 systemd[1]: Started sshd@20-10.244.19.198:22-139.178.89.65:51138.service.
Sep 6 01:58:28.785758 sshd[3479]: Accepted publickey for core from 139.178.89.65 port 51138 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ
Sep 6 01:58:28.788518 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:58:28.796919 systemd-logind[1189]: New session 21 of user core.
Sep 6 01:58:28.798216 systemd[1]: Started session-21.scope.
Sep 6 01:58:31.874779 kubelet[1948]: I0906 01:58:31.874585 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jfn68" podStartSLOduration=138.874507575 podStartE2EDuration="2m18.874507575s" podCreationTimestamp="2025-09-06 01:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:56:42.893951325 +0000 UTC m=+34.623317032" watchObservedRunningTime="2025-09-06 01:58:31.874507575 +0000 UTC m=+143.603873282"
Sep 6 01:58:31.906436 env[1200]: time="2025-09-06T01:58:31.906293731Z" level=info msg="StopContainer for \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\" with timeout 30 (s)"
Sep 6 01:58:31.908065 env[1200]: time="2025-09-06T01:58:31.908026717Z" level=info msg="Stop container \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\" with signal terminated"
Sep 6 01:58:31.938406 systemd[1]: run-containerd-runc-k8s.io-d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4-runc.OwnC8y.mount: Deactivated successfully.
Sep 6 01:58:31.941261 systemd[1]: cri-containerd-96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218.scope: Deactivated successfully.
Sep 6 01:58:31.985288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218-rootfs.mount: Deactivated successfully.
Sep 6 01:58:31.992202 env[1200]: time="2025-09-06T01:58:31.992120336Z" level=info msg="shim disconnected" id=96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218
Sep 6 01:58:31.992480 env[1200]: time="2025-09-06T01:58:31.992208882Z" level=warning msg="cleaning up after shim disconnected" id=96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218 namespace=k8s.io
Sep 6 01:58:31.992480 env[1200]: time="2025-09-06T01:58:31.992238285Z" level=info msg="cleaning up dead shim"
Sep 6 01:58:32.006580 env[1200]: time="2025-09-06T01:58:32.006472544Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 01:58:32.012278 env[1200]: time="2025-09-06T01:58:32.012205619Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3522 runtime=io.containerd.runc.v2\n"
Sep 6 01:58:32.018891 env[1200]: time="2025-09-06T01:58:32.018834724Z" level=info msg="StopContainer for \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\" returns successfully"
Sep 6 01:58:32.019372 env[1200]: time="2025-09-06T01:58:32.019333561Z" level=info msg="StopContainer for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" with timeout 2 (s)"
Sep 6 01:58:32.022333 env[1200]: time="2025-09-06T01:58:32.022252111Z" level=info msg="Stop container \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" with signal terminated"
Sep 6 01:58:32.024002 env[1200]: time="2025-09-06T01:58:32.023957742Z" level=info msg="StopPodSandbox for \"38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f\""
Sep 6 01:58:32.024306 env[1200]: time="2025-09-06T01:58:32.024265049Z" level=info msg="Container to stop \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:58:32.029841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f-shm.mount: Deactivated successfully.
Sep 6 01:58:32.040170 systemd-networkd[1019]: lxc_health: Link DOWN
Sep 6 01:58:32.040185 systemd-networkd[1019]: lxc_health: Lost carrier
Sep 6 01:58:32.051817 systemd[1]: cri-containerd-38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f.scope: Deactivated successfully.
Sep 6 01:58:32.076043 systemd[1]: cri-containerd-d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4.scope: Deactivated successfully.
Sep 6 01:58:32.076591 systemd[1]: cri-containerd-d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4.scope: Consumed 10.334s CPU time.
Sep 6 01:58:32.131956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4-rootfs.mount: Deactivated successfully.
Sep 6 01:58:32.141545 env[1200]: time="2025-09-06T01:58:32.141397011Z" level=info msg="shim disconnected" id=d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4
Sep 6 01:58:32.141754 env[1200]: time="2025-09-06T01:58:32.141543724Z" level=warning msg="cleaning up after shim disconnected" id=d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4 namespace=k8s.io
Sep 6 01:58:32.141754 env[1200]: time="2025-09-06T01:58:32.141562788Z" level=info msg="cleaning up dead shim"
Sep 6 01:58:32.142544 env[1200]: time="2025-09-06T01:58:32.142500749Z" level=info msg="shim disconnected" id=38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f
Sep 6 01:58:32.142700 env[1200]: time="2025-09-06T01:58:32.142666384Z" level=warning msg="cleaning up after shim disconnected" id=38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f namespace=k8s.io
Sep 6 01:58:32.142911 env[1200]: time="2025-09-06T01:58:32.142881534Z" level=info msg="cleaning up dead shim"
Sep 6 01:58:32.176679 env[1200]: time="2025-09-06T01:58:32.176613946Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3578 runtime=io.containerd.runc.v2\n"
Sep 6 01:58:32.176985 env[1200]: time="2025-09-06T01:58:32.176942059Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3579 runtime=io.containerd.runc.v2\n"
Sep 6 01:58:32.177617 env[1200]: time="2025-09-06T01:58:32.177569988Z" level=info msg="TearDown network for sandbox \"38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f\" successfully"
Sep 6 01:58:32.177714 env[1200]: time="2025-09-06T01:58:32.177613541Z" level=info msg="StopPodSandbox for \"38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f\" returns successfully"
Sep 6 01:58:32.182738 env[1200]: time="2025-09-06T01:58:32.181194971Z" level=info msg="StopContainer for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" returns successfully"
Sep 6 01:58:32.183368 env[1200]: time="2025-09-06T01:58:32.183332876Z" level=info msg="StopPodSandbox for \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\""
Sep 6 01:58:32.183625 env[1200]: time="2025-09-06T01:58:32.183578322Z" level=info msg="Container to stop \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:58:32.184080 env[1200]: time="2025-09-06T01:58:32.184005559Z" level=info msg="Container to stop \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:58:32.184547 env[1200]: time="2025-09-06T01:58:32.184462641Z" level=info msg="Container to stop \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:58:32.184896 env[1200]: time="2025-09-06T01:58:32.184861883Z" level=info msg="Container to stop \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:58:32.185061 env[1200]: time="2025-09-06T01:58:32.185028177Z" level=info msg="Container to stop \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:58:32.207292 systemd[1]: cri-containerd-f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455.scope: Deactivated successfully.
Sep 6 01:58:32.240754 env[1200]: time="2025-09-06T01:58:32.240660294Z" level=info msg="shim disconnected" id=f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455
Sep 6 01:58:32.241248 env[1200]: time="2025-09-06T01:58:32.241213744Z" level=warning msg="cleaning up after shim disconnected" id=f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455 namespace=k8s.io
Sep 6 01:58:32.241494 env[1200]: time="2025-09-06T01:58:32.241374422Z" level=info msg="cleaning up dead shim"
Sep 6 01:58:32.249221 kubelet[1948]: I0906 01:58:32.247605 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7hjx\" (UniqueName: \"kubernetes.io/projected/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-kube-api-access-h7hjx\") pod \"11bbf4f4-d10e-4105-8763-afb77a3c5fc0\" (UID: \"11bbf4f4-d10e-4105-8763-afb77a3c5fc0\") "
Sep 6 01:58:32.249221 kubelet[1948]: I0906 01:58:32.247688 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-cilium-config-path\") pod \"11bbf4f4-d10e-4105-8763-afb77a3c5fc0\" (UID: \"11bbf4f4-d10e-4105-8763-afb77a3c5fc0\") "
Sep 6 01:58:32.264000 kubelet[1948]: I0906 01:58:32.261433 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11bbf4f4-d10e-4105-8763-afb77a3c5fc0" (UID: "11bbf4f4-d10e-4105-8763-afb77a3c5fc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 01:58:32.269377 kubelet[1948]: I0906 01:58:32.269328 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-kube-api-access-h7hjx" (OuterVolumeSpecName: "kube-api-access-h7hjx") pod "11bbf4f4-d10e-4105-8763-afb77a3c5fc0" (UID: "11bbf4f4-d10e-4105-8763-afb77a3c5fc0"). InnerVolumeSpecName "kube-api-access-h7hjx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 01:58:32.269662 env[1200]: time="2025-09-06T01:58:32.269609257Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3620 runtime=io.containerd.runc.v2\n"
Sep 6 01:58:32.270331 env[1200]: time="2025-09-06T01:58:32.270291343Z" level=info msg="TearDown network for sandbox \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" successfully"
Sep 6 01:58:32.270501 env[1200]: time="2025-09-06T01:58:32.270465558Z" level=info msg="StopPodSandbox for \"f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455\" returns successfully"
Sep 6 01:58:32.348244 kubelet[1948]: I0906 01:58:32.348191 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-etc-cni-netd\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.348578 kubelet[1948]: I0906 01:58:32.348548 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-kernel\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.348765 kubelet[1948]: I0906 01:58:32.348738 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a4eefa1-acd7-400f-b803-d1074eec62ec-clustermesh-secrets\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.349226 kubelet[1948]: I0906 01:58:32.349186 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-xtables-lock\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.349461 kubelet[1948]: I0906 01:58:32.349435 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-hostproc\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.349637 kubelet[1948]: I0906 01:58:32.349609 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlb5s\" (UniqueName: \"kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-kube-api-access-mlb5s\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.350193 kubelet[1948]: I0906 01:58:32.350159 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-cgroup\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.350542 kubelet[1948]: I0906 01:58:32.350515 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-net\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.350773 kubelet[1948]: I0906 01:58:32.350747 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-run\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.350980 kubelet[1948]: I0906 01:58:32.350951 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-config-path\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.351173 kubelet[1948]: I0906 01:58:32.351146 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-lib-modules\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.351332 kubelet[1948]: I0906 01:58:32.351304 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-bpf-maps\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.351496 kubelet[1948]: I0906 01:58:32.351469 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-hubble-tls\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.351655 kubelet[1948]: I0906 01:58:32.351627 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cni-path\") pod \"7a4eefa1-acd7-400f-b803-d1074eec62ec\" (UID: \"7a4eefa1-acd7-400f-b803-d1074eec62ec\") "
Sep 6 01:58:32.353721 kubelet[1948]: I0906 01:58:32.353677 1948 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7hjx\" (UniqueName: \"kubernetes.io/projected/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-kube-api-access-h7hjx\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.353903 kubelet[1948]: I0906 01:58:32.353874 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11bbf4f4-d10e-4105-8763-afb77a3c5fc0-cilium-config-path\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.354084 kubelet[1948]: I0906 01:58:32.348399 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.354246 kubelet[1948]: I0906 01:58:32.348629 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.354390 kubelet[1948]: I0906 01:58:32.349390 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.354906 kubelet[1948]: I0906 01:58:32.350091 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.357876 kubelet[1948]: I0906 01:58:32.350316 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.357999 kubelet[1948]: I0906 01:58:32.350675 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.358471 kubelet[1948]: I0906 01:58:32.350944 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.358471 kubelet[1948]: I0906 01:58:32.354051 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.358471 kubelet[1948]: I0906 01:58:32.354529 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.358779 kubelet[1948]: I0906 01:58:32.354552 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:58:32.358779 kubelet[1948]: I0906 01:58:32.354777 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 01:58:32.358779 kubelet[1948]: I0906 01:58:32.358398 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-kube-api-access-mlb5s" (OuterVolumeSpecName: "kube-api-access-mlb5s") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "kube-api-access-mlb5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 01:58:32.359382 kubelet[1948]: I0906 01:58:32.359347 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4eefa1-acd7-400f-b803-d1074eec62ec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 01:58:32.362017 kubelet[1948]: I0906 01:58:32.361983 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a4eefa1-acd7-400f-b803-d1074eec62ec" (UID: "7a4eefa1-acd7-400f-b803-d1074eec62ec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 01:58:32.454677 kubelet[1948]: I0906 01:58:32.454523 1948 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-hostproc\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.454917 kubelet[1948]: I0906 01:58:32.454885 1948 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mlb5s\" (UniqueName: \"kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-kube-api-access-mlb5s\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.455086 kubelet[1948]: I0906 01:58:32.455050 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-cgroup\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.455262 kubelet[1948]: I0906 01:58:32.455237 1948 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-net\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.455390 kubelet[1948]: I0906 01:58:32.455364 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-run\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455576 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a4eefa1-acd7-400f-b803-d1074eec62ec-cilium-config-path\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455615 1948 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-lib-modules\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455634 1948 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-bpf-maps\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455656 1948 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a4eefa1-acd7-400f-b803-d1074eec62ec-hubble-tls\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455671 1948 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-cni-path\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455686 1948 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-etc-cni-netd\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455702 1948 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-host-proc-sys-kernel\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456252 kubelet[1948]: I0906 01:58:32.455717 1948 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a4eefa1-acd7-400f-b803-d1074eec62ec-clustermesh-secrets\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.456712 kubelet[1948]: I0906 01:58:32.455733 1948 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a4eefa1-acd7-400f-b803-d1074eec62ec-xtables-lock\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\""
Sep 6 01:58:32.674228 systemd[1]: Removed slice kubepods-besteffort-pod11bbf4f4_d10e_4105_8763_afb77a3c5fc0.slice.
Sep 6 01:58:32.676698 systemd[1]: Removed slice kubepods-burstable-pod7a4eefa1_acd7_400f_b803_d1074eec62ec.slice.
Sep 6 01:58:32.676844 systemd[1]: kubepods-burstable-pod7a4eefa1_acd7_400f_b803_d1074eec62ec.slice: Consumed 10.514s CPU time.
Sep 6 01:58:32.927596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38c39b9d70d6f0721743e67e81f33377fdd27dcdf33e4c9191c7477608dea39f-rootfs.mount: Deactivated successfully.
Sep 6 01:58:32.928044 systemd[1]: var-lib-kubelet-pods-11bbf4f4\x2dd10e\x2d4105\x2d8763\x2dafb77a3c5fc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh7hjx.mount: Deactivated successfully.
Sep 6 01:58:32.928467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455-rootfs.mount: Deactivated successfully.
Sep 6 01:58:32.928765 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f08502c40989ce18309356c1542249dc646bbe2a19ba4f678e14e85cf3600455-shm.mount: Deactivated successfully.
Sep 6 01:58:32.929023 systemd[1]: var-lib-kubelet-pods-7a4eefa1\x2dacd7\x2d400f\x2db803\x2dd1074eec62ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlb5s.mount: Deactivated successfully.
Sep 6 01:58:32.929358 systemd[1]: var-lib-kubelet-pods-7a4eefa1\x2dacd7\x2d400f\x2db803\x2dd1074eec62ec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 6 01:58:32.929718 systemd[1]: var-lib-kubelet-pods-7a4eefa1\x2dacd7\x2d400f\x2db803\x2dd1074eec62ec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 6 01:58:33.151128 kubelet[1948]: I0906 01:58:33.151062 1948 scope.go:117] "RemoveContainer" containerID="96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218"
Sep 6 01:58:33.162350 env[1200]: time="2025-09-06T01:58:33.162253182Z" level=info msg="RemoveContainer for \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\""
Sep 6 01:58:33.178328 env[1200]: time="2025-09-06T01:58:33.178176370Z" level=info msg="RemoveContainer for \"96626acfc25816e64d8003bf8d1bef7383230b217bba046dab61e1b57ff5b218\" returns successfully"
Sep 6 01:58:33.181283 kubelet[1948]: I0906 01:58:33.181237 1948 scope.go:117] "RemoveContainer" containerID="d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4"
Sep 6 01:58:33.184390 env[1200]: time="2025-09-06T01:58:33.184350831Z" level=info msg="RemoveContainer for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\""
Sep 6 01:58:33.192464 env[1200]: time="2025-09-06T01:58:33.192376087Z" level=info msg="RemoveContainer for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" returns successfully"
Sep 6 01:58:33.195739 kubelet[1948]: I0906 01:58:33.193486 1948 scope.go:117] "RemoveContainer" containerID="34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8"
Sep 6 01:58:33.197887 env[1200]: time="2025-09-06T01:58:33.197580331Z" level=info msg="RemoveContainer for \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\""
Sep 6 01:58:33.202797 env[1200]: time="2025-09-06T01:58:33.202719131Z" level=info msg="RemoveContainer for \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\" returns successfully"
Sep 6 01:58:33.203240 kubelet[1948]: I0906 01:58:33.203211 1948 scope.go:117] "RemoveContainer" containerID="fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8"
Sep 6 01:58:33.207825 env[1200]: time="2025-09-06T01:58:33.207770443Z" level=info msg="RemoveContainer for \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\""
Sep 6 01:58:33.210813 env[1200]: time="2025-09-06T01:58:33.210769556Z" level=info msg="RemoveContainer for \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\" returns successfully"
Sep 6 01:58:33.211086 kubelet[1948]: I0906 01:58:33.211060 1948 scope.go:117] "RemoveContainer" containerID="aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c"
Sep 6 01:58:33.212882 env[1200]: time="2025-09-06T01:58:33.212829306Z" level=info msg="RemoveContainer for \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\""
Sep 6 01:58:33.220008 env[1200]: time="2025-09-06T01:58:33.219916195Z" level=info msg="RemoveContainer for \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\" returns successfully"
Sep 6 01:58:33.220281 kubelet[1948]: I0906 01:58:33.220226 1948 scope.go:117] "RemoveContainer" containerID="238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79"
Sep 6 01:58:33.221778 env[1200]: time="2025-09-06T01:58:33.221742006Z" level=info msg="RemoveContainer for \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\""
Sep 6 01:58:33.225214 env[1200]: time="2025-09-06T01:58:33.225175531Z" level=info msg="RemoveContainer for \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\" returns successfully"
Sep 6 01:58:33.225440 kubelet[1948]: I0906 01:58:33.225397 1948 scope.go:117] "RemoveContainer" containerID="d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4"
Sep 6 01:58:33.225831 env[1200]: time="2025-09-06T01:58:33.225699460Z" level=error msg="ContainerStatus for \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\": not found"
Sep 6 01:58:33.226209 kubelet[1948]: E0906 01:58:33.226163 1948 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\": not found" containerID="d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4"
Sep 6 01:58:33.228220 kubelet[1948]: I0906 01:58:33.227967 1948 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4"} err="failed to get container status \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d537bb1689b885eb660fb92e126eb0635282ae49679e0b3668fb340f2bb7a9c4\": not found"
Sep 6 01:58:33.228220 kubelet[1948]: I0906 01:58:33.228195 1948 scope.go:117] "RemoveContainer" containerID="34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8"
Sep 6 01:58:33.228986 env[1200]: time="2025-09-06T01:58:33.228919311Z" level=error msg="ContainerStatus for \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\": not found"
Sep 6 01:58:33.229367 kubelet[1948]: E0906 01:58:33.229303 1948 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\": not found" containerID="34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8"
Sep 6 01:58:33.229367 kubelet[1948]: I0906 01:58:33.229350 1948 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8"} err="failed to get container status \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"34fbb10f8aff02d80d6092ae865b1f59966c7bf2f6221df22d7814a6f55f36f8\": not found"
Sep 6 01:58:33.229650 kubelet[1948]: I0906 01:58:33.229388 1948 scope.go:117] "RemoveContainer" containerID="fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8"
Sep 6 01:58:33.229784 env[1200]: time="2025-09-06T01:58:33.229625604Z" level=error msg="ContainerStatus for \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\": not found"
Sep 6 01:58:33.230068 kubelet[1948]: E0906 01:58:33.230037 1948 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\": not found" containerID="fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8"
Sep 6 01:58:33.230274 kubelet[1948]: I0906 01:58:33.230238 1948 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8"} err="failed to get container status \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdd80a366db5a816e8f1be91444966af842000d9b1b3be6f1d7837d8054dd1d8\": not found"
Sep 6 01:58:33.230552 kubelet[1948]: I0906 01:58:33.230497 1948 scope.go:117] "RemoveContainer" containerID="aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c"
Sep 6 01:58:33.231033 env[1200]: time="2025-09-06T01:58:33.230972452Z" level=error msg="ContainerStatus for \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\": not found"
Sep 6 01:58:33.231208 kubelet[1948]: E0906 01:58:33.231177 1948 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\": not found" containerID="aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c"
Sep 6 01:58:33.231298 kubelet[1948]: I0906 01:58:33.231215 1948 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c"} err="failed to get container status \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa4772530b86288e46be70eeca7dc612f8b8d6181407e6a762b6040f4f77b41c\": not found"
Sep 6 01:58:33.231298 kubelet[1948]: I0906 01:58:33.231246 1948 scope.go:117] "RemoveContainer" containerID="238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79"
Sep 6 01:58:33.231744 env[1200]: time="2025-09-06T01:58:33.231683575Z" level=error msg="ContainerStatus for \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\": not found"
Sep 6 01:58:33.232130 kubelet[1948]: E0906 01:58:33.232076 1948 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\": not found" containerID="238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79"
Sep 6 01:58:33.232517 kubelet[1948]: I0906 01:58:33.232131 1948 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79"} err="failed to get container status \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\": rpc error: code = NotFound desc = an error occurred when try to find container \"238b0e682226852dc0a66f18400d076e21ecfac0c610537e25bb4fd9b2907e79\": not found" Sep 6 01:58:33.859310 kubelet[1948]: E0906 01:58:33.859231 1948 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:58:33.967758 sshd[3479]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:33.971908 systemd[1]: sshd@20-10.244.19.198:22-139.178.89.65:51138.service: Deactivated successfully. Sep 6 01:58:33.973071 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 01:58:33.973330 systemd[1]: session-21.scope: Consumed 1.799s CPU time. Sep 6 01:58:33.973979 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit. Sep 6 01:58:33.975563 systemd-logind[1189]: Removed session 21. Sep 6 01:58:34.135776 systemd[1]: Started sshd@21-10.244.19.198:22-139.178.89.65:60084.service. 
Sep 6 01:58:34.664753 kubelet[1948]: I0906 01:58:34.664704 1948 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11bbf4f4-d10e-4105-8763-afb77a3c5fc0" path="/var/lib/kubelet/pods/11bbf4f4-d10e-4105-8763-afb77a3c5fc0/volumes" Sep 6 01:58:34.666191 kubelet[1948]: I0906 01:58:34.666162 1948 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4eefa1-acd7-400f-b803-d1074eec62ec" path="/var/lib/kubelet/pods/7a4eefa1-acd7-400f-b803-d1074eec62ec/volumes" Sep 6 01:58:35.112024 sshd[3641]: Accepted publickey for core from 139.178.89.65 port 60084 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:35.114225 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:35.122936 systemd[1]: Started session-22.scope. Sep 6 01:58:35.124264 systemd-logind[1189]: New session 22 of user core. Sep 6 01:58:36.370598 kubelet[1948]: I0906 01:58:36.370536 1948 memory_manager.go:355] "RemoveStaleState removing state" podUID="7a4eefa1-acd7-400f-b803-d1074eec62ec" containerName="cilium-agent" Sep 6 01:58:36.370598 kubelet[1948]: I0906 01:58:36.370585 1948 memory_manager.go:355] "RemoveStaleState removing state" podUID="11bbf4f4-d10e-4105-8763-afb77a3c5fc0" containerName="cilium-operator" Sep 6 01:58:36.387846 systemd[1]: Created slice kubepods-burstable-podaf29dedd_1fb7_4f69_8298_933a6ba2d1b3.slice. 
Sep 6 01:58:36.485378 kubelet[1948]: I0906 01:58:36.485285 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-etc-cni-netd\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.485949 kubelet[1948]: I0906 01:58:36.485797 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-ipsec-secrets\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.486343 kubelet[1948]: I0906 01:58:36.486286 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hubble-tls\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.486578 kubelet[1948]: I0906 01:58:36.486547 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4bjz\" (UniqueName: \"kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-kube-api-access-d4bjz\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.486745 kubelet[1948]: I0906 01:58:36.486718 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-cgroup\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.486925 kubelet[1948]: I0906 01:58:36.486897 1948 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-clustermesh-secrets\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.487084 kubelet[1948]: I0906 01:58:36.487056 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-bpf-maps\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.487296 kubelet[1948]: I0906 01:58:36.487270 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hostproc\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.487490 kubelet[1948]: I0906 01:58:36.487451 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-xtables-lock\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.487646 kubelet[1948]: I0906 01:58:36.487618 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cni-path\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.487829 kubelet[1948]: I0906 01:58:36.487800 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-config-path\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.488007 kubelet[1948]: I0906 01:58:36.487976 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-lib-modules\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.488169 kubelet[1948]: I0906 01:58:36.488141 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-net\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.488388 kubelet[1948]: I0906 01:58:36.488320 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-run\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.488579 kubelet[1948]: I0906 01:58:36.488537 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-kernel\") pod \"cilium-p6bgp\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " pod="kube-system/cilium-p6bgp" Sep 6 01:58:36.543881 sshd[3641]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:36.547606 systemd[1]: sshd@21-10.244.19.198:22-139.178.89.65:60084.service: Deactivated successfully. Sep 6 01:58:36.548723 systemd[1]: session-22.scope: Deactivated successfully. 
Sep 6 01:58:36.549658 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit. Sep 6 01:58:36.550878 systemd-logind[1189]: Removed session 22. Sep 6 01:58:36.700458 env[1200]: time="2025-09-06T01:58:36.697789386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6bgp,Uid:af29dedd-1fb7-4f69-8298-933a6ba2d1b3,Namespace:kube-system,Attempt:0,}" Sep 6 01:58:36.702129 systemd[1]: Started sshd@22-10.244.19.198:22-139.178.89.65:60100.service. Sep 6 01:58:36.731002 env[1200]: time="2025-09-06T01:58:36.730821428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:58:36.731301 env[1200]: time="2025-09-06T01:58:36.730989538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:58:36.731301 env[1200]: time="2025-09-06T01:58:36.731014292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:58:36.731808 env[1200]: time="2025-09-06T01:58:36.731592054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42 pid=3664 runtime=io.containerd.runc.v2 Sep 6 01:58:36.755256 systemd[1]: Started cri-containerd-d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42.scope. 
Sep 6 01:58:36.806914 env[1200]: time="2025-09-06T01:58:36.806850391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6bgp,Uid:af29dedd-1fb7-4f69-8298-933a6ba2d1b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\"" Sep 6 01:58:36.812604 env[1200]: time="2025-09-06T01:58:36.812556555Z" level=info msg="CreateContainer within sandbox \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:58:36.829165 env[1200]: time="2025-09-06T01:58:36.829074257Z" level=info msg="CreateContainer within sandbox \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\"" Sep 6 01:58:36.830719 env[1200]: time="2025-09-06T01:58:36.830395775Z" level=info msg="StartContainer for \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\"" Sep 6 01:58:36.860457 systemd[1]: Started cri-containerd-8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526.scope. Sep 6 01:58:36.884579 systemd[1]: cri-containerd-8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526.scope: Deactivated successfully. 
Sep 6 01:58:36.902704 env[1200]: time="2025-09-06T01:58:36.902637417Z" level=info msg="shim disconnected" id=8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526 Sep 6 01:58:36.903148 env[1200]: time="2025-09-06T01:58:36.903089670Z" level=warning msg="cleaning up after shim disconnected" id=8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526 namespace=k8s.io Sep 6 01:58:36.903320 env[1200]: time="2025-09-06T01:58:36.903290369Z" level=info msg="cleaning up dead shim" Sep 6 01:58:36.914149 env[1200]: time="2025-09-06T01:58:36.914050195Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3721 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T01:58:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 01:58:36.914885 env[1200]: time="2025-09-06T01:58:36.914715061Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Sep 6 01:58:36.916494 env[1200]: time="2025-09-06T01:58:36.915242028Z" level=error msg="Failed to pipe stdout of container \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\"" error="reading from a closed fifo" Sep 6 01:58:36.916696 env[1200]: time="2025-09-06T01:58:36.916170813Z" level=error msg="Failed to pipe stderr of container \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\"" error="reading from a closed fifo" Sep 6 01:58:36.917937 env[1200]: time="2025-09-06T01:58:36.917875973Z" level=error msg="StartContainer for \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 01:58:36.918688 kubelet[1948]: E0906 01:58:36.918477 1948 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526" Sep 6 01:58:36.925015 kubelet[1948]: E0906 01:58:36.924430 1948 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 01:58:36.925015 kubelet[1948]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 01:58:36.925015 kubelet[1948]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 01:58:36.925015 kubelet[1948]: rm /hostbin/cilium-mount Sep 6 01:58:36.925446 kubelet[1948]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4bjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-p6bgp_kube-system(af29dedd-1fb7-4f69-8298-933a6ba2d1b3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 01:58:36.925446 kubelet[1948]: > logger="UnhandledError" Sep 6 01:58:36.926405 kubelet[1948]: E0906 01:58:36.926338 1948 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-p6bgp" podUID="af29dedd-1fb7-4f69-8298-933a6ba2d1b3" Sep 6 01:58:37.190796 env[1200]: time="2025-09-06T01:58:37.190667905Z" level=info msg="CreateContainer within sandbox \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Sep 6 01:58:37.212479 env[1200]: time="2025-09-06T01:58:37.212400089Z" level=info msg="CreateContainer within sandbox \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\"" Sep 6 01:58:37.215648 env[1200]: time="2025-09-06T01:58:37.213931059Z" level=info msg="StartContainer for \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\"" Sep 6 01:58:37.254505 systemd[1]: Started cri-containerd-12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab.scope. Sep 6 01:58:37.271701 systemd[1]: cri-containerd-12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab.scope: Deactivated successfully. 
Sep 6 01:58:37.282435 env[1200]: time="2025-09-06T01:58:37.282298114Z" level=info msg="shim disconnected" id=12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab Sep 6 01:58:37.282435 env[1200]: time="2025-09-06T01:58:37.282382784Z" level=warning msg="cleaning up after shim disconnected" id=12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab namespace=k8s.io Sep 6 01:58:37.282435 env[1200]: time="2025-09-06T01:58:37.282400890Z" level=info msg="cleaning up dead shim" Sep 6 01:58:37.293668 env[1200]: time="2025-09-06T01:58:37.293580683Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3759 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T01:58:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 01:58:37.294070 env[1200]: time="2025-09-06T01:58:37.293972184Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Sep 6 01:58:37.296784 env[1200]: time="2025-09-06T01:58:37.296054867Z" level=error msg="Failed to pipe stdout of container \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\"" error="reading from a closed fifo" Sep 6 01:58:37.296784 env[1200]: time="2025-09-06T01:58:37.296180520Z" level=error msg="Failed to pipe stderr of container \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\"" error="reading from a closed fifo" Sep 6 01:58:37.297831 env[1200]: time="2025-09-06T01:58:37.297777323Z" level=error msg="StartContainer for \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 01:58:37.298289 kubelet[1948]: E0906 01:58:37.298238 1948 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab" Sep 6 01:58:37.298497 kubelet[1948]: E0906 01:58:37.298446 1948 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 01:58:37.298497 kubelet[1948]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 01:58:37.298497 kubelet[1948]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 01:58:37.298497 kubelet[1948]: rm /hostbin/cilium-mount Sep 6 01:58:37.298497 kubelet[1948]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4bjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-p6bgp_kube-system(af29dedd-1fb7-4f69-8298-933a6ba2d1b3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 01:58:37.298497 kubelet[1948]: > logger="UnhandledError" Sep 6 01:58:37.300404 kubelet[1948]: E0906 01:58:37.300167 1948 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-p6bgp" podUID="af29dedd-1fb7-4f69-8298-933a6ba2d1b3" Sep 6 01:58:37.659550 sshd[3655]: Accepted publickey for core from 139.178.89.65 port 60100 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:37.662372 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:37.669931 systemd-logind[1189]: New session 23 of user core. Sep 6 01:58:37.670019 systemd[1]: Started session-23.scope. Sep 6 01:58:38.188805 kubelet[1948]: I0906 01:58:38.188762 1948 scope.go:117] "RemoveContainer" containerID="8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526" Sep 6 01:58:38.189684 kubelet[1948]: I0906 01:58:38.189656 1948 scope.go:117] "RemoveContainer" containerID="8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526" Sep 6 01:58:38.193499 env[1200]: time="2025-09-06T01:58:38.193449321Z" level=info msg="RemoveContainer for \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\"" Sep 6 01:58:38.194516 env[1200]: time="2025-09-06T01:58:38.194458069Z" level=info msg="RemoveContainer for \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\"" Sep 6 01:58:38.194831 env[1200]: time="2025-09-06T01:58:38.194769200Z" level=error msg="RemoveContainer for \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\" failed" error="failed to set removing state for container \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\": container is already in removing state" Sep 6 01:58:38.196281 kubelet[1948]: E0906 01:58:38.195302 1948 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code 
= Unknown desc = failed to set removing state for container \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\": container is already in removing state" containerID="8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526" Sep 6 01:58:38.207564 kubelet[1948]: E0906 01:58:38.207463 1948 kuberuntime_container.go:897] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\": container is already in removing state; Skipping pod \"cilium-p6bgp_kube-system(af29dedd-1fb7-4f69-8298-933a6ba2d1b3)\"" logger="UnhandledError" Sep 6 01:58:38.210374 env[1200]: time="2025-09-06T01:58:38.210307006Z" level=info msg="RemoveContainer for \"8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526\" returns successfully" Sep 6 01:58:38.211419 kubelet[1948]: E0906 01:58:38.211060 1948 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-p6bgp_kube-system(af29dedd-1fb7-4f69-8298-933a6ba2d1b3)\"" pod="kube-system/cilium-p6bgp" podUID="af29dedd-1fb7-4f69-8298-933a6ba2d1b3" Sep 6 01:58:38.509620 sshd[3655]: pam_unix(sshd:session): session closed for user core Sep 6 01:58:38.514091 systemd[1]: sshd@22-10.244.19.198:22-139.178.89.65:60100.service: Deactivated successfully. Sep 6 01:58:38.515281 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 01:58:38.516177 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit. Sep 6 01:58:38.518457 systemd-logind[1189]: Removed session 23. Sep 6 01:58:38.668270 systemd[1]: Started sshd@23-10.244.19.198:22-139.178.89.65:60102.service. 
Sep 6 01:58:38.860881 kubelet[1948]: E0906 01:58:38.860799 1948 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:58:39.195609 env[1200]: time="2025-09-06T01:58:39.195292896Z" level=info msg="StopPodSandbox for \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\"" Sep 6 01:58:39.196236 env[1200]: time="2025-09-06T01:58:39.196190358Z" level=info msg="Container to stop \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:58:39.199558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42-shm.mount: Deactivated successfully. Sep 6 01:58:39.211241 systemd[1]: cri-containerd-d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42.scope: Deactivated successfully. Sep 6 01:58:39.246563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42-rootfs.mount: Deactivated successfully. 
Sep 6 01:58:39.257128 env[1200]: time="2025-09-06T01:58:39.257048249Z" level=info msg="shim disconnected" id=d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42 Sep 6 01:58:39.257872 env[1200]: time="2025-09-06T01:58:39.257839091Z" level=warning msg="cleaning up after shim disconnected" id=d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42 namespace=k8s.io Sep 6 01:58:39.258041 env[1200]: time="2025-09-06T01:58:39.258011611Z" level=info msg="cleaning up dead shim" Sep 6 01:58:39.268951 env[1200]: time="2025-09-06T01:58:39.268893101Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3801 runtime=io.containerd.runc.v2\n" Sep 6 01:58:39.269787 env[1200]: time="2025-09-06T01:58:39.269747538Z" level=info msg="TearDown network for sandbox \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\" successfully" Sep 6 01:58:39.270062 env[1200]: time="2025-09-06T01:58:39.269992084Z" level=info msg="StopPodSandbox for \"d94749cddbdba4c9f450c0369e19bfdf7b529513c6b07ad8199c25be29ad6e42\" returns successfully" Sep 6 01:58:39.415408 kubelet[1948]: I0906 01:58:39.415235 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-lib-modules\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.415408 kubelet[1948]: I0906 01:58:39.415316 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-config-path\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.415408 kubelet[1948]: I0906 01:58:39.415358 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-xtables-lock\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.415408 kubelet[1948]: I0906 01:58:39.415395 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-ipsec-secrets\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415428 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4bjz\" (UniqueName: \"kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-kube-api-access-d4bjz\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415468 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hostproc\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415492 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cni-path\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415522 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-etc-cni-netd\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415592 1948 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hubble-tls\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415626 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-clustermesh-secrets\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415655 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-cgroup\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415679 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-net\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415707 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-bpf-maps\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415733 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-run\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: 
\"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415783 1948 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-kernel\") pod \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\" (UID: \"af29dedd-1fb7-4f69-8298-933a6ba2d1b3\") " Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415901 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.416287 kubelet[1948]: I0906 01:58:39.415969 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.418667 kubelet[1948]: I0906 01:58:39.418178 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.418667 kubelet[1948]: I0906 01:58:39.418228 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.420084 kubelet[1948]: I0906 01:58:39.420045 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:58:39.424632 systemd[1]: var-lib-kubelet-pods-af29dedd\x2d1fb7\x2d4f69\x2d8298\x2d933a6ba2d1b3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 01:58:39.425852 kubelet[1948]: I0906 01:58:39.425815 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.425968 kubelet[1948]: I0906 01:58:39.425879 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.425968 kubelet[1948]: I0906 01:58:39.425915 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.426265 kubelet[1948]: I0906 01:58:39.426235 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.426377 kubelet[1948]: I0906 01:58:39.426280 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.426377 kubelet[1948]: I0906 01:58:39.426317 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:58:39.427179 kubelet[1948]: I0906 01:58:39.427146 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:58:39.433088 systemd[1]: var-lib-kubelet-pods-af29dedd\x2d1fb7\x2d4f69\x2d8298\x2d933a6ba2d1b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:58:39.436011 kubelet[1948]: I0906 01:58:39.435967 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:58:39.439741 systemd[1]: var-lib-kubelet-pods-af29dedd\x2d1fb7\x2d4f69\x2d8298\x2d933a6ba2d1b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:58:39.441636 kubelet[1948]: I0906 01:58:39.441592 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:58:39.442520 kubelet[1948]: I0906 01:58:39.442485 1948 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-kube-api-access-d4bjz" (OuterVolumeSpecName: "kube-api-access-d4bjz") pod "af29dedd-1fb7-4f69-8298-933a6ba2d1b3" (UID: "af29dedd-1fb7-4f69-8298-933a6ba2d1b3"). InnerVolumeSpecName "kube-api-access-d4bjz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:58:39.516754 kubelet[1948]: I0906 01:58:39.516602 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-cgroup\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.516754 kubelet[1948]: I0906 01:58:39.516656 1948 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-net\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.516754 kubelet[1948]: I0906 01:58:39.516676 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-run\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.516754 kubelet[1948]: I0906 01:58:39.516708 1948 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-host-proc-sys-kernel\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.516754 kubelet[1948]: I0906 01:58:39.516726 1948 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-bpf-maps\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.516754 kubelet[1948]: I0906 
01:58:39.516752 1948 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-lib-modules\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516767 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-config-path\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516783 1948 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4bjz\" (UniqueName: \"kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-kube-api-access-d4bjz\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516798 1948 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hostproc\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516813 1948 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-xtables-lock\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516828 1948 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cilium-ipsec-secrets\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516843 1948 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-cni-path\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 
kubelet[1948]: I0906 01:58:39.516860 1948 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-hubble-tls\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516881 1948 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-clustermesh-secrets\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.517259 kubelet[1948]: I0906 01:58:39.516896 1948 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af29dedd-1fb7-4f69-8298-933a6ba2d1b3-etc-cni-netd\") on node \"srv-jrxph.gb1.brightbox.com\" DevicePath \"\"" Sep 6 01:58:39.625974 sshd[3782]: Accepted publickey for core from 139.178.89.65 port 60102 ssh2: RSA SHA256:8Jg6bi6M/j5fwJksmVOnJI1ducBJE/3+ZbOFydKh6RQ Sep 6 01:58:39.628474 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:58:39.640909 systemd-logind[1189]: New session 24 of user core. Sep 6 01:58:39.642180 systemd[1]: Started session-24.scope. Sep 6 01:58:40.011538 kubelet[1948]: W0906 01:58:40.011369 1948 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf29dedd_1fb7_4f69_8298_933a6ba2d1b3.slice/cri-containerd-8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526.scope WatchSource:0}: container "8fd356472fefe38868429dc4a5ad2df12c4acf95779a4d372a05e03445f28526" in namespace "k8s.io": not found Sep 6 01:58:40.199255 systemd[1]: var-lib-kubelet-pods-af29dedd\x2d1fb7\x2d4f69\x2d8298\x2d933a6ba2d1b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd4bjz.mount: Deactivated successfully. 
Sep 6 01:58:40.203289 kubelet[1948]: I0906 01:58:40.203256 1948 scope.go:117] "RemoveContainer" containerID="12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab" Sep 6 01:58:40.209086 systemd[1]: Removed slice kubepods-burstable-podaf29dedd_1fb7_4f69_8298_933a6ba2d1b3.slice. Sep 6 01:58:40.212511 env[1200]: time="2025-09-06T01:58:40.212032773Z" level=info msg="RemoveContainer for \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\"" Sep 6 01:58:40.219273 env[1200]: time="2025-09-06T01:58:40.219211096Z" level=info msg="RemoveContainer for \"12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab\" returns successfully" Sep 6 01:58:40.314303 kubelet[1948]: I0906 01:58:40.314140 1948 memory_manager.go:355] "RemoveStaleState removing state" podUID="af29dedd-1fb7-4f69-8298-933a6ba2d1b3" containerName="mount-cgroup" Sep 6 01:58:40.314572 kubelet[1948]: I0906 01:58:40.314547 1948 memory_manager.go:355] "RemoveStaleState removing state" podUID="af29dedd-1fb7-4f69-8298-933a6ba2d1b3" containerName="mount-cgroup" Sep 6 01:58:40.325600 systemd[1]: Created slice kubepods-burstable-pod343f9881_365c_458c_b65a_5ae45c2a17e4.slice. 
Sep 6 01:58:40.327636 kubelet[1948]: I0906 01:58:40.327587 1948 status_manager.go:890] "Failed to get status for pod" podUID="343f9881-365c-458c-b65a-5ae45c2a17e4" pod="kube-system/cilium-xfp55" err="pods \"cilium-xfp55\" is forbidden: User \"system:node:srv-jrxph.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jrxph.gb1.brightbox.com' and this object" Sep 6 01:58:40.425582 kubelet[1948]: I0906 01:58:40.425519 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-cilium-run\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.425582 kubelet[1948]: I0906 01:58:40.425588 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-lib-modules\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425624 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-xtables-lock\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425651 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/343f9881-365c-458c-b65a-5ae45c2a17e4-cilium-config-path\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425680 1948 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-hostproc\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425711 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh87c\" (UniqueName: \"kubernetes.io/projected/343f9881-365c-458c-b65a-5ae45c2a17e4-kube-api-access-fh87c\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425772 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-cni-path\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425802 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/343f9881-365c-458c-b65a-5ae45c2a17e4-clustermesh-secrets\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425845 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-etc-cni-netd\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425874 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-host-proc-sys-net\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425945 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/343f9881-365c-458c-b65a-5ae45c2a17e4-hubble-tls\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.425981 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-cilium-cgroup\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.426016 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-bpf-maps\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.426065 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/343f9881-365c-458c-b65a-5ae45c2a17e4-host-proc-sys-kernel\") pod \"cilium-xfp55\" (UID: \"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.426315 kubelet[1948]: I0906 01:58:40.426123 1948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/343f9881-365c-458c-b65a-5ae45c2a17e4-cilium-ipsec-secrets\") pod \"cilium-xfp55\" (UID: 
\"343f9881-365c-458c-b65a-5ae45c2a17e4\") " pod="kube-system/cilium-xfp55" Sep 6 01:58:40.633336 env[1200]: time="2025-09-06T01:58:40.632612016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfp55,Uid:343f9881-365c-458c-b65a-5ae45c2a17e4,Namespace:kube-system,Attempt:0,}" Sep 6 01:58:40.657681 env[1200]: time="2025-09-06T01:58:40.657526270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:58:40.657939 env[1200]: time="2025-09-06T01:58:40.657669695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:58:40.657939 env[1200]: time="2025-09-06T01:58:40.657698887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:58:40.658361 env[1200]: time="2025-09-06T01:58:40.658248581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077 pid=3836 runtime=io.containerd.runc.v2 Sep 6 01:58:40.679646 systemd[1]: Started cri-containerd-72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077.scope. 
Sep 6 01:58:40.686490 kubelet[1948]: I0906 01:58:40.682502 1948 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af29dedd-1fb7-4f69-8298-933a6ba2d1b3" path="/var/lib/kubelet/pods/af29dedd-1fb7-4f69-8298-933a6ba2d1b3/volumes" Sep 6 01:58:40.738948 env[1200]: time="2025-09-06T01:58:40.738885505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xfp55,Uid:343f9881-365c-458c-b65a-5ae45c2a17e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\"" Sep 6 01:58:40.743355 env[1200]: time="2025-09-06T01:58:40.743299321Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:58:40.758724 env[1200]: time="2025-09-06T01:58:40.758663696Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156\"" Sep 6 01:58:40.762070 env[1200]: time="2025-09-06T01:58:40.762018858Z" level=info msg="StartContainer for \"d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156\"" Sep 6 01:58:40.788866 systemd[1]: Started cri-containerd-d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156.scope. Sep 6 01:58:40.834755 env[1200]: time="2025-09-06T01:58:40.833963862Z" level=info msg="StartContainer for \"d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156\" returns successfully" Sep 6 01:58:40.853257 systemd[1]: cri-containerd-d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156.scope: Deactivated successfully. 
Sep 6 01:58:40.886854 env[1200]: time="2025-09-06T01:58:40.885996850Z" level=info msg="shim disconnected" id=d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156 Sep 6 01:58:40.886854 env[1200]: time="2025-09-06T01:58:40.886076857Z" level=warning msg="cleaning up after shim disconnected" id=d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156 namespace=k8s.io Sep 6 01:58:40.886854 env[1200]: time="2025-09-06T01:58:40.886204752Z" level=info msg="cleaning up dead shim" Sep 6 01:58:40.897970 env[1200]: time="2025-09-06T01:58:40.897848386Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3920 runtime=io.containerd.runc.v2\n" Sep 6 01:58:41.215012 env[1200]: time="2025-09-06T01:58:41.213777224Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:58:41.231939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166859628.mount: Deactivated successfully. Sep 6 01:58:41.241212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164012398.mount: Deactivated successfully. Sep 6 01:58:41.250307 env[1200]: time="2025-09-06T01:58:41.250226347Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5\"" Sep 6 01:58:41.251562 env[1200]: time="2025-09-06T01:58:41.251526033Z" level=info msg="StartContainer for \"6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5\"" Sep 6 01:58:41.278239 systemd[1]: Started cri-containerd-6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5.scope. 
Sep 6 01:58:41.319822 env[1200]: time="2025-09-06T01:58:41.319762482Z" level=info msg="StartContainer for \"6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5\" returns successfully" Sep 6 01:58:41.331526 systemd[1]: cri-containerd-6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5.scope: Deactivated successfully. Sep 6 01:58:41.363148 env[1200]: time="2025-09-06T01:58:41.363072018Z" level=info msg="shim disconnected" id=6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5 Sep 6 01:58:41.363648 env[1200]: time="2025-09-06T01:58:41.363612081Z" level=warning msg="cleaning up after shim disconnected" id=6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5 namespace=k8s.io Sep 6 01:58:41.363833 env[1200]: time="2025-09-06T01:58:41.363803017Z" level=info msg="cleaning up dead shim" Sep 6 01:58:41.375823 env[1200]: time="2025-09-06T01:58:41.375731919Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n" Sep 6 01:58:41.809353 kubelet[1948]: I0906 01:58:41.809263 1948 setters.go:602] "Node became not ready" node="srv-jrxph.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T01:58:41Z","lastTransitionTime":"2025-09-06T01:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 01:58:42.220281 env[1200]: time="2025-09-06T01:58:42.220212768Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:58:42.257265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206946170.mount: Deactivated successfully. 
Sep 6 01:58:42.268937 env[1200]: time="2025-09-06T01:58:42.268848368Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee\""
Sep 6 01:58:42.271695 env[1200]: time="2025-09-06T01:58:42.271643389Z" level=info msg="StartContainer for \"72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee\""
Sep 6 01:58:42.300022 systemd[1]: Started cri-containerd-72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee.scope.
Sep 6 01:58:42.376897 env[1200]: time="2025-09-06T01:58:42.376830767Z" level=info msg="StartContainer for \"72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee\" returns successfully"
Sep 6 01:58:42.390508 systemd[1]: cri-containerd-72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee.scope: Deactivated successfully.
Sep 6 01:58:42.420049 env[1200]: time="2025-09-06T01:58:42.419950365Z" level=info msg="shim disconnected" id=72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee
Sep 6 01:58:42.420401 env[1200]: time="2025-09-06T01:58:42.420365379Z" level=warning msg="cleaning up after shim disconnected" id=72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee namespace=k8s.io
Sep 6 01:58:42.420602 env[1200]: time="2025-09-06T01:58:42.420571361Z" level=info msg="cleaning up dead shim"
Sep 6 01:58:42.432975 env[1200]: time="2025-09-06T01:58:42.432905329Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4043 runtime=io.containerd.runc.v2\n"
Sep 6 01:58:43.122124 kubelet[1948]: W0906 01:58:43.122041 1948 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf29dedd_1fb7_4f69_8298_933a6ba2d1b3.slice/cri-containerd-12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab.scope WatchSource:0}: container "12d829a4a1dab9295be0a210f133809832fb9270abb715966cec946fe86d61ab" in namespace "k8s.io": not found
Sep 6 01:58:43.224714 env[1200]: time="2025-09-06T01:58:43.222301983Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 01:58:43.255087 env[1200]: time="2025-09-06T01:58:43.255026474Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61\""
Sep 6 01:58:43.257612 env[1200]: time="2025-09-06T01:58:43.257541868Z" level=info msg="StartContainer for \"86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61\""
Sep 6 01:58:43.296959 systemd[1]: Started cri-containerd-86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61.scope.
Sep 6 01:58:43.343873 systemd[1]: cri-containerd-86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61.scope: Deactivated successfully.
Sep 6 01:58:43.347032 env[1200]: time="2025-09-06T01:58:43.346743539Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343f9881_365c_458c_b65a_5ae45c2a17e4.slice/cri-containerd-86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61.scope/memory.events\": no such file or directory"
Sep 6 01:58:43.349615 env[1200]: time="2025-09-06T01:58:43.349539324Z" level=info msg="StartContainer for \"86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61\" returns successfully"
Sep 6 01:58:43.379036 env[1200]: time="2025-09-06T01:58:43.378845853Z" level=info msg="shim disconnected" id=86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61
Sep 6 01:58:43.379036 env[1200]: time="2025-09-06T01:58:43.378918480Z" level=warning msg="cleaning up after shim disconnected" id=86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61 namespace=k8s.io
Sep 6 01:58:43.379036 env[1200]: time="2025-09-06T01:58:43.378937634Z" level=info msg="cleaning up dead shim"
Sep 6 01:58:43.393935 env[1200]: time="2025-09-06T01:58:43.393842571Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:58:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4100 runtime=io.containerd.runc.v2\n"
Sep 6 01:58:43.863002 kubelet[1948]: E0906 01:58:43.862830 1948 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 01:58:44.200078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61-rootfs.mount: Deactivated successfully.
Sep 6 01:58:44.232131 env[1200]: time="2025-09-06T01:58:44.232052970Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 01:58:44.253847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465954276.mount: Deactivated successfully.
Sep 6 01:58:44.266445 env[1200]: time="2025-09-06T01:58:44.266382897Z" level=info msg="CreateContainer within sandbox \"72abe48670a5873b5bcf3a3a6f9bd1ffb1dca6d5947e440f988eea0b13897077\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba\""
Sep 6 01:58:44.267913 env[1200]: time="2025-09-06T01:58:44.267875904Z" level=info msg="StartContainer for \"849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba\""
Sep 6 01:58:44.296148 systemd[1]: Started cri-containerd-849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba.scope.
Sep 6 01:58:44.346877 env[1200]: time="2025-09-06T01:58:44.346820071Z" level=info msg="StartContainer for \"849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba\" returns successfully"
Sep 6 01:58:45.233141 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 01:58:46.241489 kubelet[1948]: W0906 01:58:46.241427 1948 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343f9881_365c_458c_b65a_5ae45c2a17e4.slice/cri-containerd-d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156.scope WatchSource:0}: task d54d6029fea688902ccfe0b6d0d18c36061af60e65ceee95f5d53074ae9cb156 not found: not found
Sep 6 01:58:46.534731 systemd[1]: run-containerd-runc-k8s.io-849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba-runc.SyVOPB.mount: Deactivated successfully.
Sep 6 01:58:48.785537 systemd[1]: run-containerd-runc-k8s.io-849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba-runc.t51rRN.mount: Deactivated successfully.
Sep 6 01:58:48.988171 systemd-networkd[1019]: lxc_health: Link UP
Sep 6 01:58:49.015323 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 01:58:49.014949 systemd-networkd[1019]: lxc_health: Gained carrier
Sep 6 01:58:49.355360 kubelet[1948]: W0906 01:58:49.355173 1948 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343f9881_365c_458c_b65a_5ae45c2a17e4.slice/cri-containerd-6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5.scope WatchSource:0}: task 6942bda009f647059a8286e20a0d5f0b1e0ed1d044d01b402e27412c477106e5 not found: not found
Sep 6 01:58:50.437570 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Sep 6 01:58:50.713686 kubelet[1948]: I0906 01:58:50.713413 1948 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xfp55" podStartSLOduration=10.713333685 podStartE2EDuration="10.713333685s" podCreationTimestamp="2025-09-06 01:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:58:45.26560885 +0000 UTC m=+156.994974565" watchObservedRunningTime="2025-09-06 01:58:50.713333685 +0000 UTC m=+162.442699390"
Sep 6 01:58:51.104836 systemd[1]: run-containerd-runc-k8s.io-849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba-runc.Xk0koe.mount: Deactivated successfully.
Sep 6 01:58:52.468849 kubelet[1948]: W0906 01:58:52.468748 1948 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343f9881_365c_458c_b65a_5ae45c2a17e4.slice/cri-containerd-72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee.scope WatchSource:0}: task 72fc993fcb036e953f3e723f17f248ace6fdf8a137d9752f19aadf77ec5178ee not found: not found
Sep 6 01:58:53.313715 systemd[1]: run-containerd-runc-k8s.io-849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba-runc.Ek3U6a.mount: Deactivated successfully.
Sep 6 01:58:55.573205 systemd[1]: run-containerd-runc-k8s.io-849ee77a3099a26040efc075657e42a4994bbc86c2c62087e24edaebfa5da2ba-runc.cWCcKX.mount: Deactivated successfully.
Sep 6 01:58:55.579460 kubelet[1948]: W0906 01:58:55.579258 1948 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343f9881_365c_458c_b65a_5ae45c2a17e4.slice/cri-containerd-86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61.scope WatchSource:0}: task 86d22e4bae3cb0478fe37142d044f797d0b9bf91541aaf6e740d1ef400336e61 not found: not found
Sep 6 01:58:55.817227 sshd[3782]: pam_unix(sshd:session): session closed for user core
Sep 6 01:58:55.823512 systemd[1]: sshd@23-10.244.19.198:22-139.178.89.65:60102.service: Deactivated successfully.
Sep 6 01:58:55.824709 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 01:58:55.825656 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit.
Sep 6 01:58:55.827637 systemd-logind[1189]: Removed session 24.