Jul 16 12:26:18.932050 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025
Jul 16 12:26:18.932093 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 16 12:26:18.932113 kernel: BIOS-provided physical RAM map:
Jul 16 12:26:18.932123 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 16 12:26:18.932146 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 16 12:26:18.932156 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 16 12:26:18.935208 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jul 16 12:26:18.935223 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jul 16 12:26:18.935234 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 16 12:26:18.935245 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 16 12:26:18.935261 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 16 12:26:18.935271 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 16 12:26:18.935282 kernel: NX (Execute Disable) protection: active
Jul 16 12:26:18.935292 kernel: SMBIOS 2.8 present.
Jul 16 12:26:18.935305 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jul 16 12:26:18.935316 kernel: Hypervisor detected: KVM
Jul 16 12:26:18.935331 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 16 12:26:18.935342 kernel: kvm-clock: cpu 0, msr 6519b001, primary cpu clock
Jul 16 12:26:18.935353 kernel: kvm-clock: using sched offset of 4890650546 cycles
Jul 16 12:26:18.935365 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 16 12:26:18.935376 kernel: tsc: Detected 2499.998 MHz processor
Jul 16 12:26:18.935387 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 16 12:26:18.935398 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 16 12:26:18.935409 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jul 16 12:26:18.935419 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 16 12:26:18.935434 kernel: Using GB pages for direct mapping
Jul 16 12:26:18.935445 kernel: ACPI: Early table checksum verification disabled
Jul 16 12:26:18.935455 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jul 16 12:26:18.935466 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935477 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935488 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935499 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jul 16 12:26:18.935509 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935520 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935534 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935545 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 16 12:26:18.935556 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jul 16 12:26:18.935567 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jul 16 12:26:18.935587 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jul 16 12:26:18.935598 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jul 16 12:26:18.935614 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jul 16 12:26:18.935628 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jul 16 12:26:18.935640 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jul 16 12:26:18.935651 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 16 12:26:18.935663 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 16 12:26:18.935674 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 16 12:26:18.935685 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jul 16 12:26:18.935697 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 16 12:26:18.935711 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jul 16 12:26:18.935723 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 16 12:26:18.935734 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jul 16 12:26:18.935746 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 16 12:26:18.935757 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jul 16 12:26:18.935768 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 16 12:26:18.935779 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jul 16 12:26:18.935791 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 16 12:26:18.935802 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jul 16 12:26:18.935813 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 16 12:26:18.935828 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jul 16 12:26:18.935840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 16 12:26:18.935851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 16 12:26:18.935863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jul 16 12:26:18.935875 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jul 16 12:26:18.935886 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jul 16 12:26:18.935898 kernel: Zone ranges:
Jul 16 12:26:18.935909 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 16 12:26:18.935921 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdbfff]
Jul 16 12:26:18.935936 kernel:   Normal   empty
Jul 16 12:26:18.935947 kernel: Movable zone start for each node
Jul 16 12:26:18.935959 kernel: Early memory node ranges
Jul 16 12:26:18.935970 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jul 16 12:26:18.935982 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jul 16 12:26:18.935993 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jul 16 12:26:18.936004 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 16 12:26:18.936016 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 16 12:26:18.936027 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jul 16 12:26:18.936055 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 16 12:26:18.936068 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 16 12:26:18.936079 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 16 12:26:18.936090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 16 12:26:18.936102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 16 12:26:18.936113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 16 12:26:18.936125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 16 12:26:18.936151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 16 12:26:18.936163 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 16 12:26:18.936179 kernel: TSC deadline timer available
Jul 16 12:26:18.936191 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jul 16 12:26:18.936202 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 16 12:26:18.936214 kernel: Booting paravirtualized kernel on KVM
Jul 16 12:26:18.936225 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 16 12:26:18.936237 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 16 12:26:18.936249 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 16 12:26:18.936260 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 16 12:26:18.936272 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 16 12:26:18.936286 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Jul 16 12:26:18.936298 kernel: kvm-guest: PV spinlocks enabled
Jul 16 12:26:18.936310 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 16 12:26:18.936321 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515804
Jul 16 12:26:18.936333 kernel: Policy zone: DMA32
Jul 16 12:26:18.936346 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 16 12:26:18.936358 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 16 12:26:18.936369 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 16 12:26:18.936385 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 16 12:26:18.936397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 16 12:26:18.936409 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 192524K reserved, 0K cma-reserved)
Jul 16 12:26:18.936421 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 16 12:26:18.936432 kernel: Kernel/User page tables isolation: enabled
Jul 16 12:26:18.936444 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 16 12:26:18.936455 kernel: ftrace: allocated 136 pages with 2 groups
Jul 16 12:26:18.936467 kernel: rcu: Hierarchical RCU implementation.
Jul 16 12:26:18.936479 kernel: rcu: 	RCU event tracing is enabled.
Jul 16 12:26:18.936495 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 16 12:26:18.936508 kernel: 	Rude variant of Tasks RCU enabled.
Jul 16 12:26:18.936520 kernel: 	Tracing variant of Tasks RCU enabled.
Jul 16 12:26:18.936532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 16 12:26:18.936543 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 16 12:26:18.936555 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jul 16 12:26:18.936567 kernel: random: crng init done
Jul 16 12:26:18.936589 kernel: Console: colour VGA+ 80x25
Jul 16 12:26:18.936601 kernel: printk: console [tty0] enabled
Jul 16 12:26:18.936613 kernel: printk: console [ttyS0] enabled
Jul 16 12:26:18.936625 kernel: ACPI: Core revision 20210730
Jul 16 12:26:18.936637 kernel: APIC: Switch to symmetric I/O mode setup
Jul 16 12:26:18.936653 kernel: x2apic enabled
Jul 16 12:26:18.936665 kernel: Switched APIC routing to physical x2apic.
Jul 16 12:26:18.936677 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jul 16 12:26:18.936690 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jul 16 12:26:18.936702 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 16 12:26:18.936718 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 16 12:26:18.936730 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 16 12:26:18.936742 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 16 12:26:18.936753 kernel: Spectre V2 : Mitigation: Retpolines
Jul 16 12:26:18.936765 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 16 12:26:18.936777 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 16 12:26:18.936789 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 16 12:26:18.936801 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 16 12:26:18.936813 kernel: MDS: Mitigation: Clear CPU buffers
Jul 16 12:26:18.936825 kernel: MMIO Stale Data: Unknown: No mitigations
Jul 16 12:26:18.936837 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 16 12:26:18.936852 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 16 12:26:18.936864 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 16 12:26:18.936877 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 16 12:26:18.936889 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 16 12:26:18.936900 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jul 16 12:26:18.936913 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 16 12:26:18.936925 kernel: Freeing SMP alternatives memory: 32K
Jul 16 12:26:18.936936 kernel: pid_max: default: 32768 minimum: 301
Jul 16 12:26:18.936948 kernel: LSM: Security Framework initializing
Jul 16 12:26:18.936960 kernel: SELinux:  Initializing.
Jul 16 12:26:18.936972 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 16 12:26:18.936988 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 16 12:26:18.937000 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jul 16 12:26:18.937012 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jul 16 12:26:18.937024 kernel: signal: max sigframe size: 1776
Jul 16 12:26:18.937037 kernel: rcu: Hierarchical SRCU implementation.
Jul 16 12:26:18.937062 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 16 12:26:18.937074 kernel: smp: Bringing up secondary CPUs ...
Jul 16 12:26:18.937086 kernel: x86: Booting SMP configuration:
Jul 16 12:26:18.937098 kernel: .... node  #0, CPUs:      #1
Jul 16 12:26:18.937115 kernel: kvm-clock: cpu 1, msr 6519b041, secondary cpu clock
Jul 16 12:26:18.937127 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 16 12:26:18.937155 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Jul 16 12:26:18.937168 kernel: smp: Brought up 1 node, 2 CPUs
Jul 16 12:26:18.937180 kernel: smpboot: Max logical packages: 16
Jul 16 12:26:18.937192 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jul 16 12:26:18.937204 kernel: devtmpfs: initialized
Jul 16 12:26:18.937216 kernel: x86/mm: Memory block size: 128MB
Jul 16 12:26:18.937229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 16 12:26:18.937241 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 16 12:26:18.937258 kernel: pinctrl core: initialized pinctrl subsystem
Jul 16 12:26:18.937270 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 16 12:26:18.937282 kernel: audit: initializing netlink subsys (disabled)
Jul 16 12:26:18.937294 kernel: audit: type=2000 audit(1752668778.332:1): state=initialized audit_enabled=0 res=1
Jul 16 12:26:18.937306 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 16 12:26:18.937318 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 16 12:26:18.937330 kernel: cpuidle: using governor menu
Jul 16 12:26:18.937342 kernel: ACPI: bus type PCI registered
Jul 16 12:26:18.937354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 16 12:26:18.937370 kernel: dca service started, version 1.12.1
Jul 16 12:26:18.937382 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 16 12:26:18.937395 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 16 12:26:18.937407 kernel: PCI: Using configuration type 1 for base access
Jul 16 12:26:18.937419 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 16 12:26:18.937431 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 16 12:26:18.937443 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 16 12:26:18.937455 kernel: ACPI: Added _OSI(Module Device)
Jul 16 12:26:18.937470 kernel: ACPI: Added _OSI(Processor Device)
Jul 16 12:26:18.937483 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 16 12:26:18.937495 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 16 12:26:18.937507 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 16 12:26:18.937519 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 16 12:26:18.937531 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 16 12:26:18.937544 kernel: ACPI: Interpreter enabled
Jul 16 12:26:18.937556 kernel: ACPI: PM: (supports S0 S5)
Jul 16 12:26:18.937568 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 16 12:26:18.937580 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 16 12:26:18.937596 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 16 12:26:18.937608 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 16 12:26:18.937880 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 12:26:18.938058 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 16 12:26:18.940472 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 16 12:26:18.940497 kernel: PCI host bridge to bus 0000:00
Jul 16 12:26:18.940681 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 16 12:26:18.940836 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 16 12:26:18.940979 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 16 12:26:18.941153 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 16 12:26:18.941302 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 16 12:26:18.941455 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jul 16 12:26:18.941609 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 16 12:26:18.941863 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 16 12:26:18.942074 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jul 16 12:26:18.942259 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jul 16 12:26:18.942416 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jul 16 12:26:18.942570 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jul 16 12:26:18.942736 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 16 12:26:18.942925 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.943109 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jul 16 12:26:18.943313 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.943474 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jul 16 12:26:18.943650 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.943807 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jul 16 12:26:18.943982 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.944177 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jul 16 12:26:18.944356 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.944511 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jul 16 12:26:18.944689 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.944870 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jul 16 12:26:18.945033 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.945227 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jul 16 12:26:18.945427 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 16 12:26:18.945585 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jul 16 12:26:18.945770 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 16 12:26:18.945926 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 16 12:26:18.946096 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jul 16 12:26:18.946275 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 16 12:26:18.946440 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jul 16 12:26:18.946618 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 16 12:26:18.946776 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 16 12:26:18.946932 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jul 16 12:26:18.947103 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jul 16 12:26:18.955336 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 16 12:26:18.955510 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 16 12:26:18.955702 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 16 12:26:18.955865 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jul 16 12:26:18.956024 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jul 16 12:26:18.956249 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 16 12:26:18.956411 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 16 12:26:18.956598 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jul 16 12:26:18.956770 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jul 16 12:26:18.956928 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 16 12:26:18.957098 kernel: pci 0000:00:02.0:   bridge window [mem 0xfd800000-0xfdbfffff]
Jul 16 12:26:18.957269 kernel: pci 0000:00:02.0:   bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 16 12:26:18.957474 kernel: pci_bus 0000:02: extended config space not accessible
Jul 16 12:26:18.957657 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jul 16 12:26:18.957838 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jul 16 12:26:18.958004 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 16 12:26:18.958227 kernel: pci 0000:01:00.0:   bridge window [mem 0xfd800000-0xfd9fffff]
Jul 16 12:26:18.958460 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 16 12:26:18.958630 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jul 16 12:26:18.958788 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 16 12:26:18.958950 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Jul 16 12:26:18.959146 kernel: pci 0000:00:02.1:   bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 16 12:26:18.959333 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 16 12:26:18.959500 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 16 12:26:18.959657 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 16 12:26:18.959817 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Jul 16 12:26:18.959976 kernel: pci 0000:00:02.2:   bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 16 12:26:18.960159 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 16 12:26:18.960318 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Jul 16 12:26:18.960482 kernel: pci 0000:00:02.3:   bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 16 12:26:18.960640 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 16 12:26:18.960797 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Jul 16 12:26:18.960954 kernel: pci 0000:00:02.4:   bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 16 12:26:18.961126 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 16 12:26:18.961305 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Jul 16 12:26:18.961473 kernel: pci 0000:00:02.5:   bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 16 12:26:18.961632 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 16 12:26:18.961796 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Jul 16 12:26:18.961953 kernel: pci 0000:00:02.6:   bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 16 12:26:18.962127 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 16 12:26:18.969344 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Jul 16 12:26:18.969506 kernel: pci 0000:00:02.7:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 16 12:26:18.969526 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 16 12:26:18.969540 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 16 12:26:18.969553 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 16 12:26:18.969572 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 16 12:26:18.969585 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 16 12:26:18.969597 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 16 12:26:18.969609 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 16 12:26:18.969622 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 16 12:26:18.969634 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 16 12:26:18.969646 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 16 12:26:18.969659 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 16 12:26:18.969671 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 16 12:26:18.969687 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 16 12:26:18.969700 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 16 12:26:18.969712 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 16 12:26:18.969724 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 16 12:26:18.969737 kernel: iommu: Default domain type: Translated
Jul 16 12:26:18.969749 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 16 12:26:18.969904 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 16 12:26:18.970074 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 16 12:26:18.970258 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 16 12:26:18.970277 kernel: vgaarb: loaded
Jul 16 12:26:18.970290 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 16 12:26:18.970303 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 16 12:26:18.970315 kernel: PTP clock support registered
Jul 16 12:26:18.970340 kernel: PCI: Using ACPI for IRQ routing
Jul 16 12:26:18.970352 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 16 12:26:18.970363 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 16 12:26:18.970375 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jul 16 12:26:18.970406 kernel: clocksource: Switched to clocksource kvm-clock
Jul 16 12:26:18.970418 kernel: VFS: Disk quotas dquot_6.6.0
Jul 16 12:26:18.970431 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 16 12:26:18.970443 kernel: pnp: PnP ACPI init
Jul 16 12:26:18.970652 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 16 12:26:18.970673 kernel: pnp: PnP ACPI: found 5 devices
Jul 16 12:26:18.970686 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 16 12:26:18.970699 kernel: NET: Registered PF_INET protocol family
Jul 16 12:26:18.970717 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 16 12:26:18.970730 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 16 12:26:18.970743 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 16 12:26:18.970755 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 16 12:26:18.970768 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 16 12:26:18.970780 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 16 12:26:18.970793 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 16 12:26:18.970805 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 16 12:26:18.970817 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 16 12:26:18.970834 kernel: NET: Registered PF_XDP protocol family
Jul 16 12:26:18.970988 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jul 16 12:26:18.971174 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 16 12:26:18.971331 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 16 12:26:18.971487 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 16 12:26:18.971640 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 16 12:26:18.971802 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 16 12:26:18.971958 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 16 12:26:18.972127 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 16 12:26:18.972303 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 16 12:26:18.972457 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 16 12:26:18.972610 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 16 12:26:18.972778 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 16 12:26:18.972940 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 16 12:26:18.973110 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 16 12:26:18.973287 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 16 12:26:18.973472 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 16 12:26:18.973669 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 16 12:26:18.973829 kernel: pci 0000:01:00.0:   bridge window [mem 0xfd800000-0xfd9fffff]
Jul 16 12:26:18.973983 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 16 12:26:18.974170 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Jul 16 12:26:18.974334 kernel: pci 0000:00:02.0:   bridge window [mem 0xfd800000-0xfdbfffff]
Jul 16 12:26:18.974489 kernel: pci 0000:00:02.0:   bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 16 12:26:18.974662 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 16 12:26:18.974849 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Jul 16 12:26:18.975005 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Jul 16 12:26:18.985671 kernel: pci 0000:00:02.1:   bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 16 12:26:18.985864 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 16 12:26:18.986026 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Jul 16 12:26:18.986220 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Jul 16 12:26:18.986378 kernel: pci 0000:00:02.2:   bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 16 12:26:18.986553 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 16 12:26:18.986710 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Jul 16 12:26:18.986866 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Jul 16 12:26:18.987020 kernel: pci 0000:00:02.3:   bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 16 12:26:18.987205 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 16 12:26:18.987363 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Jul 16 12:26:18.987521 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Jul 16 12:26:18.987700 kernel: pci 0000:00:02.4:   bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 16 12:26:18.987872 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 16 12:26:18.988061 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Jul 16 12:26:18.988238 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Jul 16 12:26:18.988409 kernel: pci 0000:00:02.5:   bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 16 12:26:18.988581 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 16 12:26:18.988759 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Jul 16 12:26:18.988930 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Jul 16 12:26:18.989116 kernel: pci 0000:00:02.6:   bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 16 12:26:18.989307 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 16 12:26:18.989481 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Jul 16 12:26:18.989640 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Jul 16 12:26:18.989818 kernel: pci 0000:00:02.7:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 16 12:26:18.989988 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 16 12:26:18.990179 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 16 12:26:18.990324 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 16 12:26:18.990465 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 16 12:26:18.990606 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 16 12:26:18.990761 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jul 16 12:26:18.990939 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 16 12:26:18.991124 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jul 16 12:26:18.991299 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 16 12:26:18.991474 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 16 12:26:18.991682 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jul 16 12:26:18.991849 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 16 12:26:18.992013 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 16 12:26:18.992283 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jul 16 12:26:18.992444 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jul 16 12:26:18.992593 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 16 12:26:18.992759 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jul 16 12:26:18.992908 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jul 16 12:26:18.993070 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 16 12:26:18.993250 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jul 16 12:26:18.993408 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jul 16 12:26:18.993556 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 16 12:26:18.993726 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jul 16 12:26:18.993874 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jul 16 12:26:18.994022 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 16 12:26:18.994226 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jul 16 12:26:18.994378 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jul 16 12:26:18.994534 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 16 12:26:18.994692 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jul 16 12:26:18.994841 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jul 16 12:26:18.994989 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 16 12:26:18.995009 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 16 12:26:18.995023 kernel: PCI: CLS 0 bytes, default 64
Jul 16 12:26:18.995037 kernel:
PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 16 12:26:18.995061 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 16 12:26:18.995088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 16 12:26:18.995102 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 16 12:26:18.995115 kernel: Initialise system trusted keyrings Jul 16 12:26:18.995128 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 16 12:26:18.995158 kernel: Key type asymmetric registered Jul 16 12:26:18.995171 kernel: Asymmetric key parser 'x509' registered Jul 16 12:26:18.995184 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 16 12:26:18.995197 kernel: io scheduler mq-deadline registered Jul 16 12:26:18.995210 kernel: io scheduler kyber registered Jul 16 12:26:18.995229 kernel: io scheduler bfq registered Jul 16 12:26:18.995389 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 16 12:26:18.995546 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 16 12:26:18.995703 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.995861 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 16 12:26:18.996019 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 16 12:26:18.996213 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.996378 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 16 12:26:18.996533 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 16 12:26:18.996688 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.996843 
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 16 12:26:18.996997 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 16 12:26:18.997184 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.997352 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 16 12:26:18.997510 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 16 12:26:18.997667 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.997847 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 16 12:26:18.998009 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 16 12:26:18.998201 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.998367 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 16 12:26:18.998531 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 16 12:26:18.998688 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:18.998846 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 16 12:26:18.999003 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 16 12:26:19.009237 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 16 12:26:19.009271 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 16 12:26:19.009287 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 16 12:26:19.009301 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 16 12:26:19.009314 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 16 12:26:19.009328 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, 
base_baud = 115200) is a 16550A Jul 16 12:26:19.009349 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 16 12:26:19.009362 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 16 12:26:19.009375 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 16 12:26:19.009602 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 16 12:26:19.009625 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 16 12:26:19.009773 kernel: rtc_cmos 00:03: registered as rtc0 Jul 16 12:26:19.009928 kernel: rtc_cmos 00:03: setting system clock to 2025-07-16T12:26:18 UTC (1752668778) Jul 16 12:26:19.010090 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 16 12:26:19.010110 kernel: intel_pstate: CPU model not supported Jul 16 12:26:19.010124 kernel: NET: Registered PF_INET6 protocol family Jul 16 12:26:19.010162 kernel: Segment Routing with IPv6 Jul 16 12:26:19.010176 kernel: In-situ OAM (IOAM) with IPv6 Jul 16 12:26:19.010189 kernel: NET: Registered PF_PACKET protocol family Jul 16 12:26:19.010202 kernel: Key type dns_resolver registered Jul 16 12:26:19.010215 kernel: IPI shorthand broadcast: enabled Jul 16 12:26:19.010229 kernel: sched_clock: Marking stable (966807880, 224378646)->(1465372693, -274186167) Jul 16 12:26:19.010242 kernel: registered taskstats version 1 Jul 16 12:26:19.010255 kernel: Loading compiled-in X.509 certificates Jul 16 12:26:19.010268 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289' Jul 16 12:26:19.010285 kernel: Key type .fscrypt registered Jul 16 12:26:19.010298 kernel: Key type fscrypt-provisioning registered Jul 16 12:26:19.010311 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 16 12:26:19.010342 kernel: ima: Allocated hash algorithm: sha1
Jul 16 12:26:19.010354 kernel: ima: No architecture policies found
Jul 16 12:26:19.010367 kernel: clk: Disabling unused clocks
Jul 16 12:26:19.010380 kernel: Freeing unused kernel image (initmem) memory: 47476K
Jul 16 12:26:19.010392 kernel: Write protecting the kernel read-only data: 28672k
Jul 16 12:26:19.010414 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 16 12:26:19.010430 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Jul 16 12:26:19.010456 kernel: Run /init as init process
Jul 16 12:26:19.010480 kernel: with arguments:
Jul 16 12:26:19.010493 kernel: /init
Jul 16 12:26:19.010517 kernel: with environment:
Jul 16 12:26:19.010529 kernel: HOME=/
Jul 16 12:26:19.010541 kernel: TERM=linux
Jul 16 12:26:19.010553 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 16 12:26:19.010589 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 16 12:26:19.010612 systemd[1]: Detected virtualization kvm.
Jul 16 12:26:19.010627 systemd[1]: Detected architecture x86-64.
Jul 16 12:26:19.010641 systemd[1]: Running in initrd.
Jul 16 12:26:19.010654 systemd[1]: No hostname configured, using default hostname.
Jul 16 12:26:19.010668 systemd[1]: Hostname set to .
Jul 16 12:26:19.010682 systemd[1]: Initializing machine ID from VM UUID.
Jul 16 12:26:19.010696 systemd[1]: Queued start job for default target initrd.target.
Jul 16 12:26:19.010714 systemd[1]: Started systemd-ask-password-console.path.
Jul 16 12:26:19.010728 systemd[1]: Reached target cryptsetup.target.
Jul 16 12:26:19.010742 systemd[1]: Reached target paths.target.
Jul 16 12:26:19.010756 systemd[1]: Reached target slices.target.
Jul 16 12:26:19.010770 systemd[1]: Reached target swap.target.
Jul 16 12:26:19.010783 systemd[1]: Reached target timers.target.
Jul 16 12:26:19.010798 systemd[1]: Listening on iscsid.socket.
Jul 16 12:26:19.010812 systemd[1]: Listening on iscsiuio.socket.
Jul 16 12:26:19.010837 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 16 12:26:19.010852 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 16 12:26:19.010866 systemd[1]: Listening on systemd-journald.socket.
Jul 16 12:26:19.010880 systemd[1]: Listening on systemd-networkd.socket.
Jul 16 12:26:19.010899 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 16 12:26:19.010918 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 16 12:26:19.010932 systemd[1]: Reached target sockets.target.
Jul 16 12:26:19.010946 systemd[1]: Starting kmod-static-nodes.service...
Jul 16 12:26:19.010960 systemd[1]: Finished network-cleanup.service.
Jul 16 12:26:19.010978 systemd[1]: Starting systemd-fsck-usr.service...
Jul 16 12:26:19.010992 systemd[1]: Starting systemd-journald.service...
Jul 16 12:26:19.011006 systemd[1]: Starting systemd-modules-load.service...
Jul 16 12:26:19.011020 systemd[1]: Starting systemd-resolved.service...
Jul 16 12:26:19.011054 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 16 12:26:19.011069 systemd[1]: Finished kmod-static-nodes.service.
Jul 16 12:26:19.011083 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 16 12:26:19.011106 systemd-journald[201]: Journal started
Jul 16 12:26:19.011195 systemd-journald[201]: Runtime Journal (/run/log/journal/a5b3df5e3c994d03be93d3192a2dcf7d) is 4.7M, max 38.1M, 33.3M free.
Jul 16 12:26:18.930174 systemd-modules-load[202]: Inserted module 'overlay'
Jul 16 12:26:19.028319 kernel: Bridge firewalling registered
Jul 16 12:26:18.986549 systemd-resolved[203]: Positive Trust Anchors:
Jul 16 12:26:19.044487 systemd[1]: Started systemd-resolved.service.
Jul 16 12:26:19.044518 kernel: audit: type=1130 audit(1752668779.028:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.044539 systemd[1]: Started systemd-journald.service.
Jul 16 12:26:19.044566 kernel: audit: type=1130 audit(1752668779.036:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.044585 kernel: SCSI subsystem initialized
Jul 16 12:26:19.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:18.986572 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 16 12:26:18.986617 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 16 12:26:18.990505 systemd-resolved[203]: Defaulting to hostname 'linux'.
Jul 16 12:26:19.014059 systemd-modules-load[202]: Inserted module 'br_netfilter'
Jul 16 12:26:19.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.050716 systemd[1]: Finished systemd-fsck-usr.service.
Jul 16 12:26:19.079362 kernel: audit: type=1130 audit(1752668779.050:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.079394 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 16 12:26:19.079423 kernel: audit: type=1130 audit(1752668779.051:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.079442 kernel: device-mapper: uevent: version 1.0.3
Jul 16 12:26:19.079459 kernel: audit: type=1130 audit(1752668779.052:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.079477 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 16 12:26:19.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.051581 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 16 12:26:19.052386 systemd[1]: Reached target nss-lookup.target.
Jul 16 12:26:19.059848 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 16 12:26:19.077734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 16 12:26:19.090934 systemd-modules-load[202]: Inserted module 'dm_multipath'
Jul 16 12:26:19.092760 systemd[1]: Finished systemd-modules-load.service.
Jul 16 12:26:19.110318 kernel: audit: type=1130 audit(1752668779.093:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.110350 kernel: audit: type=1130 audit(1752668779.105:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.099322 systemd[1]: Starting systemd-sysctl.service...
Jul 16 12:26:19.104844 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 16 12:26:19.105706 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 16 12:26:19.117876 kernel: audit: type=1130 audit(1752668779.112:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.112953 systemd[1]: Finished systemd-sysctl.service.
Jul 16 12:26:19.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.119718 systemd[1]: Starting dracut-cmdline.service...
Jul 16 12:26:19.138050 kernel: audit: type=1130 audit(1752668779.118:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.142748 dracut-cmdline[224]: dracut-dracut-053
Jul 16 12:26:19.145975 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 16 12:26:19.232171 kernel: Loading iSCSI transport class v2.0-870.
Jul 16 12:26:19.253149 kernel: iscsi: registered transport (tcp)
Jul 16 12:26:19.282747 kernel: iscsi: registered transport (qla4xxx)
Jul 16 12:26:19.282810 kernel: QLogic iSCSI HBA Driver
Jul 16 12:26:19.331007 systemd[1]: Finished dracut-cmdline.service.
Jul 16 12:26:19.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.333060 systemd[1]: Starting dracut-pre-udev.service...
Jul 16 12:26:19.393199 kernel: raid6: sse2x4 gen() 13265 MB/s
Jul 16 12:26:19.411172 kernel: raid6: sse2x4 xor() 7576 MB/s
Jul 16 12:26:19.429169 kernel: raid6: sse2x2 gen() 9331 MB/s
Jul 16 12:26:19.447173 kernel: raid6: sse2x2 xor() 7871 MB/s
Jul 16 12:26:19.465185 kernel: raid6: sse2x1 gen() 9938 MB/s
Jul 16 12:26:19.483820 kernel: raid6: sse2x1 xor() 6985 MB/s
Jul 16 12:26:19.483859 kernel: raid6: using algorithm sse2x4 gen() 13265 MB/s
Jul 16 12:26:19.483883 kernel: raid6: .... xor() 7576 MB/s, rmw enabled
Jul 16 12:26:19.485155 kernel: raid6: using ssse3x2 recovery algorithm
Jul 16 12:26:19.503174 kernel: xor: automatically using best checksumming function avx
Jul 16 12:26:19.621226 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul 16 12:26:19.634927 systemd[1]: Finished dracut-pre-udev.service.
Jul 16 12:26:19.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.636000 audit: BPF prog-id=7 op=LOAD
Jul 16 12:26:19.636000 audit: BPF prog-id=8 op=LOAD
Jul 16 12:26:19.636801 systemd[1]: Starting systemd-udevd.service...
Jul 16 12:26:19.654152 systemd-udevd[401]: Using default interface naming scheme 'v252'.
Jul 16 12:26:19.662441 systemd[1]: Started systemd-udevd.service.
Jul 16 12:26:19.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.667855 systemd[1]: Starting dracut-pre-trigger.service...
Jul 16 12:26:19.685980 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jul 16 12:26:19.729055 systemd[1]: Finished dracut-pre-trigger.service.
Jul 16 12:26:19.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.731235 systemd[1]: Starting systemd-udev-trigger.service...
Jul 16 12:26:19.824744 systemd[1]: Finished systemd-udev-trigger.service.
Jul 16 12:26:19.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:19.930178 kernel: ACPI: bus type USB registered
Jul 16 12:26:19.933924 kernel: usbcore: registered new interface driver usbfs
Jul 16 12:26:19.933976 kernel: usbcore: registered new interface driver hub
Jul 16 12:26:19.935381 kernel: usbcore: registered new device driver usb
Jul 16 12:26:19.943201 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 16 12:26:20.009636 kernel: cryptd: max_cpu_qlen set to 1000
Jul 16 12:26:20.009677 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 16 12:26:20.009706 kernel: GPT:17805311 != 125829119
Jul 16 12:26:20.009725 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 16 12:26:20.009742 kernel: GPT:17805311 != 125829119
Jul 16 12:26:20.009767 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 16 12:26:20.009784 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 16 12:26:20.009801 kernel: AVX version of gcm_enc/dec engaged.
Jul 16 12:26:20.009818 kernel: AES CTR mode by8 optimization enabled
Jul 16 12:26:20.009882 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jul 16 12:26:20.018821 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jul 16 12:26:20.019050 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jul 16 12:26:20.019277 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jul 16 12:26:20.019471 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jul 16 12:26:20.019664 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jul 16 12:26:20.019874 kernel: hub 1-0:1.0: USB hub found
Jul 16 12:26:20.020155 kernel: hub 1-0:1.0: 4 ports detected
Jul 16 12:26:20.020380 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 16 12:26:20.020614 kernel: hub 2-0:1.0: USB hub found
Jul 16 12:26:20.020854 kernel: hub 2-0:1.0: 4 ports detected
Jul 16 12:26:20.036163 kernel: libata version 3.00 loaded.
Jul 16 12:26:20.053170 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (451)
Jul 16 12:26:20.063328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 16 12:26:20.159456 kernel: ahci 0000:00:1f.2: version 3.0
Jul 16 12:26:20.159774 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 16 12:26:20.159805 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 16 12:26:20.160058 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 16 12:26:20.160265 kernel: scsi host0: ahci
Jul 16 12:26:20.160497 kernel: scsi host1: ahci
Jul 16 12:26:20.160718 kernel: scsi host2: ahci
Jul 16 12:26:20.160944 kernel: scsi host3: ahci
Jul 16 12:26:20.161204 kernel: scsi host4: ahci
Jul 16 12:26:20.161414 kernel: scsi host5: ahci
Jul 16 12:26:20.161616 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jul 16 12:26:20.161649 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jul 16 12:26:20.161681 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jul 16 12:26:20.161713 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jul 16 12:26:20.161735 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jul 16 12:26:20.161751 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jul 16 12:26:20.158510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 16 12:26:20.165247 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 16 12:26:20.178049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 16 12:26:20.183461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 16 12:26:20.185614 systemd[1]: Starting disk-uuid.service...
Jul 16 12:26:20.197181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 16 12:26:20.202643 disk-uuid[528]: Primary Header is updated.
Jul 16 12:26:20.202643 disk-uuid[528]: Secondary Entries is updated.
Jul 16 12:26:20.202643 disk-uuid[528]: Secondary Header is updated.
Jul 16 12:26:20.259165 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jul 16 12:26:20.401179 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 16 12:26:20.408150 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 16 12:26:20.408190 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 16 12:26:20.411552 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 16 12:26:20.411601 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 16 12:26:20.413273 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 16 12:26:20.414996 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 16 12:26:20.423715 kernel: usbcore: registered new interface driver usbhid
Jul 16 12:26:20.423750 kernel: usbhid: USB HID core driver
Jul 16 12:26:20.435633 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jul 16 12:26:20.435689 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jul 16 12:26:21.212119 disk-uuid[529]: The operation has completed successfully.
Jul 16 12:26:21.213419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 16 12:26:21.273343 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 16 12:26:21.273488 systemd[1]: Finished disk-uuid.service.
Jul 16 12:26:21.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:21.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:21.275391 systemd[1]: Starting verity-setup.service...
Jul 16 12:26:21.301176 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jul 16 12:26:21.356065 systemd[1]: Found device dev-mapper-usr.device. Jul 16 12:26:21.357758 systemd[1]: Finished verity-setup.service. Jul 16 12:26:21.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.359606 systemd[1]: Mounting sysusr-usr.mount... Jul 16 12:26:21.452159 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 16 12:26:21.453391 systemd[1]: Mounted sysusr-usr.mount. Jul 16 12:26:21.454880 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 16 12:26:21.456942 systemd[1]: Starting ignition-setup.service... Jul 16 12:26:21.459506 systemd[1]: Starting parse-ip-for-networkd.service... Jul 16 12:26:21.475178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 12:26:21.475244 kernel: BTRFS info (device vda6): using free space tree Jul 16 12:26:21.475264 kernel: BTRFS info (device vda6): has skinny extents Jul 16 12:26:21.490999 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 16 12:26:21.499224 systemd[1]: Finished ignition-setup.service. Jul 16 12:26:21.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.501038 systemd[1]: Starting ignition-fetch-offline.service... Jul 16 12:26:21.606036 systemd[1]: Finished parse-ip-for-networkd.service. Jul 16 12:26:21.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:21.608000 audit: BPF prog-id=9 op=LOAD Jul 16 12:26:21.608881 systemd[1]: Starting systemd-networkd.service... Jul 16 12:26:21.642535 systemd-networkd[711]: lo: Link UP Jul 16 12:26:21.642548 systemd-networkd[711]: lo: Gained carrier Jul 16 12:26:21.643981 systemd-networkd[711]: Enumeration completed Jul 16 12:26:21.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.644093 systemd[1]: Started systemd-networkd.service. Jul 16 12:26:21.645087 systemd[1]: Reached target network.target. Jul 16 12:26:21.646662 systemd[1]: Starting iscsiuio.service... Jul 16 12:26:21.654894 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 12:26:21.659395 systemd-networkd[711]: eth0: Link UP Jul 16 12:26:21.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.659412 systemd-networkd[711]: eth0: Gained carrier Jul 16 12:26:21.675265 ignition[625]: Ignition 2.14.0 Jul 16 12:26:21.660009 systemd[1]: Started iscsiuio.service. Jul 16 12:26:21.675296 ignition[625]: Stage: fetch-offline Jul 16 12:26:21.676278 systemd[1]: Starting iscsid.service... 
Jul 16 12:26:21.675435 ignition[625]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:21.675481 ignition[625]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:21.677300 ignition[625]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:21.685548 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 16 12:26:21.685548 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 16 12:26:21.685548 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 16 12:26:21.685548 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 16 12:26:21.685548 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 16 12:26:21.685548 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 16 12:26:21.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:21.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.678317 ignition[625]: parsed url from cmdline: "" Jul 16 12:26:21.684890 systemd[1]: Started iscsid.service. Jul 16 12:26:21.678325 ignition[625]: no config URL provided Jul 16 12:26:21.686485 systemd[1]: Finished ignition-fetch-offline.service. Jul 16 12:26:21.678335 ignition[625]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 12:26:21.689102 systemd[1]: Starting dracut-initqueue.service... Jul 16 12:26:21.678352 ignition[625]: no config at "/usr/lib/ignition/user.ign" Jul 16 12:26:21.693097 systemd[1]: Starting ignition-fetch.service... Jul 16 12:26:21.678361 ignition[625]: failed to fetch config: resource requires networking Jul 16 12:26:21.699265 systemd-networkd[711]: eth0: DHCPv4 address 10.230.12.42/30, gateway 10.230.12.41 acquired from 10.230.12.41 Jul 16 12:26:21.678913 ignition[625]: Ignition finished successfully Jul 16 12:26:21.708244 systemd[1]: Finished dracut-initqueue.service. Jul 16 12:26:21.711166 ignition[718]: Ignition 2.14.0 Jul 16 12:26:21.709254 systemd[1]: Reached target remote-fs-pre.target. Jul 16 12:26:21.711178 ignition[718]: Stage: fetch Jul 16 12:26:21.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.709855 systemd[1]: Reached target remote-cryptsetup.target. Jul 16 12:26:21.711343 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:21.710489 systemd[1]: Reached target remote-fs.target. 
Jul 16 12:26:21.711379 ignition[718]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:21.712224 systemd[1]: Starting dracut-pre-mount.service... Jul 16 12:26:21.713084 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:21.725852 systemd[1]: Finished dracut-pre-mount.service. Jul 16 12:26:21.713290 ignition[718]: parsed url from cmdline: "" Jul 16 12:26:21.713311 ignition[718]: no config URL provided Jul 16 12:26:21.713338 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 12:26:21.713355 ignition[718]: no config at "/usr/lib/ignition/user.ign" Jul 16 12:26:21.729357 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 16 12:26:21.729410 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 16 12:26:21.731269 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 16 12:26:21.749542 ignition[718]: GET result: OK Jul 16 12:26:21.749950 ignition[718]: parsing config with SHA512: a8d5413b99cfea280128e7d95a51fe859929aa64aa5a5fd6dff0b79318d94882e992ebf42d930b638ded403f39cdd0c2de5a68e42a6de531f0c56ea52ad13be1 Jul 16 12:26:21.762721 unknown[718]: fetched base config from "system" Jul 16 12:26:21.763468 ignition[718]: fetch: fetch complete Jul 16 12:26:21.762737 unknown[718]: fetched base config from "system" Jul 16 12:26:21.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.763477 ignition[718]: fetch: fetch passed Jul 16 12:26:21.762745 unknown[718]: fetched user config from "openstack" Jul 16 12:26:21.763539 ignition[718]: Ignition finished successfully Jul 16 12:26:21.765872 systemd[1]: Finished ignition-fetch.service. 
Jul 16 12:26:21.768072 systemd[1]: Starting ignition-kargs.service... Jul 16 12:26:21.780156 ignition[736]: Ignition 2.14.0 Jul 16 12:26:21.780175 ignition[736]: Stage: kargs Jul 16 12:26:21.780341 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:21.780376 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:21.781649 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:21.783384 ignition[736]: kargs: kargs passed Jul 16 12:26:21.784447 systemd[1]: Finished ignition-kargs.service. Jul 16 12:26:21.783454 ignition[736]: Ignition finished successfully Jul 16 12:26:21.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.787201 systemd[1]: Starting ignition-disks.service... Jul 16 12:26:21.796704 ignition[742]: Ignition 2.14.0 Jul 16 12:26:21.796735 ignition[742]: Stage: disks Jul 16 12:26:21.796880 ignition[742]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:21.796912 ignition[742]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:21.798189 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:21.799863 ignition[742]: disks: disks passed Jul 16 12:26:21.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.800744 systemd[1]: Finished ignition-disks.service. 
Jul 16 12:26:21.799923 ignition[742]: Ignition finished successfully Jul 16 12:26:21.801854 systemd[1]: Reached target initrd-root-device.target. Jul 16 12:26:21.802577 systemd[1]: Reached target local-fs-pre.target. Jul 16 12:26:21.803796 systemd[1]: Reached target local-fs.target. Jul 16 12:26:21.805157 systemd[1]: Reached target sysinit.target. Jul 16 12:26:21.806497 systemd[1]: Reached target basic.target. Jul 16 12:26:21.808837 systemd[1]: Starting systemd-fsck-root.service... Jul 16 12:26:21.828620 systemd-fsck[750]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks Jul 16 12:26:21.833872 systemd[1]: Finished systemd-fsck-root.service. Jul 16 12:26:21.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.835714 systemd[1]: Mounting sysroot.mount... Jul 16 12:26:21.846175 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 16 12:26:21.846533 systemd[1]: Mounted sysroot.mount. Jul 16 12:26:21.848015 systemd[1]: Reached target initrd-root-fs.target. Jul 16 12:26:21.851042 systemd[1]: Mounting sysroot-usr.mount... Jul 16 12:26:21.853095 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 16 12:26:21.854824 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 16 12:26:21.855597 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 16 12:26:21.855637 systemd[1]: Reached target ignition-diskful.target. Jul 16 12:26:21.857794 systemd[1]: Mounted sysroot-usr.mount. Jul 16 12:26:21.859646 systemd[1]: Starting initrd-setup-root.service... 
Jul 16 12:26:21.867418 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Jul 16 12:26:21.880754 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory Jul 16 12:26:21.892239 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Jul 16 12:26:21.901793 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Jul 16 12:26:21.978266 systemd[1]: Finished initrd-setup-root.service. Jul 16 12:26:21.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:21.980595 systemd[1]: Starting ignition-mount.service... Jul 16 12:26:21.982408 systemd[1]: Starting sysroot-boot.service... Jul 16 12:26:21.992214 bash[804]: umount: /sysroot/usr/share/oem: not mounted. Jul 16 12:26:22.004833 ignition[805]: INFO : Ignition 2.14.0 Jul 16 12:26:22.004833 ignition[805]: INFO : Stage: mount Jul 16 12:26:22.006587 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:22.006587 ignition[805]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:22.006587 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:22.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:22.011101 ignition[805]: INFO : mount: mount passed Jul 16 12:26:22.011101 ignition[805]: INFO : Ignition finished successfully Jul 16 12:26:22.009400 systemd[1]: Finished ignition-mount.service. 
Jul 16 12:26:22.022602 coreos-metadata[756]: Jul 16 12:26:22.022 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 16 12:26:22.033237 systemd[1]: Finished sysroot-boot.service. Jul 16 12:26:22.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:22.038813 coreos-metadata[756]: Jul 16 12:26:22.038 INFO Fetch successful Jul 16 12:26:22.039732 coreos-metadata[756]: Jul 16 12:26:22.039 INFO wrote hostname srv-j7d31.gb1.brightbox.com to /sysroot/etc/hostname Jul 16 12:26:22.042578 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 16 12:26:22.042742 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 16 12:26:22.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:22.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:22.379489 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 16 12:26:22.406180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (813) Jul 16 12:26:22.411200 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 12:26:22.411245 kernel: BTRFS info (device vda6): using free space tree Jul 16 12:26:22.411265 kernel: BTRFS info (device vda6): has skinny extents Jul 16 12:26:22.419057 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 16 12:26:22.421076 systemd[1]: Starting ignition-files.service... 
Jul 16 12:26:22.444529 ignition[833]: INFO : Ignition 2.14.0 Jul 16 12:26:22.444529 ignition[833]: INFO : Stage: files Jul 16 12:26:22.446310 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:22.446310 ignition[833]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:22.446310 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:22.449853 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Jul 16 12:26:22.451197 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 16 12:26:22.451197 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 16 12:26:22.455371 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 16 12:26:22.456567 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 16 12:26:22.458825 unknown[833]: wrote ssh authorized keys file for user: core Jul 16 12:26:22.459853 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 16 12:26:22.461992 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 16 12:26:22.463966 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 16 12:26:22.465645 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 16 12:26:22.466931 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 16 12:26:22.826175 ignition[833]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 16 12:26:23.598988 systemd-networkd[711]: eth0: Gained IPv6LL Jul 16 12:26:25.105991 systemd-networkd[711]: eth0: Ignoring DHCPv6 address 2a02:1348:179:830a:24:19ff:fee6:c2a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:830a:24:19ff:fee6:c2a/64 assigned by NDisc. Jul 16 12:26:25.106009 systemd-networkd[711]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 16 12:26:25.743547 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 16 12:26:25.746534 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 16 12:26:25.746534 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 16 12:26:26.330092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 16 12:26:26.577047 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 12:26:26.578403 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 12:26:26.587092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 12:26:26.587092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 12:26:26.587092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:26:26.587092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:26:26.587092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:26:26.587092 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 16 12:26:27.282319 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 16 12:26:28.991741 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 12:26:28.991741 ignition[833]: INFO : files: 
op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 16 12:26:28.991741 ignition[833]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 16 12:26:28.991741 ignition[833]: INFO : files: op(e): [started] processing unit "containerd.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(e): [finished] processing unit "containerd.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jul 16 12:26:28.996743 ignition[833]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jul 16 12:26:29.024017 kernel: kauditd_printk_skb: 28 callbacks suppressed Jul 16 12:26:29.024056 kernel: audit: type=1130 
audit(1752668789.008:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.005138 systemd[1]: Finished ignition-files.service. Jul 16 12:26:29.025014 ignition[833]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 16 12:26:29.025014 ignition[833]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 16 12:26:29.025014 ignition[833]: INFO : files: files passed Jul 16 12:26:29.025014 ignition[833]: INFO : Ignition finished successfully Jul 16 12:26:29.009903 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 16 12:26:29.019879 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 16 12:26:29.036828 kernel: audit: type=1130 audit(1752668789.030:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:29.036905 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 12:26:29.047931 kernel: audit: type=1130 audit(1752668789.037:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.047965 kernel: audit: type=1131 audit(1752668789.037:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.021094 systemd[1]: Starting ignition-quench.service... Jul 16 12:26:29.029567 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 16 12:26:29.030991 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 16 12:26:29.031111 systemd[1]: Finished ignition-quench.service. Jul 16 12:26:29.037635 systemd[1]: Reached target ignition-complete.target. Jul 16 12:26:29.049542 systemd[1]: Starting initrd-parse-etc.service... Jul 16 12:26:29.068385 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 16 12:26:29.068573 systemd[1]: Finished initrd-parse-etc.service. Jul 16 12:26:29.092974 kernel: audit: type=1130 audit(1752668789.069:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:29.093006 kernel: audit: type=1131 audit(1752668789.069:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.070051 systemd[1]: Reached target initrd-fs.target. Jul 16 12:26:29.093635 systemd[1]: Reached target initrd.target. Jul 16 12:26:29.094918 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 16 12:26:29.096027 systemd[1]: Starting dracut-pre-pivot.service... Jul 16 12:26:29.120183 kernel: audit: type=1130 audit(1752668789.112:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.112407 systemd[1]: Finished dracut-pre-pivot.service. Jul 16 12:26:29.114052 systemd[1]: Starting initrd-cleanup.service... Jul 16 12:26:29.128653 systemd[1]: Stopped target nss-lookup.target. Jul 16 12:26:29.130174 systemd[1]: Stopped target remote-cryptsetup.target. Jul 16 12:26:29.131747 systemd[1]: Stopped target timers.target. Jul 16 12:26:29.133257 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 16 12:26:29.134206 systemd[1]: Stopped dracut-pre-pivot.service. 
Jul 16 12:26:29.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.135964 systemd[1]: Stopped target initrd.target. Jul 16 12:26:29.141366 kernel: audit: type=1131 audit(1752668789.135:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.142025 systemd[1]: Stopped target basic.target. Jul 16 12:26:29.142820 systemd[1]: Stopped target ignition-complete.target. Jul 16 12:26:29.144128 systemd[1]: Stopped target ignition-diskful.target. Jul 16 12:26:29.145408 systemd[1]: Stopped target initrd-root-device.target. Jul 16 12:26:29.146665 systemd[1]: Stopped target remote-fs.target. Jul 16 12:26:29.147907 systemd[1]: Stopped target remote-fs-pre.target. Jul 16 12:26:29.149199 systemd[1]: Stopped target sysinit.target. Jul 16 12:26:29.150471 systemd[1]: Stopped target local-fs.target. Jul 16 12:26:29.151781 systemd[1]: Stopped target local-fs-pre.target. Jul 16 12:26:29.152975 systemd[1]: Stopped target swap.target. Jul 16 12:26:29.154161 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 16 12:26:29.160541 kernel: audit: type=1131 audit(1752668789.155:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.154375 systemd[1]: Stopped dracut-pre-mount.service. Jul 16 12:26:29.155556 systemd[1]: Stopped target cryptsetup.target. 
Jul 16 12:26:29.167599 kernel: audit: type=1131 audit(1752668789.162:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.161331 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 16 12:26:29.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.161569 systemd[1]: Stopped dracut-initqueue.service. Jul 16 12:26:29.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.162677 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 16 12:26:29.162914 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 16 12:26:29.168517 systemd[1]: ignition-files.service: Deactivated successfully. Jul 16 12:26:29.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.168732 systemd[1]: Stopped ignition-files.service. Jul 16 12:26:29.170963 systemd[1]: Stopping ignition-mount.service... Jul 16 12:26:29.173111 systemd[1]: Stopping iscsiuio.service... 
Jul 16 12:26:29.177116 systemd[1]: Stopping sysroot-boot.service... Jul 16 12:26:29.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.177830 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 16 12:26:29.178080 systemd[1]: Stopped systemd-udev-trigger.service. Jul 16 12:26:29.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.179265 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 16 12:26:29.179474 systemd[1]: Stopped dracut-pre-trigger.service. Jul 16 12:26:29.185397 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 16 12:26:29.186417 systemd[1]: Stopped iscsiuio.service. Jul 16 12:26:29.189854 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 16 12:26:29.189973 systemd[1]: Finished initrd-cleanup.service. Jul 16 12:26:29.200486 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 16 12:26:29.206125 ignition[871]: INFO : Ignition 2.14.0 Jul 16 12:26:29.206125 ignition[871]: INFO : Stage: umount Jul 16 12:26:29.206125 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 16 12:26:29.206125 ignition[871]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 16 12:26:29.206125 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 16 12:26:29.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.206686 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 16 12:26:29.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:29.217610 ignition[871]: INFO : umount: umount passed Jul 16 12:26:29.217610 ignition[871]: INFO : Ignition finished successfully Jul 16 12:26:29.206857 systemd[1]: Stopped sysroot-boot.service. Jul 16 12:26:29.208537 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 16 12:26:29.208684 systemd[1]: Stopped ignition-mount.service. Jul 16 12:26:29.209474 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 16 12:26:29.209534 systemd[1]: Stopped ignition-disks.service. Jul 16 12:26:29.210692 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 16 12:26:29.210756 systemd[1]: Stopped ignition-kargs.service. Jul 16 12:26:29.211966 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 16 12:26:29.212024 systemd[1]: Stopped ignition-fetch.service. Jul 16 12:26:29.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.213458 systemd[1]: Stopped target network.target. Jul 16 12:26:29.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.215349 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 16 12:26:29.215410 systemd[1]: Stopped ignition-fetch-offline.service. Jul 16 12:26:29.216870 systemd[1]: Stopped target paths.target. Jul 16 12:26:29.218223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 16 12:26:29.221201 systemd[1]: Stopped systemd-ask-password-console.path. Jul 16 12:26:29.222500 systemd[1]: Stopped target slices.target. Jul 16 12:26:29.223810 systemd[1]: Stopped target sockets.target. Jul 16 12:26:29.225117 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 16 12:26:29.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.225187 systemd[1]: Closed iscsid.socket. Jul 16 12:26:29.226471 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 16 12:26:29.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.226522 systemd[1]: Closed iscsiuio.socket. Jul 16 12:26:29.227651 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 16 12:26:29.245000 audit: BPF prog-id=6 op=UNLOAD Jul 16 12:26:29.227712 systemd[1]: Stopped ignition-setup.service. Jul 16 12:26:29.228963 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 16 12:26:29.229022 systemd[1]: Stopped initrd-setup-root.service. Jul 16 12:26:29.231279 systemd[1]: Stopping systemd-networkd.service... Jul 16 12:26:29.233525 systemd[1]: Stopping systemd-resolved.service... Jul 16 12:26:29.235234 systemd-networkd[711]: eth0: DHCPv6 lease lost Jul 16 12:26:29.249000 audit: BPF prog-id=9 op=UNLOAD Jul 16 12:26:29.237693 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 16 12:26:29.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.237871 systemd[1]: Stopped systemd-networkd.service. Jul 16 12:26:29.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.241498 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 16 12:26:29.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.241698 systemd[1]: Stopped systemd-resolved.service. Jul 16 12:26:29.244085 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 16 12:26:29.244176 systemd[1]: Closed systemd-networkd.socket. Jul 16 12:26:29.246218 systemd[1]: Stopping network-cleanup.service... Jul 16 12:26:29.250302 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 16 12:26:29.250428 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 16 12:26:29.251975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 16 12:26:29.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.252047 systemd[1]: Stopped systemd-sysctl.service. Jul 16 12:26:29.253660 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 16 12:26:29.253741 systemd[1]: Stopped systemd-modules-load.service. Jul 16 12:26:29.255036 systemd[1]: Stopping systemd-udevd.service... Jul 16 12:26:29.257893 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 16 12:26:29.260186 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 16 12:26:29.260425 systemd[1]: Stopped systemd-udevd.service. Jul 16 12:26:29.263218 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 16 12:26:29.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.263355 systemd[1]: Closed systemd-udevd-control.socket. 
Jul 16 12:26:29.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.281013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 16 12:26:29.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.281071 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 16 12:26:29.282421 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 16 12:26:29.282485 systemd[1]: Stopped dracut-pre-udev.service. Jul 16 12:26:29.283744 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 16 12:26:29.283830 systemd[1]: Stopped dracut-cmdline.service. Jul 16 12:26:29.285224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 16 12:26:29.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:29.285285 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 16 12:26:29.287552 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 16 12:26:29.288304 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 16 12:26:29.288397 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 16 12:26:29.289244 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 16 12:26:29.289305 systemd[1]: Stopped kmod-static-nodes.service. Jul 16 12:26:29.289975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 12:26:29.290042 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 16 12:26:29.292364 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 16 12:26:29.293110 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 16 12:26:29.293282 systemd[1]: Stopped network-cleanup.service. Jul 16 12:26:29.298607 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 16 12:26:29.298735 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 16 12:26:29.300153 systemd[1]: Reached target initrd-switch-root.target. Jul 16 12:26:29.302391 systemd[1]: Starting initrd-switch-root.service... Jul 16 12:26:29.312631 systemd[1]: Switching root. 
Jul 16 12:26:29.315000 audit: BPF prog-id=8 op=UNLOAD Jul 16 12:26:29.315000 audit: BPF prog-id=7 op=UNLOAD Jul 16 12:26:29.317000 audit: BPF prog-id=5 op=UNLOAD Jul 16 12:26:29.317000 audit: BPF prog-id=4 op=UNLOAD Jul 16 12:26:29.317000 audit: BPF prog-id=3 op=UNLOAD Jul 16 12:26:29.337833 iscsid[716]: iscsid shutting down. Jul 16 12:26:29.338603 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jul 16 12:26:29.338682 systemd-journald[201]: Journal stopped Jul 16 12:26:33.531255 kernel: SELinux: Class mctp_socket not defined in policy. Jul 16 12:26:33.531364 kernel: SELinux: Class anon_inode not defined in policy. Jul 16 12:26:33.531401 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 16 12:26:33.531440 kernel: SELinux: policy capability network_peer_controls=1 Jul 16 12:26:33.531472 kernel: SELinux: policy capability open_perms=1 Jul 16 12:26:33.531493 kernel: SELinux: policy capability extended_socket_class=1 Jul 16 12:26:33.531517 kernel: SELinux: policy capability always_check_network=0 Jul 16 12:26:33.531536 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 16 12:26:33.531555 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 16 12:26:33.531579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 16 12:26:33.531599 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 16 12:26:33.531621 systemd[1]: Successfully loaded SELinux policy in 61.719ms. Jul 16 12:26:33.531680 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.261ms. Jul 16 12:26:33.531720 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 16 12:26:33.531744 systemd[1]: Detected virtualization kvm. 
Jul 16 12:26:33.531793 systemd[1]: Detected architecture x86-64. Jul 16 12:26:33.531817 systemd[1]: Detected first boot. Jul 16 12:26:33.531838 systemd[1]: Hostname set to . Jul 16 12:26:33.531859 systemd[1]: Initializing machine ID from VM UUID. Jul 16 12:26:33.531879 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 16 12:26:33.531913 systemd[1]: Populated /etc with preset unit settings. Jul 16 12:26:33.531936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:26:33.531964 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 16 12:26:33.531986 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 12:26:33.532017 systemd[1]: Queued start job for default target multi-user.target. Jul 16 12:26:33.532050 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 16 12:26:33.532070 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 16 12:26:33.532114 systemd[1]: Created slice system-addon\x2drun.slice. Jul 16 12:26:33.532164 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 16 12:26:33.532189 systemd[1]: Created slice system-getty.slice. Jul 16 12:26:33.532209 systemd[1]: Created slice system-modprobe.slice. Jul 16 12:26:33.532230 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 16 12:26:33.532251 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 16 12:26:33.532271 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 16 12:26:33.532301 systemd[1]: Created slice user.slice. Jul 16 12:26:33.532334 systemd[1]: Started systemd-ask-password-console.path. 
Jul 16 12:26:33.532357 systemd[1]: Started systemd-ask-password-wall.path. Jul 16 12:26:33.532384 systemd[1]: Set up automount boot.automount. Jul 16 12:26:33.532406 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 16 12:26:33.532426 systemd[1]: Reached target integritysetup.target. Jul 16 12:26:33.532462 systemd[1]: Reached target remote-cryptsetup.target. Jul 16 12:26:33.532483 systemd[1]: Reached target remote-fs.target. Jul 16 12:26:33.532515 systemd[1]: Reached target slices.target. Jul 16 12:26:33.532543 systemd[1]: Reached target swap.target. Jul 16 12:26:33.532563 systemd[1]: Reached target torcx.target. Jul 16 12:26:33.532589 systemd[1]: Reached target veritysetup.target. Jul 16 12:26:33.532611 systemd[1]: Listening on systemd-coredump.socket. Jul 16 12:26:33.532632 systemd[1]: Listening on systemd-initctl.socket. Jul 16 12:26:33.532666 systemd[1]: Listening on systemd-journald-audit.socket. Jul 16 12:26:33.532688 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 16 12:26:33.532709 systemd[1]: Listening on systemd-journald.socket. Jul 16 12:26:33.532729 systemd[1]: Listening on systemd-networkd.socket. Jul 16 12:26:33.532762 systemd[1]: Listening on systemd-udevd-control.socket. Jul 16 12:26:33.532785 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 16 12:26:33.532805 systemd[1]: Listening on systemd-userdbd.socket. Jul 16 12:26:33.532825 systemd[1]: Mounting dev-hugepages.mount... Jul 16 12:26:33.532845 systemd[1]: Mounting dev-mqueue.mount... Jul 16 12:26:33.532865 systemd[1]: Mounting media.mount... Jul 16 12:26:33.532885 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:26:33.532906 systemd[1]: Mounting sys-kernel-debug.mount... Jul 16 12:26:33.532938 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 16 12:26:33.532979 systemd[1]: Mounting tmp.mount... Jul 16 12:26:33.533019 systemd[1]: Starting flatcar-tmpfiles.service... 
Jul 16 12:26:33.533038 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:26:33.533071 systemd[1]: Starting kmod-static-nodes.service... Jul 16 12:26:33.533097 systemd[1]: Starting modprobe@configfs.service... Jul 16 12:26:33.533128 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:26:33.533148 systemd[1]: Starting modprobe@drm.service... Jul 16 12:26:33.533179 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:26:33.533208 systemd[1]: Starting modprobe@fuse.service... Jul 16 12:26:33.533241 systemd[1]: Starting modprobe@loop.service... Jul 16 12:26:33.533271 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 16 12:26:33.533294 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 16 12:26:33.533315 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 16 12:26:33.533336 systemd[1]: Starting systemd-journald.service... Jul 16 12:26:33.533356 systemd[1]: Starting systemd-modules-load.service... Jul 16 12:26:33.533399 kernel: fuse: init (API version 7.34) Jul 16 12:26:33.533420 systemd[1]: Starting systemd-network-generator.service... Jul 16 12:26:33.533441 systemd[1]: Starting systemd-remount-fs.service... Jul 16 12:26:33.533481 systemd[1]: Starting systemd-udev-trigger.service... Jul 16 12:26:33.533503 kernel: loop: module loaded Jul 16 12:26:33.533523 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:26:33.533556 systemd[1]: Mounted dev-hugepages.mount. Jul 16 12:26:33.533578 systemd[1]: Mounted dev-mqueue.mount. Jul 16 12:26:33.533598 systemd[1]: Mounted media.mount. Jul 16 12:26:33.533618 systemd[1]: Mounted sys-kernel-debug.mount. Jul 16 12:26:33.533637 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 16 12:26:33.533674 systemd[1]: Mounted tmp.mount. Jul 16 12:26:33.533708 systemd[1]: Finished kmod-static-nodes.service. Jul 16 12:26:33.533730 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 16 12:26:33.533750 systemd[1]: Finished modprobe@configfs.service. Jul 16 12:26:33.533770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:26:33.533790 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:26:33.533811 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 12:26:33.533838 systemd[1]: Finished modprobe@drm.service. Jul 16 12:26:33.533859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:26:33.533880 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:26:33.533912 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 16 12:26:33.533940 systemd-journald[1013]: Journal started Jul 16 12:26:33.534011 systemd-journald[1013]: Runtime Journal (/run/log/journal/a5b3df5e3c994d03be93d3192a2dcf7d) is 4.7M, max 38.1M, 33.3M free. Jul 16 12:26:33.305000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 16 12:26:33.305000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 16 12:26:33.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:33.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:33.528000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 16 12:26:33.528000 audit[1013]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd79e345e0 a2=4000 a3=7ffd79e3467c items=0 ppid=1 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:26:33.528000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 16 12:26:33.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.536163 systemd[1]: Finished modprobe@fuse.service. Jul 16 12:26:33.544394 systemd[1]: Started systemd-journald.service. Jul 16 12:26:33.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:33.543976 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:26:33.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.544932 systemd[1]: Finished modprobe@loop.service. Jul 16 12:26:33.549372 systemd[1]: Finished systemd-modules-load.service. Jul 16 12:26:33.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.551430 systemd[1]: Finished systemd-network-generator.service. Jul 16 12:26:33.554052 systemd[1]: Finished systemd-remount-fs.service. Jul 16 12:26:33.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.555461 systemd[1]: Reached target network-pre.target. Jul 16 12:26:33.560003 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 16 12:26:33.562320 systemd[1]: Mounting sys-kernel-config.mount... Jul 16 12:26:33.563048 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jul 16 12:26:33.567234 systemd[1]: Starting systemd-hwdb-update.service... Jul 16 12:26:33.571885 systemd[1]: Starting systemd-journal-flush.service... Jul 16 12:26:33.574367 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 12:26:33.577262 systemd[1]: Starting systemd-random-seed.service... Jul 16 12:26:33.578420 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 16 12:26:33.582530 systemd[1]: Starting systemd-sysctl.service... Jul 16 12:26:33.588297 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 16 12:26:33.588541 systemd-journald[1013]: Time spent on flushing to /var/log/journal/a5b3df5e3c994d03be93d3192a2dcf7d is 58.438ms for 1228 entries. Jul 16 12:26:33.588541 systemd-journald[1013]: System Journal (/var/log/journal/a5b3df5e3c994d03be93d3192a2dcf7d) is 8.0M, max 584.8M, 576.8M free. Jul 16 12:26:33.653723 systemd-journald[1013]: Received client request to flush runtime journal. Jul 16 12:26:33.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.592068 systemd[1]: Mounted sys-kernel-config.mount. Jul 16 12:26:33.604104 systemd[1]: Finished systemd-random-seed.service. 
Jul 16 12:26:33.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.604934 systemd[1]: Reached target first-boot-complete.target. Jul 16 12:26:33.611607 systemd[1]: Finished flatcar-tmpfiles.service. Jul 16 12:26:33.618101 systemd[1]: Starting systemd-sysusers.service... Jul 16 12:26:33.633900 systemd[1]: Finished systemd-sysctl.service. Jul 16 12:26:33.654829 systemd[1]: Finished systemd-journal-flush.service. Jul 16 12:26:33.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.661492 systemd[1]: Finished systemd-sysusers.service. Jul 16 12:26:33.664897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 16 12:26:33.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.707150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 16 12:26:33.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:33.763498 systemd[1]: Finished systemd-udev-trigger.service. Jul 16 12:26:33.766152 systemd[1]: Starting systemd-udev-settle.service... Jul 16 12:26:33.777873 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 16 12:26:34.264827 systemd[1]: Finished systemd-hwdb-update.service. 
Jul 16 12:26:34.272366 kernel: kauditd_printk_skb: 77 callbacks suppressed Jul 16 12:26:34.272488 kernel: audit: type=1130 audit(1752668794.265:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.267593 systemd[1]: Starting systemd-udevd.service... Jul 16 12:26:34.296765 systemd-udevd[1068]: Using default interface naming scheme 'v252'. Jul 16 12:26:34.348024 systemd[1]: Started systemd-udevd.service. Jul 16 12:26:34.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.354259 kernel: audit: type=1130 audit(1752668794.348:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.351385 systemd[1]: Starting systemd-networkd.service... Jul 16 12:26:34.363993 systemd[1]: Starting systemd-userdbd.service... Jul 16 12:26:34.429771 systemd[1]: Started systemd-userdbd.service. Jul 16 12:26:34.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:34.436164 kernel: audit: type=1130 audit(1752668794.430:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.438283 systemd[1]: Found device dev-ttyS0.device. Jul 16 12:26:34.565155 systemd-networkd[1070]: lo: Link UP Jul 16 12:26:34.565169 systemd-networkd[1070]: lo: Gained carrier Jul 16 12:26:34.566025 systemd-networkd[1070]: Enumeration completed Jul 16 12:26:34.566240 systemd[1]: Started systemd-networkd.service. Jul 16 12:26:34.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.567207 systemd-networkd[1070]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 12:26:34.569479 systemd-networkd[1070]: eth0: Link UP Jul 16 12:26:34.569486 systemd-networkd[1070]: eth0: Gained carrier Jul 16 12:26:34.573154 kernel: audit: type=1130 audit(1752668794.566:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.580169 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 16 12:26:34.592635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 16 12:26:34.599549 systemd-networkd[1070]: eth0: DHCPv4 address 10.230.12.42/30, gateway 10.230.12.41 acquired from 10.230.12.41 Jul 16 12:26:34.600168 kernel: ACPI: button: Power Button [PWRF] Jul 16 12:26:34.603184 kernel: mousedev: PS/2 mouse device common for all mice Jul 16 12:26:34.648000 audit[1081]: AVC avc: denied { confidentiality } for pid=1081 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 16 12:26:34.657245 kernel: audit: type=1400 audit(1752668794.648:121): avc: denied { confidentiality } for pid=1081 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 16 12:26:34.648000 audit[1081]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a693ad6cf0 a1=338ac a2=7ffbadea9bc5 a3=5 items=110 ppid=1068 pid=1081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:26:34.667508 kernel: audit: type=1300 audit(1752668794.648:121): arch=c000003e syscall=175 success=yes exit=0 a0=55a693ad6cf0 a1=338ac a2=7ffbadea9bc5 a3=5 items=110 ppid=1068 pid=1081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 16 12:26:34.667572 kernel: audit: type=1307 audit(1752668794.648:121): cwd="/" Jul 16 12:26:34.648000 audit: CWD cwd="/" Jul 16 12:26:34.673368 kernel: audit: type=1302 audit(1752668794.648:121): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.673426 kernel: input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input5 Jul 16 12:26:34.648000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.680579 kernel: audit: type=1302 audit(1752668794.648:121): item=1 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=1 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.685723 kernel: audit: type=1302 audit(1752668794.648:121): item=2 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=2 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=3 name=(null) inode=16161 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=4 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=5 name=(null) inode=16162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=6 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=7 name=(null) inode=16163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=8 name=(null) inode=16163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=9 name=(null) inode=16164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=10 name=(null) inode=16163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=11 name=(null) inode=16165 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=12 name=(null) inode=16163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=13 name=(null) inode=16166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=14 name=(null) inode=16163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=15 name=(null) inode=16167 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=16 name=(null) inode=16163 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=17 name=(null) inode=16168 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=18 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=19 name=(null) inode=16169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=20 name=(null) inode=16169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=21 name=(null) inode=16170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=22 name=(null) inode=16169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=23 name=(null) inode=16171 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=24 name=(null) inode=16169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 
12:26:34.648000 audit: PATH item=25 name=(null) inode=16172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=26 name=(null) inode=16169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=27 name=(null) inode=16173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=28 name=(null) inode=16169 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=29 name=(null) inode=16174 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=30 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=31 name=(null) inode=16175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=32 name=(null) inode=16175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=33 name=(null) inode=16176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=34 
name=(null) inode=16175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=35 name=(null) inode=16177 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=36 name=(null) inode=16175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=37 name=(null) inode=16178 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=38 name=(null) inode=16175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=39 name=(null) inode=16179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=40 name=(null) inode=16175 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=41 name=(null) inode=16180 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=42 name=(null) inode=16160 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=43 name=(null) inode=16181 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=44 name=(null) inode=16181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=45 name=(null) inode=16182 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=46 name=(null) inode=16181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=47 name=(null) inode=16183 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=48 name=(null) inode=16181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=49 name=(null) inode=16184 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=50 name=(null) inode=16181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=51 name=(null) inode=16185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=52 name=(null) inode=16181 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=53 name=(null) inode=16186 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=55 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=56 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=57 name=(null) inode=16188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=58 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=59 name=(null) inode=16189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=60 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=61 name=(null) inode=16190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=62 name=(null) inode=16190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=63 name=(null) inode=16191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=64 name=(null) inode=16190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=65 name=(null) inode=16192 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=66 name=(null) inode=16190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=67 name=(null) inode=16193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=68 name=(null) inode=16190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=69 name=(null) inode=16194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=70 name=(null) inode=16190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=71 name=(null) inode=16195 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=72 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=73 name=(null) inode=16196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=74 name=(null) inode=16196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=75 name=(null) inode=16197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=76 name=(null) inode=16196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=77 name=(null) inode=16198 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=78 name=(null) inode=16196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=79 name=(null) inode=16199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 
12:26:34.648000 audit: PATH item=80 name=(null) inode=16196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=81 name=(null) inode=16200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=82 name=(null) inode=16196 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=83 name=(null) inode=16201 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=84 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=85 name=(null) inode=16202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=86 name=(null) inode=16202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=87 name=(null) inode=16203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=88 name=(null) inode=16202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=89 
name=(null) inode=16204 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=90 name=(null) inode=16202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=91 name=(null) inode=16205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=92 name=(null) inode=16202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=93 name=(null) inode=16206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=94 name=(null) inode=16202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=95 name=(null) inode=16207 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=96 name=(null) inode=16187 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=97 name=(null) inode=16208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=98 name=(null) inode=16208 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=99 name=(null) inode=16209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=100 name=(null) inode=16208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=101 name=(null) inode=16210 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=102 name=(null) inode=16208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=103 name=(null) inode=16211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=104 name=(null) inode=16208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=105 name=(null) inode=16212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=106 name=(null) inode=16208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=107 name=(null) inode=16213 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PATH item=109 name=(null) inode=16214 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 16 12:26:34.648000 audit: PROCTITLE proctitle="(udev-worker)" Jul 16 12:26:34.715157 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 16 12:26:34.753413 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 16 12:26:34.753673 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 16 12:26:34.901806 systemd[1]: Finished systemd-udev-settle.service. Jul 16 12:26:34.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.904520 systemd[1]: Starting lvm2-activation-early.service... Jul 16 12:26:34.929186 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 16 12:26:34.961812 systemd[1]: Finished lvm2-activation-early.service. Jul 16 12:26:34.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:34.962732 systemd[1]: Reached target cryptsetup.target. Jul 16 12:26:34.965288 systemd[1]: Starting lvm2-activation.service... Jul 16 12:26:34.972411 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 16 12:26:35.000670 systemd[1]: Finished lvm2-activation.service. Jul 16 12:26:35.001551 systemd[1]: Reached target local-fs-pre.target. Jul 16 12:26:35.002203 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 16 12:26:35.002245 systemd[1]: Reached target local-fs.target. Jul 16 12:26:35.002850 systemd[1]: Reached target machines.target. Jul 16 12:26:35.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.005511 systemd[1]: Starting ldconfig.service... Jul 16 12:26:35.007231 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:26:35.007323 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:26:35.009017 systemd[1]: Starting systemd-boot-update.service... Jul 16 12:26:35.011094 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 16 12:26:35.013661 systemd[1]: Starting systemd-machine-id-commit.service... Jul 16 12:26:35.016313 systemd[1]: Starting systemd-sysext.service... Jul 16 12:26:35.032432 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl) Jul 16 12:26:35.034104 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 16 12:26:35.038032 systemd[1]: Unmounting usr-share-oem.mount... Jul 16 12:26:35.044379 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 16 12:26:35.044716 systemd[1]: Unmounted usr-share-oem.mount. 
Jul 16 12:26:35.178191 kernel: loop0: detected capacity change from 0 to 221472 Jul 16 12:26:35.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.303868 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 16 12:26:35.338892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 16 12:26:35.339815 systemd[1]: Finished systemd-machine-id-commit.service. Jul 16 12:26:35.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.362211 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 16 12:26:35.382163 kernel: loop1: detected capacity change from 0 to 221472 Jul 16 12:26:35.401975 (sd-sysext)[1121]: Using extensions 'kubernetes'. Jul 16 12:26:35.404697 (sd-sysext)[1121]: Merged extensions into '/usr'. Jul 16 12:26:35.422100 systemd-fsck[1118]: fsck.fat 4.2 (2021-01-31) Jul 16 12:26:35.422100 systemd-fsck[1118]: /dev/vda1: 790 files, 120725/258078 clusters Jul 16 12:26:35.423365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 16 12:26:35.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.428281 systemd[1]: Mounting boot.mount... Jul 16 12:26:35.462365 systemd[1]: Mounted boot.mount. Jul 16 12:26:35.470603 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:26:35.473007 systemd[1]: Mounting usr-share-oem.mount... 
Jul 16 12:26:35.475584 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 16 12:26:35.477603 systemd[1]: Starting modprobe@dm_mod.service... Jul 16 12:26:35.485104 systemd[1]: Starting modprobe@efi_pstore.service... Jul 16 12:26:35.487694 systemd[1]: Starting modprobe@loop.service... Jul 16 12:26:35.488704 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 16 12:26:35.488938 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 16 12:26:35.489174 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 12:26:35.496827 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 12:26:35.497359 systemd[1]: Finished modprobe@dm_mod.service. Jul 16 12:26:35.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.501934 systemd[1]: Mounted usr-share-oem.mount. Jul 16 12:26:35.507047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 12:26:35.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 16 12:26:35.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.507297 systemd[1]: Finished modprobe@efi_pstore.service. Jul 16 12:26:35.508569 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 12:26:35.508802 systemd[1]: Finished modprobe@loop.service. Jul 16 12:26:35.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.509928 systemd[1]: Finished systemd-sysext.service. Jul 16 12:26:35.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 16 12:26:35.522724 systemd[1]: Starting ensure-sysext.service... Jul 16 12:26:35.523461 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 12:26:35.523567 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 16 12:26:35.525102 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 16 12:26:35.536246 systemd[1]: Reloading. Jul 16 12:26:35.553188 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 16 12:26:35.562888 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 16 12:26:35.572788 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 16 12:26:35.674170 /usr/lib/systemd/system-generators/torcx-generator[1160]: time="2025-07-16T12:26:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 16 12:26:35.674236 /usr/lib/systemd/system-generators/torcx-generator[1160]: time="2025-07-16T12:26:35Z" level=info msg="torcx already run"
Jul 16 12:26:35.732601 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 16 12:26:35.823875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 16 12:26:35.823907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 16 12:26:35.860649 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 16 12:26:35.941010 systemd[1]: Finished ldconfig.service.
Jul 16 12:26:35.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:35.944440 systemd[1]: Finished systemd-boot-update.service.
Jul 16 12:26:35.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:35.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:35.946982 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 16 12:26:35.951796 systemd[1]: Starting audit-rules.service...
Jul 16 12:26:35.954625 systemd[1]: Starting clean-ca-certificates.service...
Jul 16 12:26:35.958810 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 16 12:26:35.964624 systemd[1]: Starting systemd-resolved.service...
Jul 16 12:26:35.969386 systemd[1]: Starting systemd-timesyncd.service...
Jul 16 12:26:35.988939 systemd[1]: Starting systemd-update-utmp.service...
Jul 16 12:26:35.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:35.995585 systemd[1]: Finished clean-ca-certificates.service.
Jul 16 12:26:35.999957 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 16 12:26:36.005807 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.008099 systemd[1]: Starting modprobe@dm_mod.service...
Jul 16 12:26:36.011234 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 16 12:26:36.014485 systemd[1]: Starting modprobe@loop.service...
Jul 16 12:26:36.017000 audit[1223]: SYSTEM_BOOT pid=1223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.017514 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.017781 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 16 12:26:36.018003 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 16 12:26:36.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.025196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 12:26:36.025458 systemd[1]: Finished modprobe@dm_mod.service.
Jul 16 12:26:36.028633 systemd[1]: Finished systemd-update-utmp.service.
Jul 16 12:26:36.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.033836 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.035599 systemd[1]: Starting modprobe@dm_mod.service...
Jul 16 12:26:36.036397 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.036628 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 16 12:26:36.036864 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 16 12:26:36.041077 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.046558 systemd[1]: Starting modprobe@drm.service...
Jul 16 12:26:36.047380 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.047564 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 16 12:26:36.054450 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 16 12:26:36.056599 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 16 12:26:36.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.062902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 16 12:26:36.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.063149 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 16 12:26:36.065262 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 16 12:26:36.065497 systemd[1]: Finished modprobe@loop.service.
Jul 16 12:26:36.067057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 16 12:26:36.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.068466 systemd[1]: Finished modprobe@drm.service.
Jul 16 12:26:36.069934 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 12:26:36.070060 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 16 12:26:36.070213 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 12:26:36.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.072254 systemd[1]: Finished ensure-sysext.service.
Jul 16 12:26:36.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.075712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 12:26:36.075974 systemd[1]: Finished modprobe@dm_mod.service.
Jul 16 12:26:36.076858 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.084461 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 16 12:26:36.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.087374 systemd[1]: Starting systemd-update-done.service...
Jul 16 12:26:36.103509 systemd[1]: Finished systemd-update-done.service.
Jul 16 12:26:36.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:26:36.147000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 16 12:26:36.147000 audit[1254]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffbf3090e0 a2=420 a3=0 items=0 ppid=1216 pid=1254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 16 12:26:36.147000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 16 12:26:36.148017 augenrules[1254]: No rules
Jul 16 12:26:36.148425 systemd[1]: Finished audit-rules.service.
Jul 16 12:26:36.167829 systemd[1]: Started systemd-timesyncd.service.
Jul 16 12:26:36.168827 systemd[1]: Reached target time-set.target.
Jul 16 12:26:36.184820 systemd-resolved[1219]: Positive Trust Anchors:
Jul 16 12:26:36.184845 systemd-resolved[1219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 16 12:26:36.184893 systemd-resolved[1219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 16 12:26:36.192568 systemd-resolved[1219]: Using system hostname 'srv-j7d31.gb1.brightbox.com'.
Jul 16 12:26:36.195138 systemd[1]: Started systemd-resolved.service.
Jul 16 12:26:36.195937 systemd[1]: Reached target network.target.
Jul 16 12:26:36.196595 systemd[1]: Reached target nss-lookup.target.
Jul 16 12:26:36.197284 systemd[1]: Reached target sysinit.target.
Jul 16 12:26:36.198010 systemd[1]: Started motdgen.path.
Jul 16 12:26:36.198673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 16 12:26:36.199607 systemd[1]: Started logrotate.timer.
Jul 16 12:26:36.200365 systemd[1]: Started mdadm.timer.
Jul 16 12:26:36.200949 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 16 12:26:36.201693 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 16 12:26:36.201736 systemd[1]: Reached target paths.target.
Jul 16 12:26:36.202392 systemd[1]: Reached target timers.target.
Jul 16 12:26:36.203399 systemd[1]: Listening on dbus.socket.
Jul 16 12:26:36.205935 systemd[1]: Starting docker.socket...
Jul 16 12:26:36.208551 systemd[1]: Listening on sshd.socket.
Jul 16 12:26:36.209282 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 16 12:26:36.209698 systemd[1]: Listening on docker.socket.
Jul 16 12:26:36.210402 systemd[1]: Reached target sockets.target.
Jul 16 12:26:36.211085 systemd[1]: Reached target basic.target.
Jul 16 12:26:36.211898 systemd[1]: System is tainted: cgroupsv1
Jul 16 12:26:36.211956 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.211997 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 16 12:26:36.213706 systemd[1]: Starting containerd.service...
Jul 16 12:26:36.217326 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 16 12:26:36.219900 systemd[1]: Starting dbus.service...
Jul 16 12:26:36.222438 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 16 12:26:36.229419 systemd[1]: Starting extend-filesystems.service...
Jul 16 12:26:36.230543 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 16 12:26:36.234456 systemd[1]: Starting motdgen.service...
Jul 16 12:26:36.236980 jq[1267]: false
Jul 16 12:26:36.241434 systemd[1]: Starting prepare-helm.service...
Jul 16 12:26:36.244720 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 16 12:26:36.258600 systemd[1]: Starting sshd-keygen.service...
Jul 16 12:26:36.265717 systemd[1]: Starting systemd-logind.service...
Jul 16 12:26:36.267694 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 16 12:26:36.267828 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 16 12:26:36.269890 systemd[1]: Starting update-engine.service...
Jul 16 12:26:36.292980 jq[1285]: true
Jul 16 12:26:36.275191 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 16 12:26:36.286760 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 16 12:26:36.291959 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 16 12:26:36.294137 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 16 12:26:36.299598 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 16 12:26:36.322841 jq[1297]: true
Jul 16 12:26:36.340406 tar[1291]: linux-amd64/helm
Jul 16 12:26:36.361498 extend-filesystems[1269]: Found loop1
Jul 16 12:26:36.365043 extend-filesystems[1269]: Found vda
Jul 16 12:26:36.366033 extend-filesystems[1269]: Found vda1
Jul 16 12:26:36.366033 extend-filesystems[1269]: Found vda2
Jul 16 12:26:36.368805 extend-filesystems[1269]: Found vda3
Jul 16 12:26:36.368805 extend-filesystems[1269]: Found usr
Jul 16 12:26:36.368805 extend-filesystems[1269]: Found vda4
Jul 16 12:26:36.368805 extend-filesystems[1269]: Found vda6
Jul 16 12:26:36.368805 extend-filesystems[1269]: Found vda7
Jul 16 12:26:36.368805 extend-filesystems[1269]: Found vda9
Jul 16 12:26:36.368805 extend-filesystems[1269]: Checking size of /dev/vda9
Jul 16 12:26:36.370273 dbus-daemon[1266]: [system] SELinux support is enabled
Jul 16 12:26:36.370688 systemd[1]: Started dbus.service.
Jul 16 12:26:36.382988 dbus-daemon[1266]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1070 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 16 12:26:36.374545 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 16 12:26:36.396174 dbus-daemon[1266]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 16 12:26:36.374629 systemd[1]: Reached target system-config.target.
Jul 16 12:26:36.378439 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 16 12:26:36.378497 systemd[1]: Reached target user-config.target.
Jul 16 12:26:36.383889 systemd[1]: motdgen.service: Deactivated successfully.
Jul 16 12:26:36.384356 systemd[1]: Finished motdgen.service.
Jul 16 12:26:36.398687 systemd-networkd[1070]: eth0: Gained IPv6LL
Jul 16 12:26:36.401832 systemd[1]: Starting systemd-hostnamed.service...
Jul 16 12:26:36.403276 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 16 12:26:36.406084 systemd[1]: Reached target network-online.target.
Jul 16 12:26:36.410435 systemd[1]: Starting kubelet.service...
Jul 16 12:26:36.424776 extend-filesystems[1269]: Resized partition /dev/vda9
Jul 16 12:26:36.456294 bash[1323]: Updated "/home/core/.ssh/authorized_keys"
Jul 16 12:26:36.457992 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 16 12:26:36.476107 extend-filesystems[1331]: resize2fs 1.46.5 (30-Dec-2021)
Jul 16 12:26:36.494203 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jul 16 12:26:36.495974 update_engine[1282]: I0716 12:26:36.495038 1282 main.cc:92] Flatcar Update Engine starting
Jul 16 12:26:36.501802 systemd[1]: Started update-engine.service.
Jul 16 12:26:37.699425 systemd-resolved[1219]: Clock change detected. Flushing caches.
Jul 16 12:26:37.699618 systemd-timesyncd[1221]: Contacted time server 129.250.35.250:123 (0.flatcar.pool.ntp.org).
Jul 16 12:26:37.699699 systemd-timesyncd[1221]: Initial clock synchronization to Wed 2025-07-16 12:26:37.699357 UTC.
Jul 16 12:26:37.705098 update_engine[1282]: I0716 12:26:37.703510 1282 update_check_scheduler.cc:74] Next update check in 6m45s
Jul 16 12:26:37.700950 systemd[1]: Started locksmithd.service.
Jul 16 12:26:37.776438 env[1299]: time="2025-07-16T12:26:37.776333163Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 16 12:26:37.869766 systemd-logind[1280]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 16 12:26:37.871616 systemd-logind[1280]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 16 12:26:37.873288 systemd-logind[1280]: New seat seat0.
Jul 16 12:26:37.906413 env[1299]: time="2025-07-16T12:26:37.906358417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 16 12:26:37.906777 env[1299]: time="2025-07-16T12:26:37.906730805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 16 12:26:37.910949 env[1299]: time="2025-07-16T12:26:37.910906288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 16 12:26:37.911813 env[1299]: time="2025-07-16T12:26:37.911781917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 16 12:26:37.912794 env[1299]: time="2025-07-16T12:26:37.912757651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 16 12:26:37.915847 env[1299]: time="2025-07-16T12:26:37.915809049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 16 12:26:37.915989 env[1299]: time="2025-07-16T12:26:37.915957445Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 16 12:26:37.916102 env[1299]: time="2025-07-16T12:26:37.916073465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 16 12:26:37.916376 env[1299]: time="2025-07-16T12:26:37.916346605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 16 12:26:37.921079 env[1299]: time="2025-07-16T12:26:37.921047208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 16 12:26:37.921455 env[1299]: time="2025-07-16T12:26:37.921419145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 16 12:26:37.921577 env[1299]: time="2025-07-16T12:26:37.921548372Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 16 12:26:37.921851 env[1299]: time="2025-07-16T12:26:37.921820267Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 16 12:26:37.921970 env[1299]: time="2025-07-16T12:26:37.921941063Z" level=info msg="metadata content store policy set" policy=shared
Jul 16 12:26:37.922786 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 16 12:26:37.924440 systemd[1]: Started systemd-logind.service.
Jul 16 12:26:37.943949 extend-filesystems[1331]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 16 12:26:37.943949 extend-filesystems[1331]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 16 12:26:37.943949 extend-filesystems[1331]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 16 12:26:37.953197 extend-filesystems[1269]: Resized filesystem in /dev/vda9
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948301182Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948350766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948373059Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948453619Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948479003Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948500830Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948521812Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948559774Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948581582Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948602602Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948621875Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948649563Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.948912053Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 16 12:26:37.954625 env[1299]: time="2025-07-16T12:26:37.949073695Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 16 12:26:37.945107 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949572396Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949632214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949657872Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949744312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949863608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949892331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949921050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949944541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949970992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.949993599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.950012217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.950033855Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.950291176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.950318541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.955557 env[1299]: time="2025-07-16T12:26:37.950347443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.945531 systemd[1]: Finished extend-filesystems.service.
Jul 16 12:26:37.958414 env[1299]: time="2025-07-16T12:26:37.950368401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 16 12:26:37.958414 env[1299]: time="2025-07-16T12:26:37.950392206Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 16 12:26:37.958414 env[1299]: time="2025-07-16T12:26:37.950409488Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 16 12:26:37.958414 env[1299]: time="2025-07-16T12:26:37.950455140Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 16 12:26:37.958414 env[1299]: time="2025-07-16T12:26:37.950517362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 16 12:26:37.958716 env[1299]: time="2025-07-16T12:26:37.950869168Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 16 12:26:37.958716 env[1299]: time="2025-07-16T12:26:37.950959804Z" level=info msg="Connect containerd service"
Jul 16 12:26:37.958716 env[1299]: time="2025-07-16T12:26:37.951064505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.984993124Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985232268Z" level=info msg="Start subscribing containerd event"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985313008Z" level=info msg="Start recovering state"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985446887Z" level=info msg="Start event monitor"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985448674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985495914Z" level=info msg="Start snapshots syncer"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985520357Z" level=info msg="Start cni network conf syncer for default"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985534832Z" level=info msg="Start streaming server"
Jul 16 12:26:37.986876 env[1299]: time="2025-07-16T12:26:37.985543196Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 16 12:26:37.985786 systemd[1]: Started containerd.service.
Jul 16 12:26:37.987607 env[1299]: time="2025-07-16T12:26:37.987569708Z" level=info msg="containerd successfully booted in 0.229742s" Jul 16 12:26:38.041690 dbus-daemon[1266]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 16 12:26:38.042271 dbus-daemon[1266]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1324 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 16 12:26:38.041903 systemd[1]: Started systemd-hostnamed.service. Jul 16 12:26:38.047716 systemd[1]: Starting polkit.service... Jul 16 12:26:38.067591 polkitd[1343]: Started polkitd version 121 Jul 16 12:26:38.086951 polkitd[1343]: Loading rules from directory /etc/polkit-1/rules.d Jul 16 12:26:38.087071 polkitd[1343]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 16 12:26:38.089329 polkitd[1343]: Finished loading, compiling and executing 2 rules Jul 16 12:26:38.090284 dbus-daemon[1266]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 16 12:26:38.090537 systemd[1]: Started polkit.service. Jul 16 12:26:38.092275 polkitd[1343]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 16 12:26:38.107171 systemd-hostnamed[1324]: Hostname set to (static) Jul 16 12:26:38.115557 systemd-networkd[1070]: eth0: Ignoring DHCPv6 address 2a02:1348:179:830a:24:19ff:fee6:c2a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:830a:24:19ff:fee6:c2a/64 assigned by NDisc. Jul 16 12:26:38.115570 systemd-networkd[1070]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 16 12:26:38.745674 tar[1291]: linux-amd64/LICENSE Jul 16 12:26:38.746391 tar[1291]: linux-amd64/README.md Jul 16 12:26:38.749097 sshd_keygen[1296]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 16 12:26:38.753429 systemd[1]: Finished prepare-helm.service. 
Jul 16 12:26:38.779760 systemd[1]: Finished sshd-keygen.service. Jul 16 12:26:38.787518 systemd[1]: Starting issuegen.service... Jul 16 12:26:38.798594 systemd[1]: issuegen.service: Deactivated successfully. Jul 16 12:26:38.798964 systemd[1]: Finished issuegen.service. Jul 16 12:26:38.802564 systemd[1]: Starting systemd-user-sessions.service... Jul 16 12:26:38.816798 systemd[1]: Finished systemd-user-sessions.service. Jul 16 12:26:38.817360 locksmithd[1333]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 16 12:26:38.820080 systemd[1]: Started getty@tty1.service. Jul 16 12:26:38.825370 systemd[1]: Started serial-getty@ttyS0.service. Jul 16 12:26:38.830189 systemd[1]: Reached target getty.target. Jul 16 12:26:39.354454 systemd[1]: Started kubelet.service. Jul 16 12:26:40.073874 kubelet[1379]: E0716 12:26:40.073800 1379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:26:40.076517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:26:40.076831 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 16 12:26:44.581064 coreos-metadata[1264]: Jul 16 12:26:44.580 WARN failed to locate config-drive, using the metadata service API instead Jul 16 12:26:44.637244 coreos-metadata[1264]: Jul 16 12:26:44.637 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 16 12:26:44.668848 coreos-metadata[1264]: Jul 16 12:26:44.668 INFO Fetch successful Jul 16 12:26:44.669170 coreos-metadata[1264]: Jul 16 12:26:44.668 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 16 12:26:44.700043 coreos-metadata[1264]: Jul 16 12:26:44.699 INFO Fetch successful Jul 16 12:26:44.701754 unknown[1264]: wrote ssh authorized keys file for user: core Jul 16 12:26:44.715832 update-ssh-keys[1391]: Updated "/home/core/.ssh/authorized_keys" Jul 16 12:26:44.716559 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 16 12:26:44.717094 systemd[1]: Reached target multi-user.target. Jul 16 12:26:44.719639 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 16 12:26:44.732366 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 16 12:26:44.732708 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 16 12:26:44.732970 systemd[1]: Startup finished in 12.010s (kernel) + 13.978s (userspace) = 25.989s. Jul 16 12:26:47.300948 systemd[1]: Created slice system-sshd.slice. Jul 16 12:26:47.302998 systemd[1]: Started sshd@0-10.230.12.42:22-147.75.109.163:53820.service. Jul 16 12:26:48.266166 sshd[1396]: Accepted publickey for core from 147.75.109.163 port 53820 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:26:48.269162 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:26:48.285137 systemd[1]: Created slice user-500.slice. Jul 16 12:26:48.286831 systemd[1]: Starting user-runtime-dir@500.service... Jul 16 12:26:48.292814 systemd-logind[1280]: New session 1 of user core. 
Jul 16 12:26:48.301070 systemd[1]: Finished user-runtime-dir@500.service. Jul 16 12:26:48.302959 systemd[1]: Starting user@500.service... Jul 16 12:26:48.312276 (systemd)[1401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:26:48.417445 systemd[1401]: Queued start job for default target default.target. Jul 16 12:26:48.418321 systemd[1401]: Reached target paths.target. Jul 16 12:26:48.418358 systemd[1401]: Reached target sockets.target. Jul 16 12:26:48.418381 systemd[1401]: Reached target timers.target. Jul 16 12:26:48.418402 systemd[1401]: Reached target basic.target. Jul 16 12:26:48.418573 systemd[1]: Started user@500.service. Jul 16 12:26:48.419948 systemd[1401]: Reached target default.target. Jul 16 12:26:48.420081 systemd[1]: Started session-1.scope. Jul 16 12:26:48.421893 systemd[1401]: Startup finished in 100ms. Jul 16 12:26:49.043219 systemd[1]: Started sshd@1-10.230.12.42:22-147.75.109.163:34160.service. Jul 16 12:26:49.931999 sshd[1410]: Accepted publickey for core from 147.75.109.163 port 34160 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:26:49.934229 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:26:49.942143 systemd-logind[1280]: New session 2 of user core. Jul 16 12:26:49.943025 systemd[1]: Started session-2.scope. Jul 16 12:26:50.166623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 16 12:26:50.167024 systemd[1]: Stopped kubelet.service. Jul 16 12:26:50.169748 systemd[1]: Starting kubelet.service... Jul 16 12:26:50.347155 systemd[1]: Started kubelet.service. 
Jul 16 12:26:50.502210 kubelet[1421]: E0716 12:26:50.502134 1421 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:26:50.505926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:26:50.506228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:26:50.553610 sshd[1410]: pam_unix(sshd:session): session closed for user core Jul 16 12:26:50.557693 systemd[1]: sshd@1-10.230.12.42:22-147.75.109.163:34160.service: Deactivated successfully. Jul 16 12:26:50.559349 systemd[1]: session-2.scope: Deactivated successfully. Jul 16 12:26:50.559381 systemd-logind[1280]: Session 2 logged out. Waiting for processes to exit. Jul 16 12:26:50.561458 systemd-logind[1280]: Removed session 2. Jul 16 12:26:50.699271 systemd[1]: Started sshd@2-10.230.12.42:22-147.75.109.163:34176.service. Jul 16 12:26:51.588967 sshd[1432]: Accepted publickey for core from 147.75.109.163 port 34176 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:26:51.591589 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:26:51.599387 systemd[1]: Started session-3.scope. Jul 16 12:26:51.600242 systemd-logind[1280]: New session 3 of user core. Jul 16 12:26:52.205568 sshd[1432]: pam_unix(sshd:session): session closed for user core Jul 16 12:26:52.209876 systemd[1]: sshd@2-10.230.12.42:22-147.75.109.163:34176.service: Deactivated successfully. Jul 16 12:26:52.211209 systemd[1]: session-3.scope: Deactivated successfully. Jul 16 12:26:52.211396 systemd-logind[1280]: Session 3 logged out. Waiting for processes to exit. Jul 16 12:26:52.213101 systemd-logind[1280]: Removed session 3. 
Jul 16 12:26:52.349232 systemd[1]: Started sshd@3-10.230.12.42:22-147.75.109.163:34180.service. Jul 16 12:26:53.229370 sshd[1439]: Accepted publickey for core from 147.75.109.163 port 34180 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:26:53.231996 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:26:53.239017 systemd[1]: Started session-4.scope. Jul 16 12:26:53.239821 systemd-logind[1280]: New session 4 of user core. Jul 16 12:26:53.846208 sshd[1439]: pam_unix(sshd:session): session closed for user core Jul 16 12:26:53.849721 systemd[1]: sshd@3-10.230.12.42:22-147.75.109.163:34180.service: Deactivated successfully. Jul 16 12:26:53.850804 systemd[1]: session-4.scope: Deactivated successfully. Jul 16 12:26:53.852141 systemd-logind[1280]: Session 4 logged out. Waiting for processes to exit. Jul 16 12:26:53.853627 systemd-logind[1280]: Removed session 4. Jul 16 12:26:53.991399 systemd[1]: Started sshd@4-10.230.12.42:22-147.75.109.163:34194.service. Jul 16 12:26:54.874830 sshd[1446]: Accepted publickey for core from 147.75.109.163 port 34194 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:26:54.876580 sshd[1446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:26:54.884029 systemd[1]: Started session-5.scope. Jul 16 12:26:54.884481 systemd-logind[1280]: New session 5 of user core. Jul 16 12:26:55.368624 sudo[1450]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 16 12:26:55.369047 sudo[1450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 16 12:26:55.407630 systemd[1]: Starting docker.service... 
Jul 16 12:26:55.464125 env[1460]: time="2025-07-16T12:26:55.464034196Z" level=info msg="Starting up" Jul 16 12:26:55.467196 env[1460]: time="2025-07-16T12:26:55.467167735Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 16 12:26:55.467317 env[1460]: time="2025-07-16T12:26:55.467289918Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 16 12:26:55.467464 env[1460]: time="2025-07-16T12:26:55.467421787Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 16 12:26:55.467611 env[1460]: time="2025-07-16T12:26:55.467583051Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 16 12:26:55.471331 env[1460]: time="2025-07-16T12:26:55.471298725Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 16 12:26:55.471532 env[1460]: time="2025-07-16T12:26:55.471496294Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 16 12:26:55.471715 env[1460]: time="2025-07-16T12:26:55.471658592Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 16 12:26:55.471854 env[1460]: time="2025-07-16T12:26:55.471825136Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 16 12:26:55.506130 env[1460]: time="2025-07-16T12:26:55.506087773Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 16 12:26:55.506414 env[1460]: time="2025-07-16T12:26:55.506373837Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 16 12:26:55.506892 env[1460]: time="2025-07-16T12:26:55.506864318Z" level=info msg="Loading containers: start." 
Jul 16 12:26:55.695776 kernel: Initializing XFRM netlink socket Jul 16 12:26:55.743354 env[1460]: time="2025-07-16T12:26:55.743261179Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 16 12:26:55.825881 systemd-networkd[1070]: docker0: Link UP Jul 16 12:26:55.900777 env[1460]: time="2025-07-16T12:26:55.900713701Z" level=info msg="Loading containers: done." Jul 16 12:26:55.918329 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3287961646-merged.mount: Deactivated successfully. Jul 16 12:26:55.923299 env[1460]: time="2025-07-16T12:26:55.923254405Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 16 12:26:55.923774 env[1460]: time="2025-07-16T12:26:55.923719043Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 16 12:26:55.924067 env[1460]: time="2025-07-16T12:26:55.924039041Z" level=info msg="Daemon has completed initialization" Jul 16 12:26:55.942900 systemd[1]: Started docker.service. Jul 16 12:26:55.952237 env[1460]: time="2025-07-16T12:26:55.952110688Z" level=info msg="API listen on /run/docker.sock" Jul 16 12:26:57.329550 env[1299]: time="2025-07-16T12:26:57.329434988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 16 12:26:58.197923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103508225.mount: Deactivated successfully. 
Jul 16 12:27:00.274815 env[1299]: time="2025-07-16T12:27:00.274734892Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:00.276709 env[1299]: time="2025-07-16T12:27:00.276674308Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:00.279102 env[1299]: time="2025-07-16T12:27:00.279067275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:00.282217 env[1299]: time="2025-07-16T12:27:00.282179997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:00.283281 env[1299]: time="2025-07-16T12:27:00.283239509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Jul 16 12:27:00.284174 env[1299]: time="2025-07-16T12:27:00.284139324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 16 12:27:00.666164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 16 12:27:00.666478 systemd[1]: Stopped kubelet.service. Jul 16 12:27:00.669718 systemd[1]: Starting kubelet.service... Jul 16 12:27:00.831069 systemd[1]: Started kubelet.service. 
Jul 16 12:27:00.984381 kubelet[1594]: E0716 12:27:00.984183 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:27:00.986502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:27:00.986827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:27:03.134132 env[1299]: time="2025-07-16T12:27:03.134045547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:03.136423 env[1299]: time="2025-07-16T12:27:03.136381213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:03.139088 env[1299]: time="2025-07-16T12:27:03.139051147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:03.141635 env[1299]: time="2025-07-16T12:27:03.141595858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:03.142946 env[1299]: time="2025-07-16T12:27:03.142905647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Jul 16 12:27:03.143704 env[1299]: time="2025-07-16T12:27:03.143669911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Jul 16 12:27:05.585681 env[1299]: time="2025-07-16T12:27:05.585580770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:05.588836 env[1299]: time="2025-07-16T12:27:05.588791277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:05.591598 env[1299]: time="2025-07-16T12:27:05.591547752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:05.594143 env[1299]: time="2025-07-16T12:27:05.594103632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:05.595638 env[1299]: time="2025-07-16T12:27:05.595585072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Jul 16 12:27:05.596648 env[1299]: time="2025-07-16T12:27:05.596613035Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Jul 16 12:27:07.901739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996097347.mount: Deactivated successfully. Jul 16 12:27:08.153911 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 16 12:27:09.016805 env[1299]: time="2025-07-16T12:27:09.016726302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:09.022049 env[1299]: time="2025-07-16T12:27:09.022007190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:09.023676 env[1299]: time="2025-07-16T12:27:09.023633832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:09.025626 env[1299]: time="2025-07-16T12:27:09.025586633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:09.026560 env[1299]: time="2025-07-16T12:27:09.026519418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Jul 16 12:27:09.027534 env[1299]: time="2025-07-16T12:27:09.027497255Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 16 12:27:09.936024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount613759678.mount: Deactivated successfully. Jul 16 12:27:11.166252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 16 12:27:11.166702 systemd[1]: Stopped kubelet.service. Jul 16 12:27:11.171050 systemd[1]: Starting kubelet.service... Jul 16 12:27:11.326595 systemd[1]: Started kubelet.service. 
Jul 16 12:27:11.466088 kubelet[1611]: E0716 12:27:11.465939 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:27:11.468213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:27:11.468522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:27:11.546776 env[1299]: time="2025-07-16T12:27:11.545582716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:11.547790 env[1299]: time="2025-07-16T12:27:11.547755775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:11.550323 env[1299]: time="2025-07-16T12:27:11.550284397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:11.552902 env[1299]: time="2025-07-16T12:27:11.552867330Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:11.554115 env[1299]: time="2025-07-16T12:27:11.554064973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 16 12:27:11.555170 env[1299]: time="2025-07-16T12:27:11.555134728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 16 12:27:12.716844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403744284.mount: Deactivated successfully. Jul 16 12:27:12.722946 env[1299]: time="2025-07-16T12:27:12.722900564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:12.724571 env[1299]: time="2025-07-16T12:27:12.724537928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:12.726523 env[1299]: time="2025-07-16T12:27:12.726483193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:12.728296 env[1299]: time="2025-07-16T12:27:12.728262196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:12.729258 env[1299]: time="2025-07-16T12:27:12.729193108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 16 12:27:12.729937 env[1299]: time="2025-07-16T12:27:12.729903052Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 16 12:27:13.983390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576906767.mount: Deactivated successfully.
Jul 16 12:27:17.509224 env[1299]: time="2025-07-16T12:27:17.509131347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:17.511958 env[1299]: time="2025-07-16T12:27:17.511905424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:17.516025 env[1299]: time="2025-07-16T12:27:17.515120420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:17.517913 env[1299]: time="2025-07-16T12:27:17.517867631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:17.519652 env[1299]: time="2025-07-16T12:27:17.519597438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 16 12:27:21.666374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 16 12:27:21.666755 systemd[1]: Stopped kubelet.service. Jul 16 12:27:21.669961 systemd[1]: Starting kubelet.service... Jul 16 12:27:22.188199 systemd[1]: Started kubelet.service. 
Jul 16 12:27:22.304197 kubelet[1642]: E0716 12:27:22.304111 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 12:27:22.307443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 12:27:22.307734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 12:27:22.753673 update_engine[1282]: I0716 12:27:22.752331 1282 update_attempter.cc:509] Updating boot flags... Jul 16 12:27:22.815521 systemd[1]: Stopped kubelet.service. Jul 16 12:27:22.820620 systemd[1]: Starting kubelet.service... Jul 16 12:27:22.873735 systemd[1]: Reloading. Jul 16 12:27:23.064650 /usr/lib/systemd/system-generators/torcx-generator[1689]: time="2025-07-16T12:27:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 16 12:27:23.064712 /usr/lib/systemd/system-generators/torcx-generator[1689]: time="2025-07-16T12:27:23Z" level=info msg="torcx already run" Jul 16 12:27:23.200085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:27:23.200128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 16 12:27:23.232025 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 16 12:27:23.366164 systemd[1]: Started kubelet.service. Jul 16 12:27:23.390266 systemd[1]: Stopping kubelet.service... Jul 16 12:27:23.397874 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 12:27:23.398230 systemd[1]: Stopped kubelet.service. Jul 16 12:27:23.406295 systemd[1]: Starting kubelet.service... Jul 16 12:27:23.544034 systemd[1]: Started kubelet.service. Jul 16 12:27:23.674193 kubelet[1762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:27:23.674810 kubelet[1762]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 12:27:23.674946 kubelet[1762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 16 12:27:23.675249 kubelet[1762]: I0716 12:27:23.675187 1762 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 12:27:24.081306 kubelet[1762]: I0716 12:27:24.081142 1762 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 12:27:24.081306 kubelet[1762]: I0716 12:27:24.081192 1762 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 12:27:24.081993 kubelet[1762]: I0716 12:27:24.081961 1762 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 12:27:24.117674 kubelet[1762]: E0716 12:27:24.117607 1762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.12.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:24.118827 kubelet[1762]: I0716 12:27:24.118793 1762 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 12:27:24.128184 kubelet[1762]: E0716 12:27:24.128138 1762 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 16 12:27:24.128184 kubelet[1762]: I0716 12:27:24.128183 1762 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 16 12:27:24.138254 kubelet[1762]: I0716 12:27:24.138207 1762 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 16 12:27:24.139501 kubelet[1762]: I0716 12:27:24.139450 1762 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 12:27:24.139769 kubelet[1762]: I0716 12:27:24.139655 1762 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 12:27:24.140025 kubelet[1762]: I0716 12:27:24.139710 1762 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-j7d31.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 16 12:27:24.140273 kubelet[1762]: I0716 12:27:24.140054 1762 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 12:27:24.140273 kubelet[1762]: I0716 12:27:24.140074 1762 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 12:27:24.140394 kubelet[1762]: I0716 12:27:24.140300 1762 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:27:24.145650 kubelet[1762]: I0716 12:27:24.145565 1762 kubelet.go:408] "Attempting to sync node with API server" Jul 16 12:27:24.145650 kubelet[1762]: I0716 12:27:24.145609 1762 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 12:27:24.145994 kubelet[1762]: I0716 12:27:24.145681 1762 kubelet.go:314] "Adding apiserver pod source" Jul 16 12:27:24.145994 kubelet[1762]: I0716 12:27:24.145729 1762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 12:27:24.158509 kubelet[1762]: W0716 12:27:24.158424 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.12.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-j7d31.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:24.158788 kubelet[1762]: E0716 12:27:24.158725 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.12.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-j7d31.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:24.159470 kubelet[1762]: I0716 12:27:24.159412 1762 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 16 12:27:24.160344 kubelet[1762]: I0716 12:27:24.160307 1762 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 16 12:27:24.160601 kubelet[1762]: W0716 12:27:24.160579 1762 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 16 12:27:24.167324 kubelet[1762]: W0716 12:27:24.164848 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:24.167562 kubelet[1762]: E0716 12:27:24.167498 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:24.170538 kubelet[1762]: I0716 12:27:24.170510 1762 server.go:1274] "Started kubelet" Jul 16 12:27:24.180184 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 16 12:27:24.181505 kubelet[1762]: I0716 12:27:24.181463 1762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 12:27:24.181644 kubelet[1762]: I0716 12:27:24.181346 1762 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 12:27:24.183845 kubelet[1762]: I0716 12:27:24.183818 1762 server.go:449] "Adding debug handlers to kubelet server" Jul 16 12:27:24.186987 kubelet[1762]: I0716 12:27:24.186931 1762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 12:27:24.187440 kubelet[1762]: I0716 12:27:24.187403 1762 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 12:27:24.190586 kubelet[1762]: I0716 12:27:24.190546 1762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 12:27:24.192355 kubelet[1762]: E0716 12:27:24.192323 1762 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-j7d31.gb1.brightbox.com\" not found" Jul 16 12:27:24.192601 kubelet[1762]: I0716 12:27:24.192574 1762 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 12:27:24.193106 kubelet[1762]: I0716 12:27:24.193080 1762 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 12:27:24.193347 kubelet[1762]: I0716 12:27:24.193323 1762 reconciler.go:26] "Reconciler: start to sync state" Jul 16 12:27:24.194463 kubelet[1762]: W0716 12:27:24.194386 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.12.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:24.194657 kubelet[1762]: E0716 12:27:24.194620 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.12.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:24.195117 kubelet[1762]: I0716 12:27:24.195091 1762 factory.go:221] Registration of the systemd container factory successfully Jul 16 12:27:24.195448 kubelet[1762]: I0716 12:27:24.195404 1762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 12:27:24.198085 kubelet[1762]: E0716 12:27:24.198039 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-j7d31.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.42:6443: connect: connection refused" interval="200ms" Jul 16 12:27:24.198848 kubelet[1762]: I0716 12:27:24.198824 1762 factory.go:221] Registration of the containerd container factory successfully Jul 16 12:27:24.239197 kubelet[1762]: E0716 12:27:24.233324 1762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.12.42:6443/api/v1/namespaces/default/events\": dial tcp 10.230.12.42:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-j7d31.gb1.brightbox.com.1852bb0719aceebb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-j7d31.gb1.brightbox.com,UID:srv-j7d31.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-j7d31.gb1.brightbox.com,},FirstTimestamp:2025-07-16 12:27:24.170473147 +0000 UTC m=+0.621340127,LastTimestamp:2025-07-16 12:27:24.170473147 +0000 UTC m=+0.621340127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-j7d31.gb1.brightbox.com,}" Jul 16 12:27:24.241551 kubelet[1762]: I0716 12:27:24.241483 1762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 12:27:24.243403 kubelet[1762]: I0716 12:27:24.243367 1762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 16 12:27:24.243520 kubelet[1762]: I0716 12:27:24.243428 1762 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 12:27:24.243520 kubelet[1762]: I0716 12:27:24.243475 1762 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 12:27:24.243633 kubelet[1762]: E0716 12:27:24.243554 1762 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 12:27:24.247669 kubelet[1762]: W0716 12:27:24.247635 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.12.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:24.247867 kubelet[1762]: E0716 12:27:24.247836 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.12.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:24.248099 kubelet[1762]: I0716 12:27:24.248073 1762 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 12:27:24.248238 kubelet[1762]: I0716 12:27:24.248213 1762 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 12:27:24.248397 kubelet[1762]: I0716 12:27:24.248374 1762 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:27:24.250535 kubelet[1762]: I0716 
12:27:24.250508 1762 policy_none.go:49] "None policy: Start" Jul 16 12:27:24.251629 kubelet[1762]: I0716 12:27:24.251594 1762 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 12:27:24.251724 kubelet[1762]: I0716 12:27:24.251637 1762 state_mem.go:35] "Initializing new in-memory state store" Jul 16 12:27:24.258384 kubelet[1762]: I0716 12:27:24.258341 1762 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 12:27:24.258617 kubelet[1762]: I0716 12:27:24.258585 1762 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 12:27:24.258708 kubelet[1762]: I0716 12:27:24.258626 1762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 12:27:24.260768 kubelet[1762]: I0716 12:27:24.260714 1762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 12:27:24.267028 kubelet[1762]: E0716 12:27:24.266979 1762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-j7d31.gb1.brightbox.com\" not found" Jul 16 12:27:24.365797 kubelet[1762]: I0716 12:27:24.364267 1762 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.365797 kubelet[1762]: E0716 12:27:24.364714 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.42:6443/api/v1/nodes\": dial tcp 10.230.12.42:6443: connect: connection refused" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.394897 kubelet[1762]: I0716 12:27:24.394839 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-ca-certs\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " 
pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.398953 kubelet[1762]: E0716 12:27:24.398912 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-j7d31.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.42:6443: connect: connection refused" interval="400ms" Jul 16 12:27:24.495658 kubelet[1762]: I0716 12:27:24.495556 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee9e452f527071cb6307fde66d82403c-ca-certs\") pod \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" (UID: \"ee9e452f527071cb6307fde66d82403c\") " pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.495967 kubelet[1762]: I0716 12:27:24.495928 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee9e452f527071cb6307fde66d82403c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" (UID: \"ee9e452f527071cb6307fde66d82403c\") " pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.496183 kubelet[1762]: I0716 12:27:24.496150 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.496353 kubelet[1762]: I0716 12:27:24.496324 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0b37a305c01564b96cc7333cf2b4f2db-kubeconfig\") pod \"kube-scheduler-srv-j7d31.gb1.brightbox.com\" (UID: \"0b37a305c01564b96cc7333cf2b4f2db\") " pod="kube-system/kube-scheduler-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.496516 kubelet[1762]: I0716 12:27:24.496488 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee9e452f527071cb6307fde66d82403c-k8s-certs\") pod \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" (UID: \"ee9e452f527071cb6307fde66d82403c\") " pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.496668 kubelet[1762]: I0716 12:27:24.496637 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-flexvolume-dir\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.497029 kubelet[1762]: I0716 12:27:24.496980 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-k8s-certs\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.497121 kubelet[1762]: I0716 12:27:24.497037 1762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-kubeconfig\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.568174 
kubelet[1762]: I0716 12:27:24.568130 1762 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.568640 kubelet[1762]: E0716 12:27:24.568601 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.42:6443/api/v1/nodes\": dial tcp 10.230.12.42:6443: connect: connection refused" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.663033 env[1299]: time="2025-07-16T12:27:24.661788103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-j7d31.gb1.brightbox.com,Uid:7d84554ca9268727e579da3245d41e3e,Namespace:kube-system,Attempt:0,}" Jul 16 12:27:24.664698 env[1299]: time="2025-07-16T12:27:24.664572007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-j7d31.gb1.brightbox.com,Uid:0b37a305c01564b96cc7333cf2b4f2db,Namespace:kube-system,Attempt:0,}" Jul 16 12:27:24.665891 env[1299]: time="2025-07-16T12:27:24.665836633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-j7d31.gb1.brightbox.com,Uid:ee9e452f527071cb6307fde66d82403c,Namespace:kube-system,Attempt:0,}" Jul 16 12:27:24.800578 kubelet[1762]: E0716 12:27:24.800493 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-j7d31.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.42:6443: connect: connection refused" interval="800ms" Jul 16 12:27:24.972661 kubelet[1762]: I0716 12:27:24.972589 1762 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:24.973521 kubelet[1762]: E0716 12:27:24.973481 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.42:6443/api/v1/nodes\": dial tcp 10.230.12.42:6443: connect: connection refused" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:25.090609 kubelet[1762]: W0716 
12:27:25.090482 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:25.090609 kubelet[1762]: E0716 12:27:25.090596 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:25.151403 kubelet[1762]: W0716 12:27:25.151281 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.12.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:25.151781 kubelet[1762]: E0716 12:27:25.151712 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.12.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:25.327661 kubelet[1762]: W0716 12:27:25.326809 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.12.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:25.327661 kubelet[1762]: E0716 12:27:25.327551 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.230.12.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:25.562137 kubelet[1762]: W0716 12:27:25.562037 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.12.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-j7d31.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:25.562137 kubelet[1762]: E0716 12:27:25.562142 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.12.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-j7d31.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:25.602498 kubelet[1762]: E0716 12:27:25.602325 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-j7d31.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.42:6443: connect: connection refused" interval="1.6s" Jul 16 12:27:25.777121 kubelet[1762]: I0716 12:27:25.777074 1762 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:25.777644 kubelet[1762]: E0716 12:27:25.777604 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.42:6443/api/v1/nodes\": dial tcp 10.230.12.42:6443: connect: connection refused" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:25.901336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641568378.mount: Deactivated successfully. 
Jul 16 12:27:25.907787 env[1299]: time="2025-07-16T12:27:25.907717666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.908998 env[1299]: time="2025-07-16T12:27:25.908947421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.911213 env[1299]: time="2025-07-16T12:27:25.911175792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.912931 env[1299]: time="2025-07-16T12:27:25.912894883Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.915215 env[1299]: time="2025-07-16T12:27:25.915180585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.920180 env[1299]: time="2025-07-16T12:27:25.920142658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.921703 env[1299]: time="2025-07-16T12:27:25.921644035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.926117 env[1299]: time="2025-07-16T12:27:25.926037350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.927137 env[1299]: time="2025-07-16T12:27:25.927059063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.939800 env[1299]: time="2025-07-16T12:27:25.939688960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.945225 env[1299]: time="2025-07-16T12:27:25.945184269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.949356 env[1299]: time="2025-07-16T12:27:25.949316972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 16 12:27:25.991528 env[1299]: time="2025-07-16T12:27:25.991396073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:27:25.991804 env[1299]: time="2025-07-16T12:27:25.991730488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:27:25.991990 env[1299]: time="2025-07-16T12:27:25.991942286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:27:25.992634 env[1299]: time="2025-07-16T12:27:25.992573914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/895be96849a9fdb1d02236346ccbb895cd6cfea0ce005a05b7e50ec6fc2ee261 pid=1810 runtime=io.containerd.runc.v2 Jul 16 12:27:25.995608 env[1299]: time="2025-07-16T12:27:25.995333182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:27:25.995608 env[1299]: time="2025-07-16T12:27:25.995378131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:27:25.995608 env[1299]: time="2025-07-16T12:27:25.995423886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:27:25.996062 env[1299]: time="2025-07-16T12:27:25.995977793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6978a750e42cbac8ddbdd060ce4ca7ada577a1132da00b93556df03858901a82 pid=1809 runtime=io.containerd.runc.v2 Jul 16 12:27:26.006094 env[1299]: time="2025-07-16T12:27:26.004958956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:27:26.006355 env[1299]: time="2025-07-16T12:27:26.006047691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:27:26.006355 env[1299]: time="2025-07-16T12:27:26.006200264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:27:26.007023 env[1299]: time="2025-07-16T12:27:26.006942349Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9f2d87d5cb89c0f5ff8bcb678c52d59709adbf0120a50cf3963a36ebc4523ab pid=1827 runtime=io.containerd.runc.v2 Jul 16 12:27:26.136532 kubelet[1762]: E0716 12:27:26.136455 1762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.12.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:26.153889 env[1299]: time="2025-07-16T12:27:26.153729385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-j7d31.gb1.brightbox.com,Uid:ee9e452f527071cb6307fde66d82403c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f2d87d5cb89c0f5ff8bcb678c52d59709adbf0120a50cf3963a36ebc4523ab\"" Jul 16 12:27:26.166106 env[1299]: time="2025-07-16T12:27:26.166049830Z" level=info msg="CreateContainer within sandbox \"a9f2d87d5cb89c0f5ff8bcb678c52d59709adbf0120a50cf3963a36ebc4523ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 12:27:26.197209 env[1299]: time="2025-07-16T12:27:26.197125779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-j7d31.gb1.brightbox.com,Uid:7d84554ca9268727e579da3245d41e3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"895be96849a9fdb1d02236346ccbb895cd6cfea0ce005a05b7e50ec6fc2ee261\"" Jul 16 12:27:26.200265 env[1299]: time="2025-07-16T12:27:26.200216704Z" level=info msg="CreateContainer within sandbox \"895be96849a9fdb1d02236346ccbb895cd6cfea0ce005a05b7e50ec6fc2ee261\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 
12:27:26.209149 env[1299]: time="2025-07-16T12:27:26.209099621Z" level=info msg="CreateContainer within sandbox \"a9f2d87d5cb89c0f5ff8bcb678c52d59709adbf0120a50cf3963a36ebc4523ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b814cde5a4ad00ebba5b03126ca828975ff8f415f03f9288b4bb7bf46e805103\"" Jul 16 12:27:26.209939 env[1299]: time="2025-07-16T12:27:26.209889916Z" level=info msg="StartContainer for \"b814cde5a4ad00ebba5b03126ca828975ff8f415f03f9288b4bb7bf46e805103\"" Jul 16 12:27:26.217178 env[1299]: time="2025-07-16T12:27:26.217119046Z" level=info msg="CreateContainer within sandbox \"895be96849a9fdb1d02236346ccbb895cd6cfea0ce005a05b7e50ec6fc2ee261\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c0da0225a81ab92f18174fec4cb10ddbfd6071253b50d1830d7bdd694e90502\"" Jul 16 12:27:26.217941 env[1299]: time="2025-07-16T12:27:26.217907754Z" level=info msg="StartContainer for \"5c0da0225a81ab92f18174fec4cb10ddbfd6071253b50d1830d7bdd694e90502\"" Jul 16 12:27:26.220107 env[1299]: time="2025-07-16T12:27:26.218801839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-j7d31.gb1.brightbox.com,Uid:0b37a305c01564b96cc7333cf2b4f2db,Namespace:kube-system,Attempt:0,} returns sandbox id \"6978a750e42cbac8ddbdd060ce4ca7ada577a1132da00b93556df03858901a82\"" Jul 16 12:27:26.223974 env[1299]: time="2025-07-16T12:27:26.223923206Z" level=info msg="CreateContainer within sandbox \"6978a750e42cbac8ddbdd060ce4ca7ada577a1132da00b93556df03858901a82\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 12:27:26.237797 env[1299]: time="2025-07-16T12:27:26.237724633Z" level=info msg="CreateContainer within sandbox \"6978a750e42cbac8ddbdd060ce4ca7ada577a1132da00b93556df03858901a82\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0b56c766879db437c8773b859f5470869665c387a10e1b74e93eea8e30fe3fe1\"" Jul 16 12:27:26.238805 env[1299]: 
time="2025-07-16T12:27:26.238734735Z" level=info msg="StartContainer for \"0b56c766879db437c8773b859f5470869665c387a10e1b74e93eea8e30fe3fe1\"" Jul 16 12:27:26.390995 env[1299]: time="2025-07-16T12:27:26.390931092Z" level=info msg="StartContainer for \"b814cde5a4ad00ebba5b03126ca828975ff8f415f03f9288b4bb7bf46e805103\" returns successfully" Jul 16 12:27:26.397061 env[1299]: time="2025-07-16T12:27:26.396985681Z" level=info msg="StartContainer for \"5c0da0225a81ab92f18174fec4cb10ddbfd6071253b50d1830d7bdd694e90502\" returns successfully" Jul 16 12:27:26.447486 env[1299]: time="2025-07-16T12:27:26.447429698Z" level=info msg="StartContainer for \"0b56c766879db437c8773b859f5470869665c387a10e1b74e93eea8e30fe3fe1\" returns successfully" Jul 16 12:27:26.871689 kubelet[1762]: W0716 12:27:26.871527 1762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.42:6443: connect: connection refused Jul 16 12:27:26.872003 kubelet[1762]: E0716 12:27:26.871966 1762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.42:6443: connect: connection refused" logger="UnhandledError" Jul 16 12:27:27.204021 kubelet[1762]: E0716 12:27:27.203914 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-j7d31.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.42:6443: connect: connection refused" interval="3.2s" Jul 16 12:27:27.384454 kubelet[1762]: I0716 12:27:27.384417 1762 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:27.385176 
kubelet[1762]: E0716 12:27:27.385132 1762 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.42:6443/api/v1/nodes\": dial tcp 10.230.12.42:6443: connect: connection refused" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:29.780990 kubelet[1762]: E0716 12:27:29.780806 1762 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-j7d31.gb1.brightbox.com.1852bb0719aceebb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-j7d31.gb1.brightbox.com,UID:srv-j7d31.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-j7d31.gb1.brightbox.com,},FirstTimestamp:2025-07-16 12:27:24.170473147 +0000 UTC m=+0.621340127,LastTimestamp:2025-07-16 12:27:24.170473147 +0000 UTC m=+0.621340127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-j7d31.gb1.brightbox.com,}" Jul 16 12:27:29.984397 systemd[1]: Started sshd@5-10.230.12.42:22-194.0.234.19:30338.service. 
Jul 16 12:27:30.091461 kubelet[1762]: E0716 12:27:30.091249 1762 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "srv-j7d31.gb1.brightbox.com" not found Jul 16 12:27:30.168127 kubelet[1762]: I0716 12:27:30.168065 1762 apiserver.go:52] "Watching apiserver" Jul 16 12:27:30.193764 kubelet[1762]: I0716 12:27:30.193690 1762 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 12:27:30.409620 kubelet[1762]: E0716 12:27:30.409480 1762 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-j7d31.gb1.brightbox.com\" not found" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:30.450714 kubelet[1762]: E0716 12:27:30.450682 1762 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "srv-j7d31.gb1.brightbox.com" not found Jul 16 12:27:30.589017 kubelet[1762]: I0716 12:27:30.588965 1762 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:30.601278 kubelet[1762]: I0716 12:27:30.601242 1762 kubelet_node_status.go:75] "Successfully registered node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:30.783517 sshd[2038]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=194.0.234.19 user=root Jul 16 12:27:30.844620 kubelet[1762]: W0716 12:27:30.844570 1762 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:27:32.113084 systemd[1]: Reloading. 
Jul 16 12:27:32.219314 /usr/lib/systemd/system-generators/torcx-generator[2064]: time="2025-07-16T12:27:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 16 12:27:32.220080 /usr/lib/systemd/system-generators/torcx-generator[2064]: time="2025-07-16T12:27:32Z" level=info msg="torcx already run" Jul 16 12:27:32.365469 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 16 12:27:32.365506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 16 12:27:32.398408 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 12:27:32.558787 systemd[1]: Stopping kubelet.service... Jul 16 12:27:32.567087 sshd[2038]: Failed password for root from 194.0.234.19 port 30338 ssh2 Jul 16 12:27:32.579514 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 12:27:32.580071 systemd[1]: Stopped kubelet.service. Jul 16 12:27:32.593918 systemd[1]: Starting kubelet.service... Jul 16 12:27:33.781586 sshd[2038]: Connection closed by authenticating user root 194.0.234.19 port 30338 [preauth] Jul 16 12:27:33.784278 systemd[1]: sshd@5-10.230.12.42:22-194.0.234.19:30338.service: Deactivated successfully. Jul 16 12:27:34.059284 systemd[1]: Started kubelet.service. Jul 16 12:27:34.193107 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:27:34.193810 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 12:27:34.193922 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 12:27:34.194202 kubelet[2127]: I0716 12:27:34.194118 2127 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 12:27:34.209269 kubelet[2127]: I0716 12:27:34.209212 2127 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 12:27:34.209529 kubelet[2127]: I0716 12:27:34.209504 2127 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 12:27:34.210077 kubelet[2127]: I0716 12:27:34.210053 2127 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 12:27:34.213786 kubelet[2127]: I0716 12:27:34.213734 2127 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 16 12:27:34.225280 kubelet[2127]: I0716 12:27:34.225233 2127 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 12:27:34.238678 sudo[2142]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 16 12:27:34.239186 sudo[2142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 16 12:27:34.247139 kubelet[2127]: E0716 12:27:34.247083 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 16 12:27:34.247708 kubelet[2127]: I0716 12:27:34.247677 2127 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 16 12:27:34.256457 kubelet[2127]: I0716 12:27:34.256430 2127 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 12:27:34.257241 kubelet[2127]: I0716 12:27:34.257203 2127 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 12:27:34.257651 kubelet[2127]: I0716 12:27:34.257596 2127 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 12:27:34.258090 kubelet[2127]: I0716 12:27:34.257802 2127 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-j7d31.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Top
ologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 16 12:27:34.258369 kubelet[2127]: I0716 12:27:34.258340 2127 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 12:27:34.258509 kubelet[2127]: I0716 12:27:34.258484 2127 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 12:27:34.258696 kubelet[2127]: I0716 12:27:34.258670 2127 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:27:34.259098 kubelet[2127]: I0716 12:27:34.259074 2127 kubelet.go:408] "Attempting to sync node with API server" Jul 16 12:27:34.264470 kubelet[2127]: I0716 12:27:34.262956 2127 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 12:27:34.264470 kubelet[2127]: I0716 12:27:34.263111 2127 kubelet.go:314] "Adding apiserver pod source" Jul 16 12:27:34.268875 kubelet[2127]: I0716 12:27:34.268514 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 12:27:34.283412 kubelet[2127]: I0716 12:27:34.279922 2127 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 16 12:27:34.283412 kubelet[2127]: I0716 12:27:34.283120 2127 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 12:27:34.285288 kubelet[2127]: I0716 12:27:34.285257 2127 server.go:1274] "Started kubelet" Jul 16 12:27:34.297401 kubelet[2127]: E0716 12:27:34.297340 2127 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 12:27:34.299011 kubelet[2127]: I0716 12:27:34.298369 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 12:27:34.302725 kubelet[2127]: I0716 12:27:34.302657 2127 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 12:27:34.306403 kubelet[2127]: I0716 12:27:34.306370 2127 server.go:449] "Adding debug handlers to kubelet server" Jul 16 12:27:34.310209 kubelet[2127]: I0716 12:27:34.310168 2127 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 12:27:34.315037 kubelet[2127]: I0716 12:27:34.310841 2127 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 12:27:34.315037 kubelet[2127]: I0716 12:27:34.311053 2127 reconciler.go:26] "Reconciler: start to sync state" Jul 16 12:27:34.315037 kubelet[2127]: I0716 12:27:34.312441 2127 factory.go:221] Registration of the systemd container factory successfully Jul 16 12:27:34.315037 kubelet[2127]: I0716 12:27:34.312573 2127 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 12:27:34.317365 kubelet[2127]: I0716 12:27:34.317321 2127 factory.go:221] Registration of the containerd container factory successfully Jul 16 12:27:34.317934 kubelet[2127]: I0716 12:27:34.317863 2127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 12:27:34.324148 kubelet[2127]: I0716 12:27:34.323933 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 12:27:34.328028 kubelet[2127]: I0716 12:27:34.327998 2127 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 12:27:34.375997 kubelet[2127]: 
I0716 12:27:34.375915 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 12:27:34.383421 kubelet[2127]: I0716 12:27:34.382867 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 16 12:27:34.383421 kubelet[2127]: I0716 12:27:34.382914 2127 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 12:27:34.383421 kubelet[2127]: I0716 12:27:34.382942 2127 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 12:27:34.383421 kubelet[2127]: E0716 12:27:34.383008 2127 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 12:27:34.485245 kubelet[2127]: E0716 12:27:34.485175 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 16 12:27:34.503533 kubelet[2127]: I0716 12:27:34.503494 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 12:27:34.503533 kubelet[2127]: I0716 12:27:34.503526 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 12:27:34.503783 kubelet[2127]: I0716 12:27:34.503553 2127 state_mem.go:36] "Initialized new in-memory state store" Jul 16 12:27:34.503876 kubelet[2127]: I0716 12:27:34.503845 2127 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 16 12:27:34.503970 kubelet[2127]: I0716 12:27:34.503875 2127 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 16 12:27:34.503970 kubelet[2127]: I0716 12:27:34.503915 2127 policy_none.go:49] "None policy: Start" Jul 16 12:27:34.505889 kubelet[2127]: I0716 12:27:34.505858 2127 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 12:27:34.505980 kubelet[2127]: I0716 12:27:34.505899 2127 state_mem.go:35] "Initializing new in-memory state store" Jul 16 12:27:34.506836 kubelet[2127]: I0716 12:27:34.506788 2127 state_mem.go:75] "Updated machine memory state" Jul 16 12:27:34.511576 
kubelet[2127]: I0716 12:27:34.511543 2127 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 12:27:34.511898 kubelet[2127]: I0716 12:27:34.511854 2127 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 12:27:34.511987 kubelet[2127]: I0716 12:27:34.511902 2127 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 12:27:34.517376 kubelet[2127]: I0716 12:27:34.515363 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 12:27:34.640413 kubelet[2127]: I0716 12:27:34.636846 2127 kubelet_node_status.go:72] "Attempting to register node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.651665 kubelet[2127]: I0716 12:27:34.651609 2127 kubelet_node_status.go:111] "Node was previously registered" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.651865 kubelet[2127]: I0716 12:27:34.651765 2127 kubelet_node_status.go:75] "Successfully registered node" node="srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.697590 kubelet[2127]: W0716 12:27:34.697530 2127 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:27:34.704177 kubelet[2127]: W0716 12:27:34.704145 2127 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:27:34.704492 kubelet[2127]: W0716 12:27:34.704467 2127 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 12:27:34.704602 kubelet[2127]: E0716 12:27:34.704564 2127 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 
12:27:34.713856 kubelet[2127]: I0716 12:27:34.713797 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee9e452f527071cb6307fde66d82403c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" (UID: \"ee9e452f527071cb6307fde66d82403c\") " pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.713980 kubelet[2127]: I0716 12:27:34.713871 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-ca-certs\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.713980 kubelet[2127]: I0716 12:27:34.713925 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-kubeconfig\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.713980 kubelet[2127]: I0716 12:27:34.713962 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.714168 kubelet[2127]: I0716 12:27:34.714025 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ee9e452f527071cb6307fde66d82403c-ca-certs\") pod \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" (UID: \"ee9e452f527071cb6307fde66d82403c\") " pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.714168 kubelet[2127]: I0716 12:27:34.714088 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee9e452f527071cb6307fde66d82403c-k8s-certs\") pod \"kube-apiserver-srv-j7d31.gb1.brightbox.com\" (UID: \"ee9e452f527071cb6307fde66d82403c\") " pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.714168 kubelet[2127]: I0716 12:27:34.714119 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-flexvolume-dir\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.714340 kubelet[2127]: I0716 12:27:34.714168 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d84554ca9268727e579da3245d41e3e-k8s-certs\") pod \"kube-controller-manager-srv-j7d31.gb1.brightbox.com\" (UID: \"7d84554ca9268727e579da3245d41e3e\") " pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:34.714340 kubelet[2127]: I0716 12:27:34.714202 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b37a305c01564b96cc7333cf2b4f2db-kubeconfig\") pod \"kube-scheduler-srv-j7d31.gb1.brightbox.com\" (UID: \"0b37a305c01564b96cc7333cf2b4f2db\") " pod="kube-system/kube-scheduler-srv-j7d31.gb1.brightbox.com" Jul 16 12:27:35.226201 sudo[2142]: 
pam_unix(sudo:session): session closed for user root Jul 16 12:27:35.271729 kubelet[2127]: I0716 12:27:35.271662 2127 apiserver.go:52] "Watching apiserver" Jul 16 12:27:35.312032 kubelet[2127]: I0716 12:27:35.311955 2127 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 12:27:35.465813 kubelet[2127]: I0716 12:27:35.465629 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-j7d31.gb1.brightbox.com" podStartSLOduration=1.465593541 podStartE2EDuration="1.465593541s" podCreationTimestamp="2025-07-16 12:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:27:35.463714159 +0000 UTC m=+1.383750019" watchObservedRunningTime="2025-07-16 12:27:35.465593541 +0000 UTC m=+1.385629398" Jul 16 12:27:35.517238 kubelet[2127]: I0716 12:27:35.517030 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-j7d31.gb1.brightbox.com" podStartSLOduration=5.516980882 podStartE2EDuration="5.516980882s" podCreationTimestamp="2025-07-16 12:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:27:35.514682239 +0000 UTC m=+1.434718105" watchObservedRunningTime="2025-07-16 12:27:35.516980882 +0000 UTC m=+1.437016729" Jul 16 12:27:35.517548 kubelet[2127]: I0716 12:27:35.517226 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-j7d31.gb1.brightbox.com" podStartSLOduration=1.517211003 podStartE2EDuration="1.517211003s" podCreationTimestamp="2025-07-16 12:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:27:35.481701089 +0000 UTC m=+1.401736981" 
watchObservedRunningTime="2025-07-16 12:27:35.517211003 +0000 UTC m=+1.437246863" Jul 16 12:27:37.279794 kubelet[2127]: I0716 12:27:37.279688 2127 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 16 12:27:37.280923 env[1299]: time="2025-07-16T12:27:37.280869001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 16 12:27:37.281786 kubelet[2127]: I0716 12:27:37.281758 2127 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 16 12:27:37.903385 sudo[1450]: pam_unix(sudo:session): session closed for user root Jul 16 12:27:38.048275 sshd[1446]: pam_unix(sshd:session): session closed for user core Jul 16 12:27:38.053057 systemd[1]: sshd@4-10.230.12.42:22-147.75.109.163:34194.service: Deactivated successfully. Jul 16 12:27:38.054539 systemd[1]: session-5.scope: Deactivated successfully. Jul 16 12:27:38.058566 systemd-logind[1280]: Session 5 logged out. Waiting for processes to exit. Jul 16 12:27:38.062445 systemd-logind[1280]: Removed session 5. 
Jul 16 12:27:38.100663 kubelet[2127]: E0716 12:27:38.100565 2127 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zqbwm lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-jn8dx" podUID="84487a86-da0a-4401-a3a1-2956513c093b" Jul 16 12:27:38.138174 kubelet[2127]: I0716 12:27:38.137999 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/504e727a-564d-449d-b9af-baea5fd4ad0b-xtables-lock\") pod \"kube-proxy-7w9rx\" (UID: \"504e727a-564d-449d-b9af-baea5fd4ad0b\") " pod="kube-system/kube-proxy-7w9rx" Jul 16 12:27:38.138174 kubelet[2127]: I0716 12:27:38.138085 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-run\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138174 kubelet[2127]: I0716 12:27:38.138149 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/504e727a-564d-449d-b9af-baea5fd4ad0b-lib-modules\") pod \"kube-proxy-7w9rx\" (UID: \"504e727a-564d-449d-b9af-baea5fd4ad0b\") " pod="kube-system/kube-proxy-7w9rx" Jul 16 12:27:38.138174 kubelet[2127]: I0716 12:27:38.138181 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-bpf-maps\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138576 
kubelet[2127]: I0716 12:27:38.138228 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84487a86-da0a-4401-a3a1-2956513c093b-cilium-config-path\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138576 kubelet[2127]: I0716 12:27:38.138280 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/504e727a-564d-449d-b9af-baea5fd4ad0b-kube-proxy\") pod \"kube-proxy-7w9rx\" (UID: \"504e727a-564d-449d-b9af-baea5fd4ad0b\") " pod="kube-system/kube-proxy-7w9rx" Jul 16 12:27:38.138576 kubelet[2127]: I0716 12:27:38.138314 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84487a86-da0a-4401-a3a1-2956513c093b-clustermesh-secrets\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138576 kubelet[2127]: I0716 12:27:38.138380 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-net\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138576 kubelet[2127]: I0716 12:27:38.138411 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-kernel\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138895 kubelet[2127]: I0716 12:27:38.138455 2127 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqbwm\" (UniqueName: \"kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-kube-api-access-zqbwm\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138895 kubelet[2127]: I0716 12:27:38.138486 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cni-path\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138895 kubelet[2127]: I0716 12:27:38.138514 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-lib-modules\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138895 kubelet[2127]: I0716 12:27:38.138567 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-cgroup\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138895 kubelet[2127]: I0716 12:27:38.138608 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-hubble-tls\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx" Jul 16 12:27:38.138895 kubelet[2127]: I0716 12:27:38.138659 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x5k\" (UniqueName: 
\"kubernetes.io/projected/504e727a-564d-449d-b9af-baea5fd4ad0b-kube-api-access-27x5k\") pod \"kube-proxy-7w9rx\" (UID: \"504e727a-564d-449d-b9af-baea5fd4ad0b\") " pod="kube-system/kube-proxy-7w9rx"
Jul 16 12:27:38.139245 kubelet[2127]: I0716 12:27:38.138696 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-hostproc\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx"
Jul 16 12:27:38.139245 kubelet[2127]: I0716 12:27:38.138771 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-etc-cni-netd\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx"
Jul 16 12:27:38.139245 kubelet[2127]: I0716 12:27:38.138805 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-xtables-lock\") pod \"cilium-jn8dx\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") " pod="kube-system/cilium-jn8dx"
Jul 16 12:27:38.240582 kubelet[2127]: I0716 12:27:38.240515 2127 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jul 16 12:27:38.331029 env[1299]: time="2025-07-16T12:27:38.330945267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7w9rx,Uid:504e727a-564d-449d-b9af-baea5fd4ad0b,Namespace:kube-system,Attempt:0,}"
Jul 16 12:27:38.409799 env[1299]: time="2025-07-16T12:27:38.405305131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 16 12:27:38.409799 env[1299]: time="2025-07-16T12:27:38.405447984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 16 12:27:38.409799 env[1299]: time="2025-07-16T12:27:38.405470387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 16 12:27:38.409799 env[1299]: time="2025-07-16T12:27:38.405887553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b7bdb3434f01eada7173d9ccef24b9cb8b9e5780c705fd454d62050d0d894bb pid=2209 runtime=io.containerd.runc.v2
Jul 16 12:27:38.444130 kubelet[2127]: I0716 12:27:38.444071 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnj9j\" (UniqueName: \"kubernetes.io/projected/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-kube-api-access-pnj9j\") pod \"cilium-operator-5d85765b45-5fq7x\" (UID: \"7e2a4cf9-24ff-4256-a773-c4e7de03ed15\") " pod="kube-system/cilium-operator-5d85765b45-5fq7x"
Jul 16 12:27:38.444130 kubelet[2127]: I0716 12:27:38.444133 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-cilium-config-path\") pod \"cilium-operator-5d85765b45-5fq7x\" (UID: \"7e2a4cf9-24ff-4256-a773-c4e7de03ed15\") " pod="kube-system/cilium-operator-5d85765b45-5fq7x"
Jul 16 12:27:38.533095 env[1299]: time="2025-07-16T12:27:38.532767810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7w9rx,Uid:504e727a-564d-449d-b9af-baea5fd4ad0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b7bdb3434f01eada7173d9ccef24b9cb8b9e5780c705fd454d62050d0d894bb\""
Jul 16 12:27:38.539187 env[1299]: time="2025-07-16T12:27:38.539140813Z" level=info msg="CreateContainer within sandbox \"6b7bdb3434f01eada7173d9ccef24b9cb8b9e5780c705fd454d62050d0d894bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 16 12:27:38.544837 kubelet[2127]: I0716 12:27:38.544789 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-bpf-maps\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.544975 kubelet[2127]: I0716 12:27:38.544849 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84487a86-da0a-4401-a3a1-2956513c093b-clustermesh-secrets\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.544975 kubelet[2127]: I0716 12:27:38.544879 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-cgroup\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.544975 kubelet[2127]: I0716 12:27:38.544911 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqbwm\" (UniqueName: \"kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-kube-api-access-zqbwm\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.544975 kubelet[2127]: I0716 12:27:38.544939 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-etc-cni-netd\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545268 kubelet[2127]: I0716 12:27:38.544976 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-hubble-tls\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545268 kubelet[2127]: I0716 12:27:38.545005 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-kernel\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545268 kubelet[2127]: I0716 12:27:38.545042 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-lib-modules\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545268 kubelet[2127]: I0716 12:27:38.545070 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cni-path\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545268 kubelet[2127]: I0716 12:27:38.545107 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-xtables-lock\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545268 kubelet[2127]: I0716 12:27:38.545133 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-run\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545600 kubelet[2127]: I0716 12:27:38.545172 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84487a86-da0a-4401-a3a1-2956513c093b-cilium-config-path\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545600 kubelet[2127]: I0716 12:27:38.545197 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-net\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.545600 kubelet[2127]: I0716 12:27:38.545249 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-hostproc\") pod \"84487a86-da0a-4401-a3a1-2956513c093b\" (UID: \"84487a86-da0a-4401-a3a1-2956513c093b\") "
Jul 16 12:27:38.546913 kubelet[2127]: I0716 12:27:38.545885 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.546913 kubelet[2127]: I0716 12:27:38.545969 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.546913 kubelet[2127]: I0716 12:27:38.546816 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.550449 kubelet[2127]: I0716 12:27:38.550390 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.553157 kubelet[2127]: I0716 12:27:38.553124 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cni-path" (OuterVolumeSpecName: "cni-path") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.553263 kubelet[2127]: I0716 12:27:38.553190 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.553263 kubelet[2127]: I0716 12:27:38.553244 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.555836 kubelet[2127]: I0716 12:27:38.555802 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.556023 kubelet[2127]: I0716 12:27:38.555992 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.556223 kubelet[2127]: I0716 12:27:38.556186 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-hostproc" (OuterVolumeSpecName: "hostproc") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:27:38.566296 kubelet[2127]: I0716 12:27:38.566255 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84487a86-da0a-4401-a3a1-2956513c093b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 16 12:27:38.566590 kubelet[2127]: I0716 12:27:38.566560 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 16 12:27:38.567281 kubelet[2127]: I0716 12:27:38.567193 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-kube-api-access-zqbwm" (OuterVolumeSpecName: "kube-api-access-zqbwm") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "kube-api-access-zqbwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 16 12:27:38.568198 kubelet[2127]: I0716 12:27:38.568127 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84487a86-da0a-4401-a3a1-2956513c093b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84487a86-da0a-4401-a3a1-2956513c093b" (UID: "84487a86-da0a-4401-a3a1-2956513c093b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 16 12:27:38.579321 env[1299]: time="2025-07-16T12:27:38.579247955Z" level=info msg="CreateContainer within sandbox \"6b7bdb3434f01eada7173d9ccef24b9cb8b9e5780c705fd454d62050d0d894bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef5a6170a5ce158845cba691f88ea50b462fb366e6e6c62b06d4c3f44206c852\""
Jul 16 12:27:38.582999 env[1299]: time="2025-07-16T12:27:38.582960443Z" level=info msg="StartContainer for \"ef5a6170a5ce158845cba691f88ea50b462fb366e6e6c62b06d4c3f44206c852\""
Jul 16 12:27:38.623842 env[1299]: time="2025-07-16T12:27:38.620642917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5fq7x,Uid:7e2a4cf9-24ff-4256-a773-c4e7de03ed15,Namespace:kube-system,Attempt:0,}"
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.645901 2127 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-hostproc\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.645955 2127 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-bpf-maps\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.645974 2127 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84487a86-da0a-4401-a3a1-2956513c093b-clustermesh-secrets\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.645992 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-cgroup\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.646014 2127 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqbwm\" (UniqueName: \"kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-kube-api-access-zqbwm\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.646031 2127 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-etc-cni-netd\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.646048 2127 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84487a86-da0a-4401-a3a1-2956513c093b-hubble-tls\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646209 kubelet[2127]: I0716 12:27:38.646080 2127 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-kernel\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646854 kubelet[2127]: I0716 12:27:38.646097 2127 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-lib-modules\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646854 kubelet[2127]: I0716 12:27:38.646113 2127 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-xtables-lock\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646854 kubelet[2127]: I0716 12:27:38.646128 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cilium-run\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646854 kubelet[2127]: I0716 12:27:38.646143 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84487a86-da0a-4401-a3a1-2956513c093b-cilium-config-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646854 kubelet[2127]: I0716 12:27:38.646158 2127 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-host-proc-sys-net\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.646854 kubelet[2127]: I0716 12:27:38.646174 2127 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84487a86-da0a-4401-a3a1-2956513c093b-cni-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:27:38.670067 env[1299]: time="2025-07-16T12:27:38.669938013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 16 12:27:38.670067 env[1299]: time="2025-07-16T12:27:38.670016653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 16 12:27:38.670067 env[1299]: time="2025-07-16T12:27:38.670033956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 16 12:27:38.670827 env[1299]: time="2025-07-16T12:27:38.670684190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706 pid=2280 runtime=io.containerd.runc.v2
Jul 16 12:27:38.680599 env[1299]: time="2025-07-16T12:27:38.680385498Z" level=info msg="StartContainer for \"ef5a6170a5ce158845cba691f88ea50b462fb366e6e6c62b06d4c3f44206c852\" returns successfully"
Jul 16 12:27:38.774852 env[1299]: time="2025-07-16T12:27:38.774779901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5fq7x,Uid:7e2a4cf9-24ff-4256-a773-c4e7de03ed15,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706\""
Jul 16 12:27:38.779852 env[1299]: time="2025-07-16T12:27:38.779362339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 16 12:27:39.255444 systemd[1]: var-lib-kubelet-pods-84487a86\x2dda0a\x2d4401\x2da3a1\x2d2956513c093b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzqbwm.mount: Deactivated successfully.
Jul 16 12:27:39.255701 systemd[1]: var-lib-kubelet-pods-84487a86\x2dda0a\x2d4401\x2da3a1\x2d2956513c093b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 16 12:27:39.255873 systemd[1]: var-lib-kubelet-pods-84487a86\x2dda0a\x2d4401\x2da3a1\x2d2956513c093b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 16 12:27:39.536465 kubelet[2127]: I0716 12:27:39.536298 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7w9rx" podStartSLOduration=2.536268827 podStartE2EDuration="2.536268827s" podCreationTimestamp="2025-07-16 12:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:27:39.473010593 +0000 UTC m=+5.393046460" watchObservedRunningTime="2025-07-16 12:27:39.536268827 +0000 UTC m=+5.456304687"
Jul 16 12:27:39.758298 kubelet[2127]: I0716 12:27:39.758245 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-bpf-maps\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758537 kubelet[2127]: I0716 12:27:39.758344 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hostproc\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758537 kubelet[2127]: I0716 12:27:39.758383 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-cgroup\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758537 kubelet[2127]: I0716 12:27:39.758413 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-config-path\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758537 kubelet[2127]: I0716 12:27:39.758440 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hubble-tls\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758537 kubelet[2127]: I0716 12:27:39.758466 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-etc-cni-netd\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758537 kubelet[2127]: I0716 12:27:39.758507 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-net\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758976 kubelet[2127]: I0716 12:27:39.758532 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-kernel\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758976 kubelet[2127]: I0716 12:27:39.758558 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtd8\" (UniqueName: \"kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-kube-api-access-pmtd8\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758976 kubelet[2127]: I0716 12:27:39.758593 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cni-path\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758976 kubelet[2127]: I0716 12:27:39.758618 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-lib-modules\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758976 kubelet[2127]: I0716 12:27:39.758653 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19e1072b-fa63-49e6-8ae3-efe7556ebbab-clustermesh-secrets\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.758976 kubelet[2127]: I0716 12:27:39.758701 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-xtables-lock\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.759333 kubelet[2127]: I0716 12:27:39.758731 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-run\") pod \"cilium-8rj4p\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") " pod="kube-system/cilium-8rj4p"
Jul 16 12:27:39.916667 env[1299]: time="2025-07-16T12:27:39.916570732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rj4p,Uid:19e1072b-fa63-49e6-8ae3-efe7556ebbab,Namespace:kube-system,Attempt:0,}"
Jul 16 12:27:39.939977 env[1299]: time="2025-07-16T12:27:39.939547262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 16 12:27:39.939977 env[1299]: time="2025-07-16T12:27:39.939621756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 16 12:27:39.939977 env[1299]: time="2025-07-16T12:27:39.939639194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 16 12:27:39.941078 env[1299]: time="2025-07-16T12:27:39.940082620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c pid=2464 runtime=io.containerd.runc.v2
Jul 16 12:27:40.006518 env[1299]: time="2025-07-16T12:27:40.006449505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rj4p,Uid:19e1072b-fa63-49e6-8ae3-efe7556ebbab,Namespace:kube-system,Attempt:0,} returns sandbox id \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\""
Jul 16 12:27:40.387434 kubelet[2127]: I0716 12:27:40.387308 2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84487a86-da0a-4401-a3a1-2956513c093b" path="/var/lib/kubelet/pods/84487a86-da0a-4401-a3a1-2956513c093b/volumes"
Jul 16 12:27:40.900759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239957150.mount: Deactivated successfully.
Jul 16 12:27:42.076170 env[1299]: time="2025-07-16T12:27:42.076095126Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 16 12:27:42.080188 env[1299]: time="2025-07-16T12:27:42.080108431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 16 12:27:42.083224 env[1299]: time="2025-07-16T12:27:42.083181723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 16 12:27:42.085192 env[1299]: time="2025-07-16T12:27:42.084199382Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 16 12:27:42.089630 env[1299]: time="2025-07-16T12:27:42.089185095Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 16 12:27:42.090584 env[1299]: time="2025-07-16T12:27:42.090512246Z" level=info msg="CreateContainer within sandbox \"ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 16 12:27:42.130018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726981994.mount: Deactivated successfully.
Jul 16 12:27:42.143641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750920470.mount: Deactivated successfully.
Jul 16 12:27:42.146032 env[1299]: time="2025-07-16T12:27:42.145959432Z" level=info msg="CreateContainer within sandbox \"ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\""
Jul 16 12:27:42.149093 env[1299]: time="2025-07-16T12:27:42.148198510Z" level=info msg="StartContainer for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\""
Jul 16 12:27:42.244201 env[1299]: time="2025-07-16T12:27:42.243362355Z" level=info msg="StartContainer for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" returns successfully"
Jul 16 12:27:42.635641 kubelet[2127]: I0716 12:27:42.635546 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5fq7x" podStartSLOduration=1.326120014 podStartE2EDuration="4.635525446s" podCreationTimestamp="2025-07-16 12:27:38 +0000 UTC" firstStartedPulling="2025-07-16 12:27:38.777212699 +0000 UTC m=+4.697248546" lastFinishedPulling="2025-07-16 12:27:42.086618118 +0000 UTC m=+8.006653978" observedRunningTime="2025-07-16 12:27:42.544240424 +0000 UTC m=+8.464276295" watchObservedRunningTime="2025-07-16 12:27:42.635525446 +0000 UTC m=+8.555561301"
Jul 16 12:27:50.648953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642540807.mount: Deactivated successfully.
Jul 16 12:27:55.863532 env[1299]: time="2025-07-16T12:27:55.863441337Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 16 12:27:55.867416 env[1299]: time="2025-07-16T12:27:55.866962506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 16 12:27:55.880137 env[1299]: time="2025-07-16T12:27:55.880085606Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 16 12:27:55.880560 env[1299]: time="2025-07-16T12:27:55.880521733Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 16 12:27:55.887353 env[1299]: time="2025-07-16T12:27:55.887308525Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 16 12:27:55.917426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505439428.mount: Deactivated successfully.
Jul 16 12:27:55.928389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448048680.mount: Deactivated successfully.
Jul 16 12:27:55.933109 env[1299]: time="2025-07-16T12:27:55.933049940Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\""
Jul 16 12:27:55.936826 env[1299]: time="2025-07-16T12:27:55.936781152Z" level=info msg="StartContainer for \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\""
Jul 16 12:27:56.029914 env[1299]: time="2025-07-16T12:27:56.029827490Z" level=info msg="StartContainer for \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\" returns successfully"
Jul 16 12:27:56.188937 env[1299]: time="2025-07-16T12:27:56.188860075Z" level=info msg="shim disconnected" id=e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08
Jul 16 12:27:56.189648 env[1299]: time="2025-07-16T12:27:56.189334396Z" level=warning msg="cleaning up after shim disconnected" id=e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08 namespace=k8s.io
Jul 16 12:27:56.189830 env[1299]: time="2025-07-16T12:27:56.189797562Z" level=info msg="cleaning up dead shim"
Jul 16 12:27:56.207026 env[1299]: time="2025-07-16T12:27:56.206943739Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:27:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2584 runtime=io.containerd.runc.v2\n"
Jul 16 12:27:56.497584 env[1299]: time="2025-07-16T12:27:56.496632294Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 16 12:27:56.525323 env[1299]: time="2025-07-16T12:27:56.525220775Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\""
Jul 16 12:27:56.530637 env[1299]: time="2025-07-16T12:27:56.530582118Z" level=info msg="StartContainer for \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\""
Jul 16 12:27:56.622543 env[1299]: time="2025-07-16T12:27:56.622191037Z" level=info msg="StartContainer for \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\" returns successfully"
Jul 16 12:27:56.637826 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 16 12:27:56.638332 systemd[1]: Stopped systemd-sysctl.service.
Jul 16 12:27:56.640405 systemd[1]: Stopping systemd-sysctl.service...
Jul 16 12:27:56.645114 systemd[1]: Starting systemd-sysctl.service...
Jul 16 12:27:56.659337 systemd[1]: Finished systemd-sysctl.service.
Jul 16 12:27:56.684466 env[1299]: time="2025-07-16T12:27:56.684369580Z" level=info msg="shim disconnected" id=06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf
Jul 16 12:27:56.684992 env[1299]: time="2025-07-16T12:27:56.684772008Z" level=warning msg="cleaning up after shim disconnected" id=06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf namespace=k8s.io
Jul 16 12:27:56.685129 env[1299]: time="2025-07-16T12:27:56.685098511Z" level=info msg="cleaning up dead shim"
Jul 16 12:27:56.698781 env[1299]: time="2025-07-16T12:27:56.698677637Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:27:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2649 runtime=io.containerd.runc.v2\n"
Jul 16 12:27:56.908215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08-rootfs.mount: Deactivated successfully.
Jul 16 12:27:57.507877 env[1299]: time="2025-07-16T12:27:57.503958435Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 16 12:27:57.542565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216429725.mount: Deactivated successfully.
Jul 16 12:27:57.553399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529413585.mount: Deactivated successfully.
Jul 16 12:27:57.557006 env[1299]: time="2025-07-16T12:27:57.556943990Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\""
Jul 16 12:27:57.559075 env[1299]: time="2025-07-16T12:27:57.559000118Z" level=info msg="StartContainer for \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\""
Jul 16 12:27:57.652857 env[1299]: time="2025-07-16T12:27:57.652443076Z" level=info msg="StartContainer for \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\" returns successfully"
Jul 16 12:27:57.699131 env[1299]: time="2025-07-16T12:27:57.699038924Z" level=info msg="shim disconnected" id=88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690
Jul 16 12:27:57.699131 env[1299]: time="2025-07-16T12:27:57.699119154Z" level=warning msg="cleaning up after shim disconnected" id=88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690 namespace=k8s.io
Jul 16 12:27:57.699131 env[1299]: time="2025-07-16T12:27:57.699138036Z" level=info msg="cleaning up dead shim"
Jul 16 12:27:57.712536 env[1299]: time="2025-07-16T12:27:57.712431494Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:27:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2707 runtime=io.containerd.runc.v2\n"
Jul 16 12:27:58.505113 env[1299]:
time="2025-07-16T12:27:58.503151365Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 16 12:27:58.527400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536829580.mount: Deactivated successfully. Jul 16 12:27:58.540295 env[1299]: time="2025-07-16T12:27:58.540235313Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\"" Jul 16 12:27:58.544155 env[1299]: time="2025-07-16T12:27:58.543163738Z" level=info msg="StartContainer for \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\"" Jul 16 12:27:58.629259 env[1299]: time="2025-07-16T12:27:58.629188002Z" level=info msg="StartContainer for \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\" returns successfully" Jul 16 12:27:58.670897 env[1299]: time="2025-07-16T12:27:58.670722550Z" level=info msg="shim disconnected" id=ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2 Jul 16 12:27:58.671333 env[1299]: time="2025-07-16T12:27:58.671286638Z" level=warning msg="cleaning up after shim disconnected" id=ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2 namespace=k8s.io Jul 16 12:27:58.671501 env[1299]: time="2025-07-16T12:27:58.671459617Z" level=info msg="cleaning up dead shim" Jul 16 12:27:58.685285 env[1299]: time="2025-07-16T12:27:58.685240085Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:27:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2763 runtime=io.containerd.runc.v2\n" Jul 16 12:27:58.907910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2-rootfs.mount: Deactivated successfully. 
Jul 16 12:27:59.508910 env[1299]: time="2025-07-16T12:27:59.508835769Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 16 12:27:59.530751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042205161.mount: Deactivated successfully. Jul 16 12:27:59.547568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125746036.mount: Deactivated successfully. Jul 16 12:27:59.553078 env[1299]: time="2025-07-16T12:27:59.553024266Z" level=info msg="CreateContainer within sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\"" Jul 16 12:27:59.554106 env[1299]: time="2025-07-16T12:27:59.554071045Z" level=info msg="StartContainer for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\"" Jul 16 12:27:59.652122 env[1299]: time="2025-07-16T12:27:59.652041448Z" level=info msg="StartContainer for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" returns successfully" Jul 16 12:27:59.932894 kubelet[2127]: I0716 12:27:59.932848 2127 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 16 12:28:00.030498 kubelet[2127]: I0716 12:28:00.030317 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwf5d\" (UniqueName: \"kubernetes.io/projected/1255231f-c5c2-4367-914b-7f650ff17abb-kube-api-access-qwf5d\") pod \"coredns-7c65d6cfc9-99cxx\" (UID: \"1255231f-c5c2-4367-914b-7f650ff17abb\") " pod="kube-system/coredns-7c65d6cfc9-99cxx" Jul 16 12:28:00.030498 kubelet[2127]: I0716 12:28:00.030452 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4786517f-30bf-4b58-8110-f5f1b483f61a-config-volume\") pod \"coredns-7c65d6cfc9-9kx6h\" (UID: \"4786517f-30bf-4b58-8110-f5f1b483f61a\") " pod="kube-system/coredns-7c65d6cfc9-9kx6h" Jul 16 12:28:00.031135 kubelet[2127]: I0716 12:28:00.030496 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrbx\" (UniqueName: \"kubernetes.io/projected/4786517f-30bf-4b58-8110-f5f1b483f61a-kube-api-access-vhrbx\") pod \"coredns-7c65d6cfc9-9kx6h\" (UID: \"4786517f-30bf-4b58-8110-f5f1b483f61a\") " pod="kube-system/coredns-7c65d6cfc9-9kx6h" Jul 16 12:28:00.031135 kubelet[2127]: I0716 12:28:00.030558 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1255231f-c5c2-4367-914b-7f650ff17abb-config-volume\") pod \"coredns-7c65d6cfc9-99cxx\" (UID: \"1255231f-c5c2-4367-914b-7f650ff17abb\") " pod="kube-system/coredns-7c65d6cfc9-99cxx" Jul 16 12:28:00.326133 env[1299]: time="2025-07-16T12:28:00.325438310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9kx6h,Uid:4786517f-30bf-4b58-8110-f5f1b483f61a,Namespace:kube-system,Attempt:0,}" Jul 16 12:28:00.341100 env[1299]: time="2025-07-16T12:28:00.341028265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99cxx,Uid:1255231f-c5c2-4367-914b-7f650ff17abb,Namespace:kube-system,Attempt:0,}" Jul 16 12:28:00.544642 kubelet[2127]: I0716 12:28:00.544544 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8rj4p" podStartSLOduration=5.670466178 podStartE2EDuration="21.544510459s" podCreationTimestamp="2025-07-16 12:27:39 +0000 UTC" firstStartedPulling="2025-07-16 12:27:40.008334623 +0000 UTC m=+5.928370476" lastFinishedPulling="2025-07-16 12:27:55.882378907 +0000 UTC m=+21.802414757" observedRunningTime="2025-07-16 12:28:00.542466495 +0000 UTC m=+26.462502375" 
watchObservedRunningTime="2025-07-16 12:28:00.544510459 +0000 UTC m=+26.464546336" Jul 16 12:28:02.507008 systemd-networkd[1070]: cilium_host: Link UP Jul 16 12:28:02.517210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 16 12:28:02.512607 systemd-networkd[1070]: cilium_net: Link UP Jul 16 12:28:02.512619 systemd-networkd[1070]: cilium_net: Gained carrier Jul 16 12:28:02.515577 systemd-networkd[1070]: cilium_host: Gained carrier Jul 16 12:28:02.691053 systemd-networkd[1070]: cilium_vxlan: Link UP Jul 16 12:28:02.691065 systemd-networkd[1070]: cilium_vxlan: Gained carrier Jul 16 12:28:03.282844 kernel: NET: Registered PF_ALG protocol family Jul 16 12:28:03.354311 systemd-networkd[1070]: cilium_net: Gained IPv6LL Jul 16 12:28:03.354980 systemd-networkd[1070]: cilium_host: Gained IPv6LL Jul 16 12:28:04.378018 systemd-networkd[1070]: cilium_vxlan: Gained IPv6LL Jul 16 12:28:04.467910 systemd-networkd[1070]: lxc_health: Link UP Jul 16 12:28:04.481583 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 16 12:28:04.483385 systemd-networkd[1070]: lxc_health: Gained carrier Jul 16 12:28:04.921478 systemd-networkd[1070]: lxcd27ab9e63478: Link UP Jul 16 12:28:04.941796 kernel: eth0: renamed from tmp00057 Jul 16 12:28:04.961832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd27ab9e63478: link becomes ready Jul 16 12:28:04.958927 systemd-networkd[1070]: lxcd27ab9e63478: Gained carrier Jul 16 12:28:05.024440 systemd-networkd[1070]: lxc61ac94e267fb: Link UP Jul 16 12:28:05.044980 kernel: eth0: renamed from tmpfd8d9 Jul 16 12:28:05.048813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc61ac94e267fb: link becomes ready Jul 16 12:28:05.050051 systemd-networkd[1070]: lxc61ac94e267fb: Gained carrier Jul 16 12:28:05.885983 systemd-networkd[1070]: lxc_health: Gained IPv6LL Jul 16 12:28:06.196026 systemd-networkd[1070]: lxc61ac94e267fb: Gained IPv6LL Jul 16 12:28:06.618764 systemd-networkd[1070]: lxcd27ab9e63478: Gained IPv6LL Jul 16 12:28:11.051900 
env[1299]: time="2025-07-16T12:28:11.051677361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:28:11.051900 env[1299]: time="2025-07-16T12:28:11.051885608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:28:11.053795 env[1299]: time="2025-07-16T12:28:11.051996561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:28:11.053795 env[1299]: time="2025-07-16T12:28:11.052451007Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd8d92822db9e1ccd8c25a3d2c2176350299376189930a7dbb03554ad4445544 pid=3309 runtime=io.containerd.runc.v2 Jul 16 12:28:11.151382 env[1299]: time="2025-07-16T12:28:11.151252140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:28:11.151857 env[1299]: time="2025-07-16T12:28:11.151731609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:28:11.152159 env[1299]: time="2025-07-16T12:28:11.152061962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:28:11.153427 env[1299]: time="2025-07-16T12:28:11.153361113Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00057d909fbeaad22efd6283b9a2415b04cba0822955cb4f968b1206bad05d34 pid=3333 runtime=io.containerd.runc.v2 Jul 16 12:28:11.188122 systemd[1]: run-containerd-runc-k8s.io-00057d909fbeaad22efd6283b9a2415b04cba0822955cb4f968b1206bad05d34-runc.qKqMEt.mount: Deactivated successfully. 
Jul 16 12:28:11.328798 env[1299]: time="2025-07-16T12:28:11.328189057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99cxx,Uid:1255231f-c5c2-4367-914b-7f650ff17abb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd8d92822db9e1ccd8c25a3d2c2176350299376189930a7dbb03554ad4445544\"" Jul 16 12:28:11.338895 env[1299]: time="2025-07-16T12:28:11.338820935Z" level=info msg="CreateContainer within sandbox \"fd8d92822db9e1ccd8c25a3d2c2176350299376189930a7dbb03554ad4445544\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 12:28:11.349182 env[1299]: time="2025-07-16T12:28:11.349133896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9kx6h,Uid:4786517f-30bf-4b58-8110-f5f1b483f61a,Namespace:kube-system,Attempt:0,} returns sandbox id \"00057d909fbeaad22efd6283b9a2415b04cba0822955cb4f968b1206bad05d34\"" Jul 16 12:28:11.355159 env[1299]: time="2025-07-16T12:28:11.355096469Z" level=info msg="CreateContainer within sandbox \"00057d909fbeaad22efd6283b9a2415b04cba0822955cb4f968b1206bad05d34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 12:28:11.370798 env[1299]: time="2025-07-16T12:28:11.370716579Z" level=info msg="CreateContainer within sandbox \"00057d909fbeaad22efd6283b9a2415b04cba0822955cb4f968b1206bad05d34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f24bc7be4a2cd3393e5bc19d7732a0d12a0b3867eb6859eebdaad5cb265a3679\"" Jul 16 12:28:11.371767 env[1299]: time="2025-07-16T12:28:11.371702229Z" level=info msg="StartContainer for \"f24bc7be4a2cd3393e5bc19d7732a0d12a0b3867eb6859eebdaad5cb265a3679\"" Jul 16 12:28:11.373107 env[1299]: time="2025-07-16T12:28:11.373067462Z" level=info msg="CreateContainer within sandbox \"fd8d92822db9e1ccd8c25a3d2c2176350299376189930a7dbb03554ad4445544\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a35a26a140280e6e714e77ed76e4ba10f9e7323b182919c57a3a7e44d75aa8cf\"" Jul 16 12:28:11.374000 env[1299]: 
time="2025-07-16T12:28:11.373964184Z" level=info msg="StartContainer for \"a35a26a140280e6e714e77ed76e4ba10f9e7323b182919c57a3a7e44d75aa8cf\"" Jul 16 12:28:11.486186 env[1299]: time="2025-07-16T12:28:11.486095295Z" level=info msg="StartContainer for \"f24bc7be4a2cd3393e5bc19d7732a0d12a0b3867eb6859eebdaad5cb265a3679\" returns successfully" Jul 16 12:28:11.494597 env[1299]: time="2025-07-16T12:28:11.494469466Z" level=info msg="StartContainer for \"a35a26a140280e6e714e77ed76e4ba10f9e7323b182919c57a3a7e44d75aa8cf\" returns successfully" Jul 16 12:28:11.612777 kubelet[2127]: I0716 12:28:11.612510 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-99cxx" podStartSLOduration=33.612426156 podStartE2EDuration="33.612426156s" podCreationTimestamp="2025-07-16 12:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:28:11.611524362 +0000 UTC m=+37.531560239" watchObservedRunningTime="2025-07-16 12:28:11.612426156 +0000 UTC m=+37.532462020" Jul 16 12:28:11.645531 kubelet[2127]: I0716 12:28:11.645410 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9kx6h" podStartSLOduration=33.645385483 podStartE2EDuration="33.645385483s" podCreationTimestamp="2025-07-16 12:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:28:11.643057387 +0000 UTC m=+37.563093251" watchObservedRunningTime="2025-07-16 12:28:11.645385483 +0000 UTC m=+37.565421337" Jul 16 12:28:42.474393 systemd[1]: Started sshd@6-10.230.12.42:22-147.75.109.163:58554.service. 
Jul 16 12:28:43.388073 sshd[3471]: Accepted publickey for core from 147.75.109.163 port 58554 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:28:43.390979 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:28:43.405667 systemd[1]: Started session-6.scope. Jul 16 12:28:43.406053 systemd-logind[1280]: New session 6 of user core. Jul 16 12:28:44.210652 sshd[3471]: pam_unix(sshd:session): session closed for user core Jul 16 12:28:44.214821 systemd[1]: sshd@6-10.230.12.42:22-147.75.109.163:58554.service: Deactivated successfully. Jul 16 12:28:44.216320 systemd[1]: session-6.scope: Deactivated successfully. Jul 16 12:28:44.216939 systemd-logind[1280]: Session 6 logged out. Waiting for processes to exit. Jul 16 12:28:44.218334 systemd-logind[1280]: Removed session 6. Jul 16 12:28:49.383590 systemd[1]: Started sshd@7-10.230.12.42:22-147.75.109.163:49096.service. Jul 16 12:28:50.358770 sshd[3485]: Accepted publickey for core from 147.75.109.163 port 49096 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:28:50.363141 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:28:50.371687 systemd-logind[1280]: New session 7 of user core. Jul 16 12:28:50.372419 systemd[1]: Started session-7.scope. Jul 16 12:28:51.194187 sshd[3485]: pam_unix(sshd:session): session closed for user core Jul 16 12:28:51.200056 systemd-logind[1280]: Session 7 logged out. Waiting for processes to exit. Jul 16 12:28:51.200447 systemd[1]: sshd@7-10.230.12.42:22-147.75.109.163:49096.service: Deactivated successfully. Jul 16 12:28:51.201724 systemd[1]: session-7.scope: Deactivated successfully. Jul 16 12:28:51.203330 systemd-logind[1280]: Removed session 7. Jul 16 12:28:56.324969 systemd[1]: Started sshd@8-10.230.12.42:22-147.75.109.163:49102.service. 
Jul 16 12:28:57.214899 sshd[3498]: Accepted publickey for core from 147.75.109.163 port 49102 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:28:57.216858 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:28:57.224336 systemd[1]: Started session-8.scope. Jul 16 12:28:57.224847 systemd-logind[1280]: New session 8 of user core. Jul 16 12:28:57.994427 sshd[3498]: pam_unix(sshd:session): session closed for user core Jul 16 12:28:58.005254 systemd[1]: sshd@8-10.230.12.42:22-147.75.109.163:49102.service: Deactivated successfully. Jul 16 12:28:58.006982 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 12:28:58.007027 systemd-logind[1280]: Session 8 logged out. Waiting for processes to exit. Jul 16 12:28:58.009313 systemd-logind[1280]: Removed session 8. Jul 16 12:29:03.138697 systemd[1]: Started sshd@9-10.230.12.42:22-147.75.109.163:44264.service. Jul 16 12:29:04.027942 sshd[3513]: Accepted publickey for core from 147.75.109.163 port 44264 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:04.030338 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:04.038712 systemd-logind[1280]: New session 9 of user core. Jul 16 12:29:04.039767 systemd[1]: Started session-9.scope. Jul 16 12:29:04.762067 sshd[3513]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:04.767358 systemd[1]: sshd@9-10.230.12.42:22-147.75.109.163:44264.service: Deactivated successfully. Jul 16 12:29:04.769479 systemd-logind[1280]: Session 9 logged out. Waiting for processes to exit. Jul 16 12:29:04.770018 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 12:29:04.773286 systemd-logind[1280]: Removed session 9. Jul 16 12:29:04.907820 systemd[1]: Started sshd@10-10.230.12.42:22-147.75.109.163:44270.service. 
Jul 16 12:29:05.791459 sshd[3527]: Accepted publickey for core from 147.75.109.163 port 44270 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:05.794530 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:05.801789 systemd-logind[1280]: New session 10 of user core. Jul 16 12:29:05.803187 systemd[1]: Started session-10.scope. Jul 16 12:29:06.623661 sshd[3527]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:06.628587 systemd[1]: sshd@10-10.230.12.42:22-147.75.109.163:44270.service: Deactivated successfully. Jul 16 12:29:06.630164 systemd[1]: session-10.scope: Deactivated successfully. Jul 16 12:29:06.631277 systemd-logind[1280]: Session 10 logged out. Waiting for processes to exit. Jul 16 12:29:06.633106 systemd-logind[1280]: Removed session 10. Jul 16 12:29:06.769485 systemd[1]: Started sshd@11-10.230.12.42:22-147.75.109.163:44274.service. Jul 16 12:29:07.669665 sshd[3538]: Accepted publickey for core from 147.75.109.163 port 44274 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:07.672967 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:07.680640 systemd-logind[1280]: New session 11 of user core. Jul 16 12:29:07.681792 systemd[1]: Started session-11.scope. Jul 16 12:29:08.407215 sshd[3538]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:08.412330 systemd-logind[1280]: Session 11 logged out. Waiting for processes to exit. Jul 16 12:29:08.412985 systemd[1]: sshd@11-10.230.12.42:22-147.75.109.163:44274.service: Deactivated successfully. Jul 16 12:29:08.414097 systemd[1]: session-11.scope: Deactivated successfully. Jul 16 12:29:08.415182 systemd-logind[1280]: Removed session 11. Jul 16 12:29:13.582736 systemd[1]: Started sshd@12-10.230.12.42:22-147.75.109.163:34796.service. 
Jul 16 12:29:14.552895 sshd[3553]: Accepted publickey for core from 147.75.109.163 port 34796 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:14.554907 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:14.562493 systemd[1]: Started session-12.scope. Jul 16 12:29:14.562854 systemd-logind[1280]: New session 12 of user core. Jul 16 12:29:15.321076 sshd[3553]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:15.326145 systemd[1]: sshd@12-10.230.12.42:22-147.75.109.163:34796.service: Deactivated successfully. Jul 16 12:29:15.327994 systemd[1]: session-12.scope: Deactivated successfully. Jul 16 12:29:15.328465 systemd-logind[1280]: Session 12 logged out. Waiting for processes to exit. Jul 16 12:29:15.329715 systemd-logind[1280]: Removed session 12. Jul 16 12:29:15.455027 systemd[1]: Started sshd@13-10.230.12.42:22-147.75.109.163:34810.service. Jul 16 12:29:16.348756 sshd[3566]: Accepted publickey for core from 147.75.109.163 port 34810 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:16.350504 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:16.358912 systemd[1]: Started session-13.scope. Jul 16 12:29:16.359225 systemd-logind[1280]: New session 13 of user core. Jul 16 12:29:17.364404 sshd[3566]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:17.371065 systemd[1]: sshd@13-10.230.12.42:22-147.75.109.163:34810.service: Deactivated successfully. Jul 16 12:29:17.373040 systemd[1]: session-13.scope: Deactivated successfully. Jul 16 12:29:17.373727 systemd-logind[1280]: Session 13 logged out. Waiting for processes to exit. Jul 16 12:29:17.375202 systemd-logind[1280]: Removed session 13. Jul 16 12:29:17.510105 systemd[1]: Started sshd@14-10.230.12.42:22-147.75.109.163:34818.service. 
Jul 16 12:29:18.407645 sshd[3577]: Accepted publickey for core from 147.75.109.163 port 34818 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:18.409675 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:18.418515 systemd-logind[1280]: New session 14 of user core. Jul 16 12:29:18.419299 systemd[1]: Started session-14.scope. Jul 16 12:29:21.475346 sshd[3577]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:21.483757 systemd[1]: sshd@14-10.230.12.42:22-147.75.109.163:34818.service: Deactivated successfully. Jul 16 12:29:21.485510 systemd[1]: session-14.scope: Deactivated successfully. Jul 16 12:29:21.486125 systemd-logind[1280]: Session 14 logged out. Waiting for processes to exit. Jul 16 12:29:21.487495 systemd-logind[1280]: Removed session 14. Jul 16 12:29:21.618618 systemd[1]: Started sshd@15-10.230.12.42:22-147.75.109.163:36746.service. Jul 16 12:29:22.507564 sshd[3595]: Accepted publickey for core from 147.75.109.163 port 36746 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:22.510687 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:22.517794 systemd-logind[1280]: New session 15 of user core. Jul 16 12:29:22.518792 systemd[1]: Started session-15.scope. Jul 16 12:29:23.553203 sshd[3595]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:23.558060 systemd[1]: sshd@15-10.230.12.42:22-147.75.109.163:36746.service: Deactivated successfully. Jul 16 12:29:23.559829 systemd-logind[1280]: Session 15 logged out. Waiting for processes to exit. Jul 16 12:29:23.559903 systemd[1]: session-15.scope: Deactivated successfully. Jul 16 12:29:23.562156 systemd-logind[1280]: Removed session 15. Jul 16 12:29:23.697010 systemd[1]: Started sshd@16-10.230.12.42:22-147.75.109.163:36758.service. 
Jul 16 12:29:24.587926 sshd[3606]: Accepted publickey for core from 147.75.109.163 port 36758 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:24.591578 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:24.600625 systemd[1]: Started session-16.scope. Jul 16 12:29:24.602104 systemd-logind[1280]: New session 16 of user core. Jul 16 12:29:25.323095 sshd[3606]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:25.326984 systemd-logind[1280]: Session 16 logged out. Waiting for processes to exit. Jul 16 12:29:25.328075 systemd[1]: sshd@16-10.230.12.42:22-147.75.109.163:36758.service: Deactivated successfully. Jul 16 12:29:25.329341 systemd[1]: session-16.scope: Deactivated successfully. Jul 16 12:29:25.330808 systemd-logind[1280]: Removed session 16. Jul 16 12:29:30.470138 systemd[1]: Started sshd@17-10.230.12.42:22-147.75.109.163:38574.service. Jul 16 12:29:31.361939 sshd[3622]: Accepted publickey for core from 147.75.109.163 port 38574 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:31.364055 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:31.371696 systemd[1]: Started session-17.scope. Jul 16 12:29:31.372025 systemd-logind[1280]: New session 17 of user core. Jul 16 12:29:32.067501 sshd[3622]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:32.071624 systemd-logind[1280]: Session 17 logged out. Waiting for processes to exit. Jul 16 12:29:32.072638 systemd[1]: sshd@17-10.230.12.42:22-147.75.109.163:38574.service: Deactivated successfully. Jul 16 12:29:32.074293 systemd[1]: session-17.scope: Deactivated successfully. Jul 16 12:29:32.075381 systemd-logind[1280]: Removed session 17. Jul 16 12:29:37.213993 systemd[1]: Started sshd@18-10.230.12.42:22-147.75.109.163:38586.service. 
Jul 16 12:29:38.100512 sshd[3637]: Accepted publickey for core from 147.75.109.163 port 38586 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:38.103254 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:38.110628 systemd[1]: Started session-18.scope. Jul 16 12:29:38.110942 systemd-logind[1280]: New session 18 of user core. Jul 16 12:29:39.142317 sshd[3637]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:39.146878 systemd[1]: sshd@18-10.230.12.42:22-147.75.109.163:38586.service: Deactivated successfully. Jul 16 12:29:39.149334 systemd[1]: session-18.scope: Deactivated successfully. Jul 16 12:29:39.150165 systemd-logind[1280]: Session 18 logged out. Waiting for processes to exit. Jul 16 12:29:39.152197 systemd-logind[1280]: Removed session 18. Jul 16 12:29:44.289690 systemd[1]: Started sshd@19-10.230.12.42:22-147.75.109.163:53240.service. Jul 16 12:29:45.184947 sshd[3652]: Accepted publickey for core from 147.75.109.163 port 53240 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:45.186563 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:45.197896 systemd-logind[1280]: New session 19 of user core. Jul 16 12:29:45.198086 systemd[1]: Started session-19.scope. Jul 16 12:29:45.901564 sshd[3652]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:45.906220 systemd[1]: sshd@19-10.230.12.42:22-147.75.109.163:53240.service: Deactivated successfully. Jul 16 12:29:45.907869 systemd-logind[1280]: Session 19 logged out. Waiting for processes to exit. Jul 16 12:29:45.907966 systemd[1]: session-19.scope: Deactivated successfully. Jul 16 12:29:45.909725 systemd-logind[1280]: Removed session 19. Jul 16 12:29:46.047211 systemd[1]: Started sshd@20-10.230.12.42:22-147.75.109.163:53250.service. 
Jul 16 12:29:46.932519 sshd[3665]: Accepted publickey for core from 147.75.109.163 port 53250 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:46.935101 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:46.943494 systemd-logind[1280]: New session 20 of user core. Jul 16 12:29:46.944352 systemd[1]: Started session-20.scope. Jul 16 12:29:49.128282 env[1299]: time="2025-07-16T12:29:49.128182441Z" level=info msg="StopContainer for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" with timeout 30 (s)" Jul 16 12:29:49.130887 env[1299]: time="2025-07-16T12:29:49.130847684Z" level=info msg="Stop container \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" with signal terminated" Jul 16 12:29:49.212575 env[1299]: time="2025-07-16T12:29:49.210951404Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 12:29:49.219289 env[1299]: time="2025-07-16T12:29:49.219234640Z" level=info msg="StopContainer for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" with timeout 2 (s)" Jul 16 12:29:49.220138 env[1299]: time="2025-07-16T12:29:49.220101809Z" level=info msg="Stop container \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" with signal terminated" Jul 16 12:29:49.240307 systemd-networkd[1070]: lxc_health: Link DOWN Jul 16 12:29:49.240322 systemd-networkd[1070]: lxc_health: Lost carrier Jul 16 12:29:49.240709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647-rootfs.mount: Deactivated successfully. 
Jul 16 12:29:49.269224 env[1299]: time="2025-07-16T12:29:49.269139627Z" level=info msg="shim disconnected" id=3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647
Jul 16 12:29:49.269561 env[1299]: time="2025-07-16T12:29:49.269501152Z" level=warning msg="cleaning up after shim disconnected" id=3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647 namespace=k8s.io
Jul 16 12:29:49.270032 env[1299]: time="2025-07-16T12:29:49.269966089Z" level=info msg="cleaning up dead shim"
Jul 16 12:29:49.327220 env[1299]: time="2025-07-16T12:29:49.327152347Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3720 runtime=io.containerd.runc.v2\n"
Jul 16 12:29:49.330355 env[1299]: time="2025-07-16T12:29:49.330305601Z" level=info msg="StopContainer for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" returns successfully"
Jul 16 12:29:49.332327 env[1299]: time="2025-07-16T12:29:49.332268740Z" level=info msg="StopPodSandbox for \"ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706\""
Jul 16 12:29:49.338133 env[1299]: time="2025-07-16T12:29:49.332390253Z" level=info msg="Container to stop \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 16 12:29:49.335887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706-shm.mount: Deactivated successfully.
Jul 16 12:29:49.350205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649-rootfs.mount: Deactivated successfully.
Jul 16 12:29:49.358995 env[1299]: time="2025-07-16T12:29:49.358937053Z" level=info msg="shim disconnected" id=8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649
Jul 16 12:29:49.359356 env[1299]: time="2025-07-16T12:29:49.359312543Z" level=warning msg="cleaning up after shim disconnected" id=8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649 namespace=k8s.io
Jul 16 12:29:49.359534 env[1299]: time="2025-07-16T12:29:49.359503547Z" level=info msg="cleaning up dead shim"
Jul 16 12:29:49.383103 env[1299]: time="2025-07-16T12:29:49.382921198Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3754 runtime=io.containerd.runc.v2\n"
Jul 16 12:29:49.389290 env[1299]: time="2025-07-16T12:29:49.389247706Z" level=info msg="StopContainer for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" returns successfully"
Jul 16 12:29:49.390183 env[1299]: time="2025-07-16T12:29:49.390146198Z" level=info msg="StopPodSandbox for \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\""
Jul 16 12:29:49.390478 env[1299]: time="2025-07-16T12:29:49.390438080Z" level=info msg="Container to stop \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 16 12:29:49.390686 env[1299]: time="2025-07-16T12:29:49.390644573Z" level=info msg="Container to stop \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 16 12:29:49.390907 env[1299]: time="2025-07-16T12:29:49.390860168Z" level=info msg="Container to stop \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 16 12:29:49.391140 env[1299]: time="2025-07-16T12:29:49.391061755Z" level=info msg="Container to stop \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 16 12:29:49.391305 env[1299]: time="2025-07-16T12:29:49.391271231Z" level=info msg="Container to stop \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 16 12:29:49.394543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c-shm.mount: Deactivated successfully.
Jul 16 12:29:49.412222 env[1299]: time="2025-07-16T12:29:49.412166460Z" level=info msg="shim disconnected" id=ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706
Jul 16 12:29:49.412713 env[1299]: time="2025-07-16T12:29:49.412681275Z" level=warning msg="cleaning up after shim disconnected" id=ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706 namespace=k8s.io
Jul 16 12:29:49.413107 env[1299]: time="2025-07-16T12:29:49.412842447Z" level=info msg="cleaning up dead shim"
Jul 16 12:29:49.428919 env[1299]: time="2025-07-16T12:29:49.428849174Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3787 runtime=io.containerd.runc.v2\n"
Jul 16 12:29:49.430090 env[1299]: time="2025-07-16T12:29:49.430040070Z" level=info msg="TearDown network for sandbox \"ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706\" successfully"
Jul 16 12:29:49.430199 env[1299]: time="2025-07-16T12:29:49.430101806Z" level=info msg="StopPodSandbox for \"ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706\" returns successfully"
Jul 16 12:29:49.454918 env[1299]: time="2025-07-16T12:29:49.454849620Z" level=info msg="shim disconnected" id=12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c
Jul 16 12:29:49.455422 env[1299]: time="2025-07-16T12:29:49.455390870Z" level=warning msg="cleaning up after shim disconnected" id=12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c namespace=k8s.io
Jul 16 12:29:49.455598 env[1299]: time="2025-07-16T12:29:49.455567889Z" level=info msg="cleaning up dead shim"
Jul 16 12:29:49.469861 env[1299]: time="2025-07-16T12:29:49.469795488Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3812 runtime=io.containerd.runc.v2\n"
Jul 16 12:29:49.470699 env[1299]: time="2025-07-16T12:29:49.470641956Z" level=info msg="TearDown network for sandbox \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" successfully"
Jul 16 12:29:49.470699 env[1299]: time="2025-07-16T12:29:49.470684229Z" level=info msg="StopPodSandbox for \"12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c\" returns successfully"
Jul 16 12:29:49.555785 kubelet[2127]: I0716 12:29:49.555708 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnj9j\" (UniqueName: \"kubernetes.io/projected/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-kube-api-access-pnj9j\") pod \"7e2a4cf9-24ff-4256-a773-c4e7de03ed15\" (UID: \"7e2a4cf9-24ff-4256-a773-c4e7de03ed15\") "
Jul 16 12:29:49.556492 kubelet[2127]: I0716 12:29:49.556395 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmtd8\" (UniqueName: \"kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-kube-api-access-pmtd8\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.556492 kubelet[2127]: I0716 12:29:49.556448 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19e1072b-fa63-49e6-8ae3-efe7556ebbab-clustermesh-secrets\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.556645 kubelet[2127]: I0716 12:29:49.556502 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-cgroup\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.556645 kubelet[2127]: I0716 12:29:49.556545 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-bpf-maps\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.556645 kubelet[2127]: I0716 12:29:49.556578 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hostproc\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.556645 kubelet[2127]: I0716 12:29:49.556609 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-run\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.556896 kubelet[2127]: I0716 12:29:49.556648 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-cilium-config-path\") pod \"7e2a4cf9-24ff-4256-a773-c4e7de03ed15\" (UID: \"7e2a4cf9-24ff-4256-a773-c4e7de03ed15\") "
Jul 16 12:29:49.561913 kubelet[2127]: I0716 12:29:49.557302 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.562094 kubelet[2127]: I0716 12:29:49.557303 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.562454 kubelet[2127]: I0716 12:29:49.562424 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.562822 kubelet[2127]: I0716 12:29:49.562783 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hostproc" (OuterVolumeSpecName: "hostproc") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.572633 kubelet[2127]: I0716 12:29:49.572594 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7e2a4cf9-24ff-4256-a773-c4e7de03ed15" (UID: "7e2a4cf9-24ff-4256-a773-c4e7de03ed15"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 16 12:29:49.574225 kubelet[2127]: I0716 12:29:49.574187 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e1072b-fa63-49e6-8ae3-efe7556ebbab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 16 12:29:49.574781 kubelet[2127]: I0716 12:29:49.574727 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-kube-api-access-pmtd8" (OuterVolumeSpecName: "kube-api-access-pmtd8") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "kube-api-access-pmtd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 16 12:29:49.578388 kubelet[2127]: I0716 12:29:49.578283 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-kube-api-access-pnj9j" (OuterVolumeSpecName: "kube-api-access-pnj9j") pod "7e2a4cf9-24ff-4256-a773-c4e7de03ed15" (UID: "7e2a4cf9-24ff-4256-a773-c4e7de03ed15"). InnerVolumeSpecName "kube-api-access-pnj9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 16 12:29:49.592337 kubelet[2127]: E0716 12:29:49.592205 2127 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 16 12:29:49.658501 kubelet[2127]: I0716 12:29:49.657170 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-lib-modules\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.658501 kubelet[2127]: I0716 12:29:49.657253 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-net\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.658501 kubelet[2127]: I0716 12:29:49.657291 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-xtables-lock\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.658501 kubelet[2127]: I0716 12:29:49.657323 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cni-path\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.658501 kubelet[2127]: I0716 12:29:49.657364 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-config-path\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.658501 kubelet[2127]: I0716 12:29:49.657406 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-etc-cni-netd\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.659092 kubelet[2127]: I0716 12:29:49.657443 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hubble-tls\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.659092 kubelet[2127]: I0716 12:29:49.657470 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-kernel\") pod \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\" (UID: \"19e1072b-fa63-49e6-8ae3-efe7556ebbab\") "
Jul 16 12:29:49.659092 kubelet[2127]: I0716 12:29:49.657963 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cni-path" (OuterVolumeSpecName: "cni-path") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.659092 kubelet[2127]: I0716 12:29:49.658011 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.659092 kubelet[2127]: I0716 12:29:49.658157 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.659355 kubelet[2127]: I0716 12:29:49.658240 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.659355 kubelet[2127]: I0716 12:29:49.658373 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.659355 kubelet[2127]: I0716 12:29:49.658804 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 16 12:29:49.659355 kubelet[2127]: I0716 12:29:49.658890 2127 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnj9j\" (UniqueName: \"kubernetes.io/projected/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-kube-api-access-pnj9j\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659355 kubelet[2127]: I0716 12:29:49.658940 2127 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmtd8\" (UniqueName: \"kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-kube-api-access-pmtd8\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659355 kubelet[2127]: I0716 12:29:49.659005 2127 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19e1072b-fa63-49e6-8ae3-efe7556ebbab-clustermesh-secrets\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659694 kubelet[2127]: I0716 12:29:49.659044 2127 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hostproc\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659694 kubelet[2127]: I0716 12:29:49.659079 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-cgroup\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659694 kubelet[2127]: I0716 12:29:49.659098 2127 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-bpf-maps\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659694 kubelet[2127]: I0716 12:29:49.659115 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e2a4cf9-24ff-4256-a773-c4e7de03ed15-cilium-config-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.659694 kubelet[2127]: I0716 12:29:49.659131 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-run\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.664152 kubelet[2127]: I0716 12:29:49.664111 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 16 12:29:49.665042 kubelet[2127]: I0716 12:29:49.664984 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19e1072b-fa63-49e6-8ae3-efe7556ebbab" (UID: "19e1072b-fa63-49e6-8ae3-efe7556ebbab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760436 2127 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19e1072b-fa63-49e6-8ae3-efe7556ebbab-hubble-tls\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760484 2127 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-kernel\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760511 2127 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-host-proc-sys-net\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760527 2127 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-lib-modules\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760546 2127 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-xtables-lock\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760561 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cilium-config-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760575 2127 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-etc-cni-netd\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.760952 kubelet[2127]: I0716 12:29:49.760592 2127 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19e1072b-fa63-49e6-8ae3-efe7556ebbab-cni-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\""
Jul 16 12:29:49.857980 kubelet[2127]: I0716 12:29:49.857932 2127 scope.go:117] "RemoveContainer" containerID="3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647"
Jul 16 12:29:49.861904 env[1299]: time="2025-07-16T12:29:49.861854971Z" level=info msg="RemoveContainer for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\""
Jul 16 12:29:49.866156 env[1299]: time="2025-07-16T12:29:49.866115086Z" level=info msg="RemoveContainer for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" returns successfully"
Jul 16 12:29:49.867979 kubelet[2127]: I0716 12:29:49.867931 2127 scope.go:117] "RemoveContainer" containerID="3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647"
Jul 16 12:29:49.871288 env[1299]: time="2025-07-16T12:29:49.870781130Z" level=error msg="ContainerStatus for \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\": not found"
Jul 16 12:29:49.872039 kubelet[2127]: E0716 12:29:49.871945 2127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\": not found" containerID="3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647"
Jul 16 12:29:49.872166 kubelet[2127]: I0716 12:29:49.872002 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647"} err="failed to get container status \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ee6e560ef835c847f735a5e36333330d865b3bc06f0ab352a17f824e4af0647\": not found"
Jul 16 12:29:49.872166 kubelet[2127]: I0716 12:29:49.872163 2127 scope.go:117] "RemoveContainer" containerID="8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649"
Jul 16 12:29:49.879571 env[1299]: time="2025-07-16T12:29:49.878109330Z" level=info msg="RemoveContainer for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\""
Jul 16 12:29:49.885014 env[1299]: time="2025-07-16T12:29:49.884963294Z" level=info msg="RemoveContainer for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" returns successfully"
Jul 16 12:29:49.885388 kubelet[2127]: I0716 12:29:49.885322 2127 scope.go:117] "RemoveContainer" containerID="ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2"
Jul 16 12:29:49.888946 env[1299]: time="2025-07-16T12:29:49.888902524Z" level=info msg="RemoveContainer for \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\""
Jul 16 12:29:49.893684 env[1299]: time="2025-07-16T12:29:49.893651205Z" level=info msg="RemoveContainer for \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\" returns successfully"
Jul 16 12:29:49.894979 kubelet[2127]: I0716 12:29:49.894478 2127 scope.go:117] "RemoveContainer" containerID="88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690"
Jul 16 12:29:49.898129 env[1299]: time="2025-07-16T12:29:49.898089527Z" level=info msg="RemoveContainer for \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\""
Jul 16 12:29:49.902740 env[1299]: time="2025-07-16T12:29:49.902677706Z" level=info msg="RemoveContainer for \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\" returns successfully"
Jul 16 12:29:49.903463 kubelet[2127]: I0716 12:29:49.903297 2127 scope.go:117] "RemoveContainer" containerID="06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf"
Jul 16 12:29:49.908360 env[1299]: time="2025-07-16T12:29:49.908304700Z" level=info msg="RemoveContainer for \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\""
Jul 16 12:29:49.916090 env[1299]: time="2025-07-16T12:29:49.914602712Z" level=info msg="RemoveContainer for \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\" returns successfully"
Jul 16 12:29:49.916542 kubelet[2127]: I0716 12:29:49.916397 2127 scope.go:117] "RemoveContainer" containerID="e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08"
Jul 16 12:29:49.918615 env[1299]: time="2025-07-16T12:29:49.918547266Z" level=info msg="RemoveContainer for \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\""
Jul 16 12:29:49.922366 env[1299]: time="2025-07-16T12:29:49.922312509Z" level=info msg="RemoveContainer for \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\" returns successfully"
Jul 16 12:29:49.922633 kubelet[2127]: I0716 12:29:49.922590 2127 scope.go:117] "RemoveContainer" containerID="8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649"
Jul 16 12:29:49.922998 env[1299]: time="2025-07-16T12:29:49.922923328Z" level=error msg="ContainerStatus for \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\": not found"
Jul 16 12:29:49.923423 kubelet[2127]: E0716 12:29:49.923394 2127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\": not found" containerID="8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649"
Jul 16 12:29:49.923607 kubelet[2127]: I0716 12:29:49.923568 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649"} err="failed to get container status \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\": rpc error: code = NotFound desc = an error occurred when try to find container \"8370a0cbaca1aae21785f83a4120f74660e87b3f008e1c8906d5a021e228c649\": not found"
Jul 16 12:29:49.923796 kubelet[2127]: I0716 12:29:49.923747 2127 scope.go:117] "RemoveContainer" containerID="ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2"
Jul 16 12:29:49.924341 env[1299]: time="2025-07-16T12:29:49.924244068Z" level=error msg="ContainerStatus for \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\": not found"
Jul 16 12:29:49.924599 kubelet[2127]: E0716 12:29:49.924565 2127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\": not found" containerID="ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2"
Jul 16 12:29:49.924710 kubelet[2127]: I0716 12:29:49.924605 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2"} err="failed to get container status \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac80d2262f7dcbddd8fde8cb6718d726626a185e195c37c804689da793ef3bd2\": not found"
Jul 16 12:29:49.924710 kubelet[2127]: I0716 12:29:49.924630 2127 scope.go:117] "RemoveContainer" containerID="88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690"
Jul 16 12:29:49.925126 env[1299]: time="2025-07-16T12:29:49.925048025Z" level=error msg="ContainerStatus for \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\": not found"
Jul 16 12:29:49.925477 kubelet[2127]: E0716 12:29:49.925448 2127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\": not found" containerID="88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690"
Jul 16 12:29:49.925563 kubelet[2127]: I0716 12:29:49.925480 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690"} err="failed to get container status \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\": rpc error: code = NotFound desc = an error occurred when try to find container \"88f4cb85eb8ab1cefdbade053610af54940308728f504a075fd6607e3c6a9690\": not found"
Jul 16 12:29:49.925563 kubelet[2127]: I0716 12:29:49.925502 2127 scope.go:117] "RemoveContainer" containerID="06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf"
Jul 16 12:29:49.925920 env[1299]: time="2025-07-16T12:29:49.925809480Z" level=error msg="ContainerStatus for \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\": not found"
Jul 16 12:29:49.926137 kubelet[2127]: E0716 12:29:49.926106 2127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\": not found" containerID="06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf"
Jul 16 12:29:49.926235 kubelet[2127]: I0716 12:29:49.926143 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf"} err="failed to get container status \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\": rpc error: code = NotFound desc = an error occurred when try to find container \"06782102e7d18cbce485d0802bf761a0882aef11298c33a77c065e540cca5abf\": not found"
Jul 16 12:29:49.926235 kubelet[2127]: I0716 12:29:49.926169 2127 scope.go:117] "RemoveContainer" containerID="e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08"
Jul 16 12:29:49.926583 env[1299]: time="2025-07-16T12:29:49.926523291Z" level=error msg="ContainerStatus for \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\": not found"
Jul 16 12:29:49.926907 kubelet[2127]: E0716 12:29:49.926876 2127 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\": not found" containerID="e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08"
Jul 16 12:29:49.927117 kubelet[2127]: I0716 12:29:49.927072 2127 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08"}
err="failed to get container status \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"e52a4031b68142807d340b047081ea613d691d3a9a3b4c1ad4661151ca196a08\": not found" Jul 16 12:29:50.153589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12af3456a32f23bc3d011e473f9dc5efe942efb737f1a0ee8d947b42fcffc52c-rootfs.mount: Deactivated successfully. Jul 16 12:29:50.153910 systemd[1]: var-lib-kubelet-pods-19e1072b\x2dfa63\x2d49e6\x2d8ae3\x2defe7556ebbab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmtd8.mount: Deactivated successfully. Jul 16 12:29:50.154105 systemd[1]: var-lib-kubelet-pods-19e1072b\x2dfa63\x2d49e6\x2d8ae3\x2defe7556ebbab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 16 12:29:50.154275 systemd[1]: var-lib-kubelet-pods-19e1072b\x2dfa63\x2d49e6\x2d8ae3\x2defe7556ebbab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 16 12:29:50.154509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea3a8f8d1c5742d2d9435c1e176d57beeee0831627d181d11153a6a989d60706-rootfs.mount: Deactivated successfully. Jul 16 12:29:50.154674 systemd[1]: var-lib-kubelet-pods-7e2a4cf9\x2d24ff\x2d4256\x2da773\x2dc4e7de03ed15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnj9j.mount: Deactivated successfully. 
Jul 16 12:29:50.388177 kubelet[2127]: I0716 12:29:50.387910 2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" path="/var/lib/kubelet/pods/19e1072b-fa63-49e6-8ae3-efe7556ebbab/volumes" Jul 16 12:29:50.391144 kubelet[2127]: I0716 12:29:50.391096 2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2a4cf9-24ff-4256-a773-c4e7de03ed15" path="/var/lib/kubelet/pods/7e2a4cf9-24ff-4256-a773-c4e7de03ed15/volumes" Jul 16 12:29:51.158764 sshd[3665]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:51.163258 systemd[1]: sshd@20-10.230.12.42:22-147.75.109.163:53250.service: Deactivated successfully. Jul 16 12:29:51.165507 systemd-logind[1280]: Session 20 logged out. Waiting for processes to exit. Jul 16 12:29:51.165515 systemd[1]: session-20.scope: Deactivated successfully. Jul 16 12:29:51.170942 systemd-logind[1280]: Removed session 20. Jul 16 12:29:51.305800 systemd[1]: Started sshd@21-10.230.12.42:22-147.75.109.163:44976.service. Jul 16 12:29:52.209947 sshd[3833]: Accepted publickey for core from 147.75.109.163 port 44976 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:52.212448 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:52.221868 systemd-logind[1280]: New session 21 of user core. Jul 16 12:29:52.221898 systemd[1]: Started session-21.scope. 
Jul 16 12:29:53.888309 kubelet[2127]: E0716 12:29:53.888255 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" containerName="apply-sysctl-overwrites" Jul 16 12:29:53.889155 kubelet[2127]: E0716 12:29:53.889129 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" containerName="clean-cilium-state" Jul 16 12:29:53.889284 kubelet[2127]: E0716 12:29:53.889259 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" containerName="mount-bpf-fs" Jul 16 12:29:53.889421 kubelet[2127]: E0716 12:29:53.889396 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" containerName="cilium-agent" Jul 16 12:29:53.889581 kubelet[2127]: E0716 12:29:53.889556 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e2a4cf9-24ff-4256-a773-c4e7de03ed15" containerName="cilium-operator" Jul 16 12:29:53.889702 kubelet[2127]: E0716 12:29:53.889678 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" containerName="mount-cgroup" Jul 16 12:29:53.889935 kubelet[2127]: I0716 12:29:53.889890 2127 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2a4cf9-24ff-4256-a773-c4e7de03ed15" containerName="cilium-operator" Jul 16 12:29:53.890074 kubelet[2127]: I0716 12:29:53.890049 2127 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e1072b-fa63-49e6-8ae3-efe7556ebbab" containerName="cilium-agent" Jul 16 12:29:53.975080 sshd[3833]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:53.979191 systemd[1]: sshd@21-10.230.12.42:22-147.75.109.163:44976.service: Deactivated successfully. Jul 16 12:29:53.981238 systemd[1]: session-21.scope: Deactivated successfully. Jul 16 12:29:53.981283 systemd-logind[1280]: Session 21 logged out. Waiting for processes to exit. 
Jul 16 12:29:53.983552 systemd-logind[1280]: Removed session 21. Jul 16 12:29:53.986042 kubelet[2127]: I0716 12:29:53.985992 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-clustermesh-secrets\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.986305 kubelet[2127]: I0716 12:29:53.986229 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrj2g\" (UniqueName: \"kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-kube-api-access-vrj2g\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.986620 kubelet[2127]: I0716 12:29:53.986580 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-etc-cni-netd\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.986818 kubelet[2127]: I0716 12:29:53.986791 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-run\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.987070 kubelet[2127]: I0716 12:29:53.986972 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-cgroup\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.987359 kubelet[2127]: I0716 
12:29:53.987273 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-net\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.987604 kubelet[2127]: I0716 12:29:53.987532 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hubble-tls\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.987862 kubelet[2127]: I0716 12:29:53.987835 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hostproc\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.988111 kubelet[2127]: I0716 12:29:53.988083 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-config-path\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.988295 kubelet[2127]: I0716 12:29:53.988260 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-lib-modules\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.988507 kubelet[2127]: I0716 12:29:53.988481 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cni-path\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.988710 kubelet[2127]: I0716 12:29:53.988675 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-xtables-lock\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.988941 kubelet[2127]: I0716 12:29:53.988914 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-ipsec-secrets\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.989133 kubelet[2127]: I0716 12:29:53.989098 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-kernel\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:53.989360 kubelet[2127]: I0716 12:29:53.989332 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-bpf-maps\") pod \"cilium-tfsbx\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " pod="kube-system/cilium-tfsbx" Jul 16 12:29:54.132524 systemd[1]: Started sshd@22-10.230.12.42:22-147.75.109.163:44988.service. 
Jul 16 12:29:54.208442 env[1299]: time="2025-07-16T12:29:54.208317945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tfsbx,Uid:458d7f25-9393-4b2d-bcda-3a32e05f78ca,Namespace:kube-system,Attempt:0,}" Jul 16 12:29:54.231671 env[1299]: time="2025-07-16T12:29:54.231563011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:29:54.232063 env[1299]: time="2025-07-16T12:29:54.232008721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:29:54.232221 env[1299]: time="2025-07-16T12:29:54.232177195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:29:54.232603 env[1299]: time="2025-07-16T12:29:54.232549315Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec pid=3856 runtime=io.containerd.runc.v2 Jul 16 12:29:54.300186 env[1299]: time="2025-07-16T12:29:54.300117846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tfsbx,Uid:458d7f25-9393-4b2d-bcda-3a32e05f78ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\"" Jul 16 12:29:54.308175 env[1299]: time="2025-07-16T12:29:54.308078864Z" level=info msg="CreateContainer within sandbox \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 16 12:29:54.322954 env[1299]: time="2025-07-16T12:29:54.322884717Z" level=info msg="CreateContainer within sandbox \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132\"" Jul 16 12:29:54.325640 env[1299]: time="2025-07-16T12:29:54.325591816Z" level=info msg="StartContainer for \"881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132\"" Jul 16 12:29:54.420485 env[1299]: time="2025-07-16T12:29:54.420144063Z" level=info msg="StartContainer for \"881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132\" returns successfully" Jul 16 12:29:54.480793 env[1299]: time="2025-07-16T12:29:54.480195977Z" level=info msg="shim disconnected" id=881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132 Jul 16 12:29:54.481238 env[1299]: time="2025-07-16T12:29:54.481205968Z" level=warning msg="cleaning up after shim disconnected" id=881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132 namespace=k8s.io Jul 16 12:29:54.481400 env[1299]: time="2025-07-16T12:29:54.481371534Z" level=info msg="cleaning up dead shim" Jul 16 12:29:54.494365 env[1299]: time="2025-07-16T12:29:54.494274500Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3942 runtime=io.containerd.runc.v2\n" Jul 16 12:29:54.594019 kubelet[2127]: E0716 12:29:54.593940 2127 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 16 12:29:54.915884 env[1299]: time="2025-07-16T12:29:54.914012231Z" level=info msg="CreateContainer within sandbox \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 16 12:29:54.926697 env[1299]: time="2025-07-16T12:29:54.926635424Z" level=info msg="CreateContainer within sandbox \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1\"" Jul 16 12:29:54.928238 env[1299]: time="2025-07-16T12:29:54.928198778Z" level=info msg="StartContainer for \"3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1\"" Jul 16 12:29:55.003697 env[1299]: time="2025-07-16T12:29:55.003629210Z" level=info msg="StartContainer for \"3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1\" returns successfully" Jul 16 12:29:55.035720 sshd[3848]: Accepted publickey for core from 147.75.109.163 port 44988 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:55.036821 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:55.045788 systemd[1]: Started session-22.scope. Jul 16 12:29:55.046907 systemd-logind[1280]: New session 22 of user core. Jul 16 12:29:55.051895 env[1299]: time="2025-07-16T12:29:55.051726672Z" level=info msg="shim disconnected" id=3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1 Jul 16 12:29:55.052210 env[1299]: time="2025-07-16T12:29:55.052166728Z" level=warning msg="cleaning up after shim disconnected" id=3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1 namespace=k8s.io Jul 16 12:29:55.052366 env[1299]: time="2025-07-16T12:29:55.052336191Z" level=info msg="cleaning up dead shim" Jul 16 12:29:55.070383 env[1299]: time="2025-07-16T12:29:55.070318590Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4006 runtime=io.containerd.runc.v2\n" Jul 16 12:29:55.792142 sshd[3848]: pam_unix(sshd:session): session closed for user core Jul 16 12:29:55.798897 systemd[1]: sshd@22-10.230.12.42:22-147.75.109.163:44988.service: Deactivated successfully. Jul 16 12:29:55.800890 systemd[1]: session-22.scope: Deactivated successfully. Jul 16 12:29:55.800931 systemd-logind[1280]: Session 22 logged out. Waiting for processes to exit. 
Jul 16 12:29:55.802732 systemd-logind[1280]: Removed session 22. Jul 16 12:29:55.898920 env[1299]: time="2025-07-16T12:29:55.893928314Z" level=info msg="StopPodSandbox for \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\"" Jul 16 12:29:55.898920 env[1299]: time="2025-07-16T12:29:55.894046612Z" level=info msg="Container to stop \"881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 12:29:55.898920 env[1299]: time="2025-07-16T12:29:55.894076674Z" level=info msg="Container to stop \"3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 16 12:29:55.897473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec-shm.mount: Deactivated successfully. Jul 16 12:29:55.938581 systemd[1]: Started sshd@23-10.230.12.42:22-147.75.109.163:44990.service. Jul 16 12:29:55.957243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec-rootfs.mount: Deactivated successfully. 
Jul 16 12:29:55.963667 env[1299]: time="2025-07-16T12:29:55.963604118Z" level=info msg="shim disconnected" id=1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec Jul 16 12:29:55.964056 env[1299]: time="2025-07-16T12:29:55.964025350Z" level=warning msg="cleaning up after shim disconnected" id=1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec namespace=k8s.io Jul 16 12:29:55.964212 env[1299]: time="2025-07-16T12:29:55.964183180Z" level=info msg="cleaning up dead shim" Jul 16 12:29:55.977708 env[1299]: time="2025-07-16T12:29:55.977644288Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4051 runtime=io.containerd.runc.v2\n" Jul 16 12:29:55.978523 env[1299]: time="2025-07-16T12:29:55.978480680Z" level=info msg="TearDown network for sandbox \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\" successfully" Jul 16 12:29:55.978723 env[1299]: time="2025-07-16T12:29:55.978688587Z" level=info msg="StopPodSandbox for \"1747628f4cc7902bd36d7b415391612e1884fb7ad2043eb73d9637e255bf3cec\" returns successfully" Jul 16 12:29:56.104838 kubelet[2127]: I0716 12:29:56.103065 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cni-path\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.104838 kubelet[2127]: I0716 12:29:56.103148 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-xtables-lock\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.104838 kubelet[2127]: I0716 12:29:56.103194 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-clustermesh-secrets\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.104838 kubelet[2127]: I0716 12:29:56.103243 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-run\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.104838 kubelet[2127]: I0716 12:29:56.103270 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-lib-modules\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.104838 kubelet[2127]: I0716 12:29:56.103327 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hubble-tls\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.105861 kubelet[2127]: I0716 12:29:56.103358 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-cgroup\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.105861 kubelet[2127]: I0716 12:29:56.103415 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-bpf-maps\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.105861 kubelet[2127]: I0716 12:29:56.103444 2127 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-net\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.105861 kubelet[2127]: I0716 12:29:56.103583 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hostproc\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.105861 kubelet[2127]: I0716 12:29:56.103640 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.105861 kubelet[2127]: I0716 12:29:56.103717 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-etc-cni-netd\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.108148 kubelet[2127]: I0716 12:29:56.103798 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108148 kubelet[2127]: I0716 12:29:56.104857 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-ipsec-secrets\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.108148 kubelet[2127]: I0716 12:29:56.104897 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-kernel\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.108148 kubelet[2127]: I0716 12:29:56.105045 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrj2g\" (UniqueName: \"kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-kube-api-access-vrj2g\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.108148 kubelet[2127]: I0716 12:29:56.105105 2127 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-config-path\") pod \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\" (UID: \"458d7f25-9393-4b2d-bcda-3a32e05f78ca\") " Jul 16 12:29:56.108148 kubelet[2127]: I0716 12:29:56.105186 2127 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-xtables-lock\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.108506 kubelet[2127]: I0716 12:29:56.106444 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cni-path" 
(OuterVolumeSpecName: "cni-path") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108506 kubelet[2127]: I0716 12:29:56.106526 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108506 kubelet[2127]: I0716 12:29:56.106905 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108506 kubelet[2127]: I0716 12:29:56.106980 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108506 kubelet[2127]: I0716 12:29:56.107805 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108883 kubelet[2127]: I0716 12:29:56.107855 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.108883 kubelet[2127]: I0716 12:29:56.107889 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.113836 systemd[1]: var-lib-kubelet-pods-458d7f25\x2d9393\x2d4b2d\x2dbcda\x2d3a32e05f78ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 16 12:29:56.116871 kubelet[2127]: I0716 12:29:56.114024 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 12:29:56.116871 kubelet[2127]: I0716 12:29:56.114079 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 16 12:29:56.117733 kubelet[2127]: I0716 12:29:56.117649 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 12:29:56.122837 systemd[1]: var-lib-kubelet-pods-458d7f25\x2d9393\x2d4b2d\x2dbcda\x2d3a32e05f78ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvrj2g.mount: Deactivated successfully. Jul 16 12:29:56.127662 systemd[1]: var-lib-kubelet-pods-458d7f25\x2d9393\x2d4b2d\x2dbcda\x2d3a32e05f78ca-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 16 12:29:56.132794 kubelet[2127]: I0716 12:29:56.131391 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 12:29:56.132794 kubelet[2127]: I0716 12:29:56.131908 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 12:29:56.132193 systemd[1]: var-lib-kubelet-pods-458d7f25\x2d9393\x2d4b2d\x2dbcda\x2d3a32e05f78ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 16 12:29:56.134078 kubelet[2127]: I0716 12:29:56.134038 2127 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-kube-api-access-vrj2g" (OuterVolumeSpecName: "kube-api-access-vrj2g") pod "458d7f25-9393-4b2d-bcda-3a32e05f78ca" (UID: "458d7f25-9393-4b2d-bcda-3a32e05f78ca"). InnerVolumeSpecName "kube-api-access-vrj2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 12:29:56.205849 kubelet[2127]: I0716 12:29:56.205726 2127 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-bpf-maps\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.205849 kubelet[2127]: I0716 12:29:56.205831 2127 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-net\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.205849 kubelet[2127]: I0716 12:29:56.205853 2127 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hostproc\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.205873 2127 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrj2g\" (UniqueName: \"kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-kube-api-access-vrj2g\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.205892 2127 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-etc-cni-netd\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.205912 2127 reconciler_common.go:293] "Volume 
detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-ipsec-secrets\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.205936 2127 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-host-proc-sys-kernel\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.205990 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-config-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.206048 2127 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cni-path\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206328 kubelet[2127]: I0716 12:29:56.206069 2127 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/458d7f25-9393-4b2d-bcda-3a32e05f78ca-clustermesh-secrets\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206785 kubelet[2127]: I0716 12:29:56.206085 2127 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-run\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206785 kubelet[2127]: I0716 12:29:56.206102 2127 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-lib-modules\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206785 kubelet[2127]: I0716 12:29:56.206121 2127 
reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/458d7f25-9393-4b2d-bcda-3a32e05f78ca-cilium-cgroup\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.206785 kubelet[2127]: I0716 12:29:56.206137 2127 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/458d7f25-9393-4b2d-bcda-3a32e05f78ca-hubble-tls\") on node \"srv-j7d31.gb1.brightbox.com\" DevicePath \"\"" Jul 16 12:29:56.842993 sshd[4045]: Accepted publickey for core from 147.75.109.163 port 44990 ssh2: RSA SHA256:Ivm2+8c70H684DujjfFb+2an2jxY3RhHoDsFm0/t2Rg Jul 16 12:29:56.845300 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 16 12:29:56.857035 systemd[1]: Started session-23.scope. Jul 16 12:29:56.857778 systemd-logind[1280]: New session 23 of user core. Jul 16 12:29:56.900249 kubelet[2127]: I0716 12:29:56.900202 2127 scope.go:117] "RemoveContainer" containerID="3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1" Jul 16 12:29:56.903718 env[1299]: time="2025-07-16T12:29:56.903210094Z" level=info msg="RemoveContainer for \"3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1\"" Jul 16 12:29:56.907489 env[1299]: time="2025-07-16T12:29:56.907452473Z" level=info msg="RemoveContainer for \"3b9c35d719386973067ad216ceefba3a9fc6f65abacf0f446b1043e433bcf2d1\" returns successfully" Jul 16 12:29:56.908136 kubelet[2127]: I0716 12:29:56.908081 2127 scope.go:117] "RemoveContainer" containerID="881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132" Jul 16 12:29:56.910042 env[1299]: time="2025-07-16T12:29:56.909991755Z" level=info msg="RemoveContainer for \"881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132\"" Jul 16 12:29:56.914791 env[1299]: time="2025-07-16T12:29:56.914726679Z" level=info msg="RemoveContainer for \"881b4bba24215a7a65a8ed2e8792d0c1120cbcd6e311799d6c322e94e7f48132\" returns 
successfully" Jul 16 12:29:56.970857 kubelet[2127]: E0716 12:29:56.970760 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="458d7f25-9393-4b2d-bcda-3a32e05f78ca" containerName="mount-cgroup" Jul 16 12:29:56.971232 kubelet[2127]: E0716 12:29:56.971197 2127 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="458d7f25-9393-4b2d-bcda-3a32e05f78ca" containerName="apply-sysctl-overwrites" Jul 16 12:29:56.971469 kubelet[2127]: I0716 12:29:56.971421 2127 memory_manager.go:354] "RemoveStaleState removing state" podUID="458d7f25-9393-4b2d-bcda-3a32e05f78ca" containerName="apply-sysctl-overwrites" Jul 16 12:29:56.986701 kubelet[2127]: W0716 12:29:56.985435 2127 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-j7d31.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-j7d31.gb1.brightbox.com' and this object Jul 16 12:29:56.988488 kubelet[2127]: E0716 12:29:56.988440 2127 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-j7d31.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-j7d31.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 16 12:29:56.989906 kubelet[2127]: W0716 12:29:56.989874 2127 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-j7d31.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-j7d31.gb1.brightbox.com' and this object Jul 16 12:29:56.990023 kubelet[2127]: E0716 12:29:56.989953 2127 
reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-j7d31.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-j7d31.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 16 12:29:57.001313 kubelet[2127]: W0716 12:29:57.001231 2127 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:srv-j7d31.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-j7d31.gb1.brightbox.com' and this object Jul 16 12:29:57.001603 kubelet[2127]: E0716 12:29:57.001356 2127 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:srv-j7d31.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-j7d31.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 16 12:29:57.011528 kubelet[2127]: I0716 12:29:57.011450 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-host-proc-sys-kernel\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011528 kubelet[2127]: I0716 12:29:57.011515 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-cilium-run\") pod \"cilium-pj5lv\" 
(UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011805 kubelet[2127]: I0716 12:29:57.011547 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-lib-modules\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011805 kubelet[2127]: I0716 12:29:57.011575 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-cilium-config-path\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011805 kubelet[2127]: I0716 12:29:57.011603 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-hostproc\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011805 kubelet[2127]: I0716 12:29:57.011661 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-clustermesh-secrets\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011805 kubelet[2127]: I0716 12:29:57.011710 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-cilium-cgroup\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.011805 kubelet[2127]: I0716 
12:29:57.011777 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-etc-cni-netd\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012148 kubelet[2127]: I0716 12:29:57.011805 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-host-proc-sys-net\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012148 kubelet[2127]: I0716 12:29:57.011831 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-hubble-tls\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012148 kubelet[2127]: I0716 12:29:57.011865 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-cilium-ipsec-secrets\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012148 kubelet[2127]: I0716 12:29:57.011894 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpv8\" (UniqueName: \"kubernetes.io/projected/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-kube-api-access-tjpv8\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012148 kubelet[2127]: I0716 12:29:57.011922 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-xtables-lock\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012148 kubelet[2127]: I0716 12:29:57.011965 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-bpf-maps\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:57.012470 kubelet[2127]: I0716 12:29:57.011993 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-cni-path\") pod \"cilium-pj5lv\" (UID: \"5a254684-6a54-4b5c-bac0-fbfa4bed8ab9\") " pod="kube-system/cilium-pj5lv" Jul 16 12:29:58.117225 kubelet[2127]: E0716 12:29:58.117122 2127 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 16 12:29:58.117225 kubelet[2127]: E0716 12:29:58.117220 2127 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-pj5lv: failed to sync secret cache: timed out waiting for the condition Jul 16 12:29:58.118151 kubelet[2127]: E0716 12:29:58.117373 2127 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 16 12:29:58.119156 kubelet[2127]: E0716 12:29:58.119023 2127 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-hubble-tls podName:5a254684-6a54-4b5c-bac0-fbfa4bed8ab9 nodeName:}" failed. No retries permitted until 2025-07-16 12:29:58.61733336 +0000 UTC m=+144.537369213 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-hubble-tls") pod "cilium-pj5lv" (UID: "5a254684-6a54-4b5c-bac0-fbfa4bed8ab9") : failed to sync secret cache: timed out waiting for the condition Jul 16 12:29:58.119156 kubelet[2127]: E0716 12:29:58.119075 2127 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-clustermesh-secrets podName:5a254684-6a54-4b5c-bac0-fbfa4bed8ab9 nodeName:}" failed. No retries permitted until 2025-07-16 12:29:58.619060142 +0000 UTC m=+144.539095996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5a254684-6a54-4b5c-bac0-fbfa4bed8ab9-clustermesh-secrets") pod "cilium-pj5lv" (UID: "5a254684-6a54-4b5c-bac0-fbfa4bed8ab9") : failed to sync secret cache: timed out waiting for the condition Jul 16 12:29:58.158872 kubelet[2127]: I0716 12:29:58.156673 2127 setters.go:600] "Node became not ready" node="srv-j7d31.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-16T12:29:58Z","lastTransitionTime":"2025-07-16T12:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 16 12:29:58.389070 kubelet[2127]: I0716 12:29:58.388378 2127 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458d7f25-9393-4b2d-bcda-3a32e05f78ca" path="/var/lib/kubelet/pods/458d7f25-9393-4b2d-bcda-3a32e05f78ca/volumes" Jul 16 12:29:58.790136 env[1299]: time="2025-07-16T12:29:58.789984628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pj5lv,Uid:5a254684-6a54-4b5c-bac0-fbfa4bed8ab9,Namespace:kube-system,Attempt:0,}" Jul 16 12:29:58.818617 env[1299]: time="2025-07-16T12:29:58.818245942Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 16 12:29:58.818617 env[1299]: time="2025-07-16T12:29:58.818382284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 16 12:29:58.818617 env[1299]: time="2025-07-16T12:29:58.818412182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 16 12:29:58.819278 env[1299]: time="2025-07-16T12:29:58.819215452Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d pid=4087 runtime=io.containerd.runc.v2 Jul 16 12:29:58.885255 env[1299]: time="2025-07-16T12:29:58.885179768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pj5lv,Uid:5a254684-6a54-4b5c-bac0-fbfa4bed8ab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\"" Jul 16 12:29:58.892968 env[1299]: time="2025-07-16T12:29:58.892913821Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 16 12:29:58.906122 env[1299]: time="2025-07-16T12:29:58.906071743Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36fc64b19cbe2857738fc0facea1e4cfbea20d8794919fa1ae9fdd622b584435\"" Jul 16 12:29:58.908295 env[1299]: time="2025-07-16T12:29:58.908254588Z" level=info msg="StartContainer for \"36fc64b19cbe2857738fc0facea1e4cfbea20d8794919fa1ae9fdd622b584435\"" Jul 16 12:29:58.989088 env[1299]: time="2025-07-16T12:29:58.989015827Z" level=info msg="StartContainer for 
\"36fc64b19cbe2857738fc0facea1e4cfbea20d8794919fa1ae9fdd622b584435\" returns successfully" Jul 16 12:29:59.026435 env[1299]: time="2025-07-16T12:29:59.026352279Z" level=info msg="shim disconnected" id=36fc64b19cbe2857738fc0facea1e4cfbea20d8794919fa1ae9fdd622b584435 Jul 16 12:29:59.026435 env[1299]: time="2025-07-16T12:29:59.026437014Z" level=warning msg="cleaning up after shim disconnected" id=36fc64b19cbe2857738fc0facea1e4cfbea20d8794919fa1ae9fdd622b584435 namespace=k8s.io Jul 16 12:29:59.026887 env[1299]: time="2025-07-16T12:29:59.026455145Z" level=info msg="cleaning up dead shim" Jul 16 12:29:59.040076 env[1299]: time="2025-07-16T12:29:59.040012301Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:29:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4170 runtime=io.containerd.runc.v2\n" Jul 16 12:29:59.596007 kubelet[2127]: E0716 12:29:59.595919 2127 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 16 12:29:59.632922 systemd[1]: run-containerd-runc-k8s.io-4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d-runc.s1WaFm.mount: Deactivated successfully. 
Jul 16 12:29:59.918838 env[1299]: time="2025-07-16T12:29:59.918770430Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 16 12:29:59.950295 env[1299]: time="2025-07-16T12:29:59.950229238Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"68aee2b7ac569ddfe357d40cec4a7ed034a37af5a38de63b679ead5127821512\"" Jul 16 12:29:59.951527 env[1299]: time="2025-07-16T12:29:59.951491087Z" level=info msg="StartContainer for \"68aee2b7ac569ddfe357d40cec4a7ed034a37af5a38de63b679ead5127821512\"" Jul 16 12:30:00.060672 env[1299]: time="2025-07-16T12:30:00.058977321Z" level=info msg="StartContainer for \"68aee2b7ac569ddfe357d40cec4a7ed034a37af5a38de63b679ead5127821512\" returns successfully" Jul 16 12:30:00.092911 env[1299]: time="2025-07-16T12:30:00.092818800Z" level=info msg="shim disconnected" id=68aee2b7ac569ddfe357d40cec4a7ed034a37af5a38de63b679ead5127821512 Jul 16 12:30:00.092911 env[1299]: time="2025-07-16T12:30:00.092902888Z" level=warning msg="cleaning up after shim disconnected" id=68aee2b7ac569ddfe357d40cec4a7ed034a37af5a38de63b679ead5127821512 namespace=k8s.io Jul 16 12:30:00.092911 env[1299]: time="2025-07-16T12:30:00.092922355Z" level=info msg="cleaning up dead shim" Jul 16 12:30:00.104695 env[1299]: time="2025-07-16T12:30:00.104630562Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:30:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4233 runtime=io.containerd.runc.v2\n" Jul 16 12:30:00.633172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68aee2b7ac569ddfe357d40cec4a7ed034a37af5a38de63b679ead5127821512-rootfs.mount: Deactivated successfully. 
Jul 16 12:30:00.922832 env[1299]: time="2025-07-16T12:30:00.922751559Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 16 12:30:00.942951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771334124.mount: Deactivated successfully. Jul 16 12:30:00.959385 env[1299]: time="2025-07-16T12:30:00.957062747Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bad0fcdbe98be6c61457db9cd33c30b165134a92bc8b8956a99907f68ed3608\"" Jul 16 12:30:00.959775 env[1299]: time="2025-07-16T12:30:00.959722511Z" level=info msg="StartContainer for \"0bad0fcdbe98be6c61457db9cd33c30b165134a92bc8b8956a99907f68ed3608\"" Jul 16 12:30:01.058770 env[1299]: time="2025-07-16T12:30:01.058244256Z" level=info msg="StartContainer for \"0bad0fcdbe98be6c61457db9cd33c30b165134a92bc8b8956a99907f68ed3608\" returns successfully" Jul 16 12:30:01.093134 env[1299]: time="2025-07-16T12:30:01.093055408Z" level=info msg="shim disconnected" id=0bad0fcdbe98be6c61457db9cd33c30b165134a92bc8b8956a99907f68ed3608 Jul 16 12:30:01.093134 env[1299]: time="2025-07-16T12:30:01.093128542Z" level=warning msg="cleaning up after shim disconnected" id=0bad0fcdbe98be6c61457db9cd33c30b165134a92bc8b8956a99907f68ed3608 namespace=k8s.io Jul 16 12:30:01.093458 env[1299]: time="2025-07-16T12:30:01.093147273Z" level=info msg="cleaning up dead shim" Jul 16 12:30:01.105272 env[1299]: time="2025-07-16T12:30:01.105186687Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4291 runtime=io.containerd.runc.v2\n" Jul 16 12:30:01.633612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bad0fcdbe98be6c61457db9cd33c30b165134a92bc8b8956a99907f68ed3608-rootfs.mount: Deactivated 
successfully. Jul 16 12:30:01.930091 env[1299]: time="2025-07-16T12:30:01.930030943Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 16 12:30:01.948867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121853565.mount: Deactivated successfully. Jul 16 12:30:01.960093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249886641.mount: Deactivated successfully. Jul 16 12:30:01.969850 env[1299]: time="2025-07-16T12:30:01.966194143Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"19be794085fd6acaf5fea2e1b8b8caab67c1acff1bc5745d56643431e371e0cb\"" Jul 16 12:30:01.970587 env[1299]: time="2025-07-16T12:30:01.970540595Z" level=info msg="StartContainer for \"19be794085fd6acaf5fea2e1b8b8caab67c1acff1bc5745d56643431e371e0cb\"" Jul 16 12:30:02.049772 env[1299]: time="2025-07-16T12:30:02.049690497Z" level=info msg="StartContainer for \"19be794085fd6acaf5fea2e1b8b8caab67c1acff1bc5745d56643431e371e0cb\" returns successfully" Jul 16 12:30:02.081479 env[1299]: time="2025-07-16T12:30:02.081342816Z" level=info msg="shim disconnected" id=19be794085fd6acaf5fea2e1b8b8caab67c1acff1bc5745d56643431e371e0cb Jul 16 12:30:02.081910 env[1299]: time="2025-07-16T12:30:02.081876372Z" level=warning msg="cleaning up after shim disconnected" id=19be794085fd6acaf5fea2e1b8b8caab67c1acff1bc5745d56643431e371e0cb namespace=k8s.io Jul 16 12:30:02.082032 env[1299]: time="2025-07-16T12:30:02.082003932Z" level=info msg="cleaning up dead shim" Jul 16 12:30:02.093096 env[1299]: time="2025-07-16T12:30:02.093024655Z" level=warning msg="cleanup warnings time=\"2025-07-16T12:30:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4347 runtime=io.containerd.runc.v2\n" Jul 16 12:30:02.933648 
env[1299]: time="2025-07-16T12:30:02.933574869Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 16 12:30:02.971406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431416315.mount: Deactivated successfully. Jul 16 12:30:02.989131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572377850.mount: Deactivated successfully. Jul 16 12:30:02.996718 env[1299]: time="2025-07-16T12:30:02.996528861Z" level=info msg="CreateContainer within sandbox \"4c743eaa67793f90e075fdb6fa7bd1cd5c7fbb3d5414f50cda3dcb58d9bcab0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8\"" Jul 16 12:30:03.000573 env[1299]: time="2025-07-16T12:30:02.998812553Z" level=info msg="StartContainer for \"e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8\"" Jul 16 12:30:03.074930 env[1299]: time="2025-07-16T12:30:03.074861046Z" level=info msg="StartContainer for \"e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8\" returns successfully" Jul 16 12:30:03.861785 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 16 12:30:03.971269 kubelet[2127]: I0716 12:30:03.971144 2127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pj5lv" podStartSLOduration=7.971102545 podStartE2EDuration="7.971102545s" podCreationTimestamp="2025-07-16 12:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 12:30:03.969453459 +0000 UTC m=+149.889489340" watchObservedRunningTime="2025-07-16 12:30:03.971102545 +0000 UTC m=+149.891138398" Jul 16 12:30:05.952386 systemd[1]: run-containerd-runc-k8s.io-e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8-runc.ZCfuOY.mount: Deactivated 
successfully. Jul 16 12:30:07.554045 systemd-networkd[1070]: lxc_health: Link UP Jul 16 12:30:07.563840 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 16 12:30:07.563437 systemd-networkd[1070]: lxc_health: Gained carrier Jul 16 12:30:08.271187 systemd[1]: run-containerd-runc-k8s.io-e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8-runc.R1Muwr.mount: Deactivated successfully. Jul 16 12:30:08.759878 systemd-networkd[1070]: lxc_health: Gained IPv6LL Jul 16 12:30:10.573895 systemd[1]: run-containerd-runc-k8s.io-e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8-runc.Jwhgjk.mount: Deactivated successfully. Jul 16 12:30:12.796787 systemd[1]: run-containerd-runc-k8s.io-e8aa390abf2216b96f80938c539abeb4c78d817e8879231f62a0648709915ca8-runc.hTVfhg.mount: Deactivated successfully. Jul 16 12:30:13.029392 sshd[4045]: pam_unix(sshd:session): session closed for user core Jul 16 12:30:13.034264 systemd[1]: sshd@23-10.230.12.42:22-147.75.109.163:44990.service: Deactivated successfully. Jul 16 12:30:13.035477 systemd[1]: session-23.scope: Deactivated successfully. Jul 16 12:30:13.035844 systemd-logind[1280]: Session 23 logged out. Waiting for processes to exit. Jul 16 12:30:13.038524 systemd-logind[1280]: Removed session 23. Jul 16 12:30:20.432474 systemd[1]: Started sshd@24-10.230.12.42:22-194.113.37.118:45196.service.