Oct 29 05:25:17.930089 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Oct 28 23:40:27 -00 2025 Oct 29 05:25:17.930133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7 Oct 29 05:25:17.930181 kernel: BIOS-provided physical RAM map: Oct 29 05:25:17.930192 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 29 05:25:17.930201 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 29 05:25:17.930210 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 29 05:25:17.930239 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Oct 29 05:25:17.930251 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Oct 29 05:25:17.930260 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 29 05:25:17.930270 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 29 05:25:17.930284 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 29 05:25:17.930294 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 29 05:25:17.930303 kernel: NX (Execute Disable) protection: active Oct 29 05:25:17.930312 kernel: SMBIOS 2.8 present. Oct 29 05:25:17.930324 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Oct 29 05:25:17.930335 kernel: Hypervisor detected: KVM Oct 29 05:25:17.930348 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 29 05:25:17.930359 kernel: kvm-clock: cpu 0, msr 391a0001, primary cpu clock Oct 29 05:25:17.930369 kernel: kvm-clock: using sched offset of 4873739166 cycles Oct 29 05:25:17.930380 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 29 05:25:17.930398 kernel: tsc: Detected 2799.998 MHz processor Oct 29 05:25:17.930409 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 29 05:25:17.930420 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 29 05:25:17.930430 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Oct 29 05:25:17.930440 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 29 05:25:17.930454 kernel: Using GB pages for direct mapping Oct 29 05:25:17.930464 kernel: ACPI: Early table checksum verification disabled Oct 29 05:25:17.930474 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Oct 29 05:25:17.930485 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:25:17.930495 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:25:17.930511 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:25:17.930522 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Oct 29 05:25:17.930532 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:25:17.930543 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:25:17.930564 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Oct 29 05:25:17.930574 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 05:25:17.930590 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Oct 29 05:25:17.930601 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Oct 29 05:25:17.930619 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Oct 29 05:25:17.930637 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Oct 29 05:25:17.930654 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Oct 29 05:25:17.930668 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Oct 29 05:25:17.930679 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Oct 29 05:25:17.930690 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 29 05:25:17.930708 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 29 05:25:17.930719 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Oct 29 05:25:17.930729 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Oct 29 05:25:17.930740 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Oct 29 05:25:17.930755 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Oct 29 05:25:17.930766 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Oct 29 05:25:17.933073 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Oct 29 05:25:17.933087 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Oct 29 05:25:17.933099 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Oct 29 05:25:17.933109 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Oct 29 05:25:17.933120 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Oct 29 05:25:17.933131 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Oct 29 05:25:17.933142 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Oct 29 05:25:17.933152 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Oct 29 05:25:17.933170 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Oct 29 05:25:17.933181 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 29 05:25:17.933192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 29 05:25:17.933202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Oct 29 05:25:17.933213 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Oct 29 05:25:17.933224 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Oct 29 05:25:17.933235 kernel: Zone ranges: Oct 29 05:25:17.933246 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 29 05:25:17.933257 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Oct 29 05:25:17.933272 kernel: Normal empty Oct 29 05:25:17.933283 kernel: Movable zone start for each node Oct 29 05:25:17.933303 kernel: Early memory node ranges Oct 29 05:25:17.933315 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 29 05:25:17.933331 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Oct 29 05:25:17.933343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Oct 29 05:25:17.933353 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 29 05:25:17.933364 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 29 05:25:17.933375 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Oct 29 05:25:17.933390 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 29 05:25:17.933401 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 29 05:25:17.933412 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 29 05:25:17.933423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 29 05:25:17.933434 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 29 05:25:17.933445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 29 05:25:17.933456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 29 05:25:17.933466 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 29 05:25:17.933477 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 29 05:25:17.933492 kernel: TSC deadline timer available Oct 29 05:25:17.933503 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Oct 29 05:25:17.933513 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 29 05:25:17.933524 kernel: Booting paravirtualized kernel on KVM Oct 29 05:25:17.933535 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 29 05:25:17.933546 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Oct 29 05:25:17.933557 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Oct 29 05:25:17.933568 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Oct 29 05:25:17.933579 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Oct 29 05:25:17.933593 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Oct 29 05:25:17.933604 kernel: kvm-guest: PV spinlocks enabled Oct 29 05:25:17.933625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 29 05:25:17.933636 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Oct 29 05:25:17.933647 kernel: Policy zone: DMA32 Oct 29 05:25:17.933666 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7 Oct 29 05:25:17.933680 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 29 05:25:17.933691 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 05:25:17.933707 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 29 05:25:17.933718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 05:25:17.933729 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 192524K reserved, 0K cma-reserved) Oct 29 05:25:17.933740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Oct 29 05:25:17.933751 kernel: Kernel/User page tables isolation: enabled Oct 29 05:25:17.933762 kernel: ftrace: allocating 34614 entries in 136 pages Oct 29 05:25:17.934862 kernel: ftrace: allocated 136 pages with 2 groups Oct 29 05:25:17.934882 kernel: rcu: Hierarchical RCU implementation. Oct 29 05:25:17.934894 kernel: rcu: RCU event tracing is enabled. Oct 29 05:25:17.934912 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Oct 29 05:25:17.934924 kernel: Rude variant of Tasks RCU enabled. Oct 29 05:25:17.934935 kernel: Tracing variant of Tasks RCU enabled. Oct 29 05:25:17.934946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 29 05:25:17.934957 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Oct 29 05:25:17.934968 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Oct 29 05:25:17.934979 kernel: random: crng init done Oct 29 05:25:17.935003 kernel: Console: colour VGA+ 80x25 Oct 29 05:25:17.935015 kernel: printk: console [tty0] enabled Oct 29 05:25:17.935026 kernel: printk: console [ttyS0] enabled Oct 29 05:25:17.935037 kernel: ACPI: Core revision 20210730 Oct 29 05:25:17.935049 kernel: APIC: Switch to symmetric I/O mode setup Oct 29 05:25:17.935064 kernel: x2apic enabled Oct 29 05:25:17.935075 kernel: Switched APIC routing to physical x2apic. Oct 29 05:25:17.935087 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Oct 29 05:25:17.935098 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Oct 29 05:25:17.935110 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 29 05:25:17.935125 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 29 05:25:17.935137 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 29 05:25:17.935148 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 29 05:25:17.935159 kernel: Spectre V2 : Mitigation: Retpolines Oct 29 05:25:17.935170 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 29 05:25:17.935182 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 29 05:25:17.935193 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 05:25:17.935204 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 29 05:25:17.935215 kernel: MDS: Mitigation: Clear CPU buffers Oct 29 05:25:17.935227 kernel: MMIO Stale Data: Unknown: No mitigations Oct 29 05:25:17.935237 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 29 05:25:17.935253 kernel: active return thunk: its_return_thunk Oct 29 05:25:17.935264 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 29 05:25:17.935276 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 05:25:17.935297 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 05:25:17.935308 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 05:25:17.935319 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 05:25:17.935331 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 29 05:25:17.935342 kernel: Freeing SMP alternatives memory: 32K Oct 29 05:25:17.935353 kernel: pid_max: default: 32768 minimum: 301 Oct 29 05:25:17.935364 kernel: LSM: Security Framework initializing Oct 29 05:25:17.935375 kernel: SELinux: Initializing. Oct 29 05:25:17.935390 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 05:25:17.935402 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 05:25:17.935413 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Oct 29 05:25:17.935425 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Oct 29 05:25:17.935436 kernel: signal: max sigframe size: 1776 Oct 29 05:25:17.935447 kernel: rcu: Hierarchical SRCU implementation. 
Oct 29 05:25:17.935459 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 29 05:25:17.935470 kernel: smp: Bringing up secondary CPUs ... Oct 29 05:25:17.935481 kernel: x86: Booting SMP configuration: Oct 29 05:25:17.935493 kernel: .... node #0, CPUs: #1 Oct 29 05:25:17.935508 kernel: kvm-clock: cpu 1, msr 391a0041, secondary cpu clock Oct 29 05:25:17.935520 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 29 05:25:17.935531 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Oct 29 05:25:17.935543 kernel: smp: Brought up 1 node, 2 CPUs Oct 29 05:25:17.935554 kernel: smpboot: Max logical packages: 16 Oct 29 05:25:17.935565 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Oct 29 05:25:17.935577 kernel: devtmpfs: initialized Oct 29 05:25:17.935588 kernel: x86/mm: Memory block size: 128MB Oct 29 05:25:17.935599 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 05:25:17.935627 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Oct 29 05:25:17.935639 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 05:25:17.935650 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 05:25:17.935662 kernel: audit: initializing netlink subsys (disabled) Oct 29 05:25:17.935673 kernel: audit: type=2000 audit(1761715516.462:1): state=initialized audit_enabled=0 res=1 Oct 29 05:25:17.935684 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 05:25:17.935695 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 05:25:17.935706 kernel: cpuidle: using governor menu Oct 29 05:25:17.935718 kernel: ACPI: bus type PCI registered Oct 29 05:25:17.935734 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 05:25:17.935745 kernel: dca service started, version 1.12.1 Oct 29 05:25:17.935757 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 29 05:25:17.935768 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Oct 29 05:25:17.935792 kernel: PCI: Using configuration type 1 for base access Oct 29 05:25:17.935803 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 29 05:25:17.935815 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 05:25:17.935826 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 05:25:17.935838 kernel: ACPI: Added _OSI(Module Device) Oct 29 05:25:17.935854 kernel: ACPI: Added _OSI(Processor Device) Oct 29 05:25:17.935866 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 05:25:17.935877 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 29 05:25:17.935888 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 29 05:25:17.935900 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 29 05:25:17.935911 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 05:25:17.935922 kernel: ACPI: Interpreter enabled Oct 29 05:25:17.935934 kernel: ACPI: PM: (supports S0 S5) Oct 29 05:25:17.935945 kernel: ACPI: Using IOAPIC for interrupt routing Oct 29 05:25:17.935960 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 29 05:25:17.935972 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 29 05:25:17.935983 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 05:25:17.936252 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 05:25:17.936405 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 29 05:25:17.936549 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 29 05:25:17.936566 kernel: PCI host bridge to bus 0000:00 Oct 29 05:25:17.936738 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 29 05:25:17.936887 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 29 05:25:17.937019 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 29 05:25:17.937151 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Oct 29 05:25:17.937283 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 29 05:25:17.937460 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Oct 29 05:25:17.937602 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 05:25:17.937837 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 29 05:25:17.938066 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Oct 29 05:25:17.938246 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Oct 29 05:25:17.938396 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Oct 29 05:25:17.938541 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Oct 29 05:25:17.938723 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 29 05:25:17.945964 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.946142 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Oct 29 05:25:17.946341 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.946505 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Oct 29 05:25:17.946715 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.946961 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Oct 29 05:25:17.947158 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.947313 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Oct 29 05:25:17.947498 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.947660 kernel: pci 
0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Oct 29 05:25:17.947829 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.947974 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Oct 29 05:25:17.948144 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.948308 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Oct 29 05:25:17.948467 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 29 05:25:17.948656 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Oct 29 05:25:17.948835 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 29 05:25:17.948994 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 29 05:25:17.949146 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Oct 29 05:25:17.949296 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Oct 29 05:25:17.949459 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Oct 29 05:25:17.949639 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 29 05:25:17.949822 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 29 05:25:17.949986 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Oct 29 05:25:17.950150 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Oct 29 05:25:17.950357 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 29 05:25:17.950521 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 29 05:25:17.950730 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 29 05:25:17.950915 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Oct 29 05:25:17.951104 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Oct 29 05:25:17.951291 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 29 05:25:17.951471 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 29 05:25:17.951691 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Oct 29 05:25:17.953932 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Oct 29 05:25:17.954128 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 29 05:25:17.954284 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 29 05:25:17.954445 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 29 05:25:17.954635 kernel: pci_bus 0000:02: extended config space not accessible Oct 29 05:25:17.954839 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Oct 29 05:25:17.955044 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Oct 29 05:25:17.955207 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 29 05:25:17.955366 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 29 05:25:17.955559 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 29 05:25:17.955735 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Oct 29 05:25:17.962960 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 29 05:25:17.963138 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 29 05:25:17.963289 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 29 05:25:17.963484 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 29 05:25:17.963655 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Oct 29 05:25:17.963826 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 29 05:25:17.963974 kernel: pci 
0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 29 05:25:17.964118 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 29 05:25:17.964280 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 29 05:25:17.964439 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 29 05:25:17.964594 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 29 05:25:17.964783 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 29 05:25:17.964932 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 29 05:25:17.965083 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 29 05:25:17.965229 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 29 05:25:17.965375 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 29 05:25:17.965527 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 29 05:25:17.965693 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 29 05:25:17.965858 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 29 05:25:17.966017 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 29 05:25:17.966165 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 29 05:25:17.966308 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 29 05:25:17.966449 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 29 05:25:17.966467 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 29 05:25:17.966480 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 29 05:25:17.966499 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 29 05:25:17.966511 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 29 05:25:17.966523 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 29 05:25:17.966534 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 29 05:25:17.966546 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 29 05:25:17.966558 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 29 05:25:17.966577 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 29 05:25:17.966588 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 29 05:25:17.966600 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 29 05:25:17.966626 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 29 05:25:17.966638 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 29 05:25:17.966650 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 29 05:25:17.966661 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 29 05:25:17.966673 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 29 05:25:17.966684 kernel: iommu: Default domain type: Translated Oct 29 05:25:17.966696 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 29 05:25:17.966867 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 29 05:25:17.967034 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 29 05:25:17.967182 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 29 05:25:17.967200 kernel: vgaarb: loaded Oct 29 05:25:17.967212 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 29 05:25:17.967224 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 29 05:25:17.967236 kernel: PTP clock support registered Oct 29 05:25:17.967247 kernel: PCI: Using ACPI for IRQ routing Oct 29 05:25:17.967259 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 29 05:25:17.967271 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 29 05:25:17.967282 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Oct 29 05:25:17.967312 kernel: clocksource: Switched to clocksource kvm-clock Oct 29 05:25:17.967323 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 05:25:17.967336 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 05:25:17.967347 kernel: pnp: PnP ACPI init Oct 29 05:25:17.967555 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 29 05:25:17.967575 kernel: pnp: PnP ACPI: found 5 devices Oct 29 05:25:17.967588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 29 05:25:17.967599 kernel: NET: Registered PF_INET protocol family Oct 29 05:25:17.967627 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 05:25:17.967639 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 29 05:25:17.967651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 05:25:17.967663 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 29 05:25:17.967675 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 29 05:25:17.967687 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 29 05:25:17.967698 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 29 05:25:17.967720 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 29 05:25:17.967737 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 05:25:17.967750 kernel: NET: Registered PF_XDP protocol family Oct 29 05:25:17.967907 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Oct 29 05:25:17.968051 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 29 05:25:17.968194 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 29 05:25:17.968336 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 29 05:25:17.968486 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 29 05:25:17.968710 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 29 05:25:17.968889 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 29 05:25:17.969035 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 29 05:25:17.969191 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Oct 29 05:25:17.969341 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Oct 29 05:25:17.969485 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Oct 29 05:25:17.969651 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Oct 29 05:25:17.980729 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Oct 29 05:25:17.980928 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Oct 29 05:25:17.981078 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Oct 29 05:25:17.981234 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Oct 29 
05:25:17.981398 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 29 05:25:17.981552 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 29 05:25:17.981714 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 29 05:25:17.981876 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 29 05:25:17.982029 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 29 05:25:17.982188 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 29 05:25:17.982367 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 29 05:25:17.982514 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 29 05:25:17.982686 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 29 05:25:17.982846 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 29 05:25:17.982992 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 29 05:25:17.983135 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 29 05:25:17.983297 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 29 05:25:17.983458 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 29 05:25:17.983608 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 29 05:25:17.983765 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 29 05:25:17.983951 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 29 05:25:17.984117 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 29 05:25:17.984275 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 29 05:25:17.984440 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 29 05:25:17.984597 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 29 05:25:17.984762 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 29 05:25:17.984946 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 29 05:25:17.985124 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 29 05:25:17.985282 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 29 05:25:17.985455 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 29 05:25:17.985607 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 29 05:25:17.985798 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 29 05:25:17.985955 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 29 05:25:17.986127 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 29 05:25:17.986308 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 29 05:25:17.986474 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 29 05:25:17.986674 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 29 05:25:17.986846 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 29 05:25:17.987044 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 29 05:25:17.987179 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 29 05:25:17.987346 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 29 05:25:17.987503 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Oct 29 05:25:17.987663 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 29 05:25:17.987825 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Oct 29 05:25:17.988011 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] Oct 29 05:25:17.988172 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Oct 29 05:25:17.988340 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 29 05:25:17.988520 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 29 05:25:17.988722 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Oct 29 05:25:17.988963 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 29 05:25:17.989113 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 29 05:25:17.989266 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Oct 29 05:25:17.989429 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 29 05:25:17.989566 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 29 05:25:17.989728 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Oct 29 05:25:17.989882 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 29 05:25:17.990050 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 29 05:25:17.990247 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Oct 29 05:25:17.990414 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 29 05:25:17.990552 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 29 05:25:17.990713 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Oct 29 05:25:17.990878 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 29 05:25:17.991029 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 29 05:25:17.991176 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Oct 29 05:25:17.991321 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Oct 29 05:25:17.991466 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 29 05:25:17.991622 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Oct 29 05:25:17.991785 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 29 05:25:17.991929 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 29 05:25:17.991948 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 29 05:25:17.991961 kernel: PCI: CLS 0 bytes, default 64 Oct 29 05:25:17.991974 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 29 05:25:17.991993 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Oct 29 05:25:17.992013 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 29 05:25:17.992026 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Oct 29 05:25:17.992039 kernel: Initialise system trusted keyrings Oct 29 05:25:17.992051 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 29 05:25:17.992064 kernel: Key type asymmetric registered Oct 29 05:25:17.992076 kernel: Asymmetric key parser 'x509' registered Oct 29 05:25:17.992088 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 29 05:25:17.992101 kernel: io scheduler mq-deadline registered Oct 29 05:25:17.992117 kernel: io scheduler kyber registered Oct 29 05:25:17.992129 kernel: io scheduler bfq registered Oct 29 05:25:17.992280 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 29 05:25:17.992631 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 29 05:25:17.992901 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:17.993071 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 29 05:25:17.993231 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 29 05:25:17.993400 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:17.993564 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 29 05:25:17.993728 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 29 05:25:17.993890 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:17.994042 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 29 05:25:17.994202 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 29 05:25:17.994365 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:17.994516 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 29 05:25:17.994691 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 29 05:25:17.994861 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:17.995012 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 29 05:25:17.995168 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 29 05:25:17.995352 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:17.995504 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 29 05:25:17.995664 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 29 05:25:18.002502 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:18.002705 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 29 05:25:18.002882 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 29 05:25:18.003039 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 05:25:18.003060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 05:25:18.003074 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 29 05:25:18.003099 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 29 05:25:18.003110 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 05:25:18.003122 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 05:25:18.003134 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 29 05:25:18.003146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 05:25:18.003176 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 05:25:18.003188 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 05:25:18.003345 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 29 05:25:18.003518 kernel: rtc_cmos 00:03: registered as rtc0 Oct 29 05:25:18.003668 kernel: rtc_cmos 00:03: setting system clock to 2025-10-29T05:25:17 UTC (1761715517) Oct 29 05:25:18.009890 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 29 05:25:18.009920 kernel: intel_pstate: CPU model not supported Oct 29 05:25:18.009935 kernel: 
NET: Registered PF_INET6 protocol family Oct 29 05:25:18.009954 kernel: Segment Routing with IPv6 Oct 29 05:25:18.009967 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 05:25:18.009979 kernel: NET: Registered PF_PACKET protocol family Oct 29 05:25:18.009991 kernel: Key type dns_resolver registered Oct 29 05:25:18.010004 kernel: IPI shorthand broadcast: enabled Oct 29 05:25:18.010028 kernel: sched_clock: Marking stable (966684573, 218554967)->(1456507673, -271268133) Oct 29 05:25:18.010040 kernel: registered taskstats version 1 Oct 29 05:25:18.010051 kernel: Loading compiled-in X.509 certificates Oct 29 05:25:18.010063 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 88bc8a4d729b2f514b4a44a35b666d3248ded14a' Oct 29 05:25:18.010091 kernel: Key type .fscrypt registered Oct 29 05:25:18.010102 kernel: Key type fscrypt-provisioning registered Oct 29 05:25:18.010113 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 29 05:25:18.010125 kernel: ima: Allocated hash algorithm: sha1 Oct 29 05:25:18.010148 kernel: ima: No architecture policies found Oct 29 05:25:18.010160 kernel: clk: Disabling unused clocks Oct 29 05:25:18.010171 kernel: Freeing unused kernel image (initmem) memory: 47496K Oct 29 05:25:18.010193 kernel: Write protecting the kernel read-only data: 28672k Oct 29 05:25:18.010217 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 29 05:25:18.010233 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Oct 29 05:25:18.010254 kernel: Run /init as init process Oct 29 05:25:18.010279 kernel: with arguments: Oct 29 05:25:18.010291 kernel: /init Oct 29 05:25:18.010307 kernel: with environment: Oct 29 05:25:18.010319 kernel: HOME=/ Oct 29 05:25:18.010331 kernel: TERM=linux Oct 29 05:25:18.010343 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 29 05:25:18.010364 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 05:25:18.010386 systemd[1]: Detected virtualization kvm. Oct 29 05:25:18.010400 systemd[1]: Detected architecture x86-64. Oct 29 05:25:18.010412 systemd[1]: Running in initrd. Oct 29 05:25:18.010431 systemd[1]: No hostname configured, using default hostname. Oct 29 05:25:18.010444 systemd[1]: Hostname set to . Oct 29 05:25:18.010457 systemd[1]: Initializing machine ID from VM UUID. Oct 29 05:25:18.010469 systemd[1]: Queued start job for default target initrd.target. Oct 29 05:25:18.010486 systemd[1]: Started systemd-ask-password-console.path. Oct 29 05:25:18.010499 systemd[1]: Reached target cryptsetup.target. Oct 29 05:25:18.010512 systemd[1]: Reached target paths.target. Oct 29 05:25:18.010536 systemd[1]: Reached target slices.target. Oct 29 05:25:18.010548 systemd[1]: Reached target swap.target. Oct 29 05:25:18.010560 systemd[1]: Reached target timers.target. Oct 29 05:25:18.010573 systemd[1]: Listening on iscsid.socket. Oct 29 05:25:18.010589 systemd[1]: Listening on iscsiuio.socket. Oct 29 05:25:18.010636 systemd[1]: Listening on systemd-journald-audit.socket. Oct 29 05:25:18.010649 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 29 05:25:18.010662 systemd[1]: Listening on systemd-journald.socket. Oct 29 05:25:18.010675 systemd[1]: Listening on systemd-networkd.socket. 
Oct 29 05:25:18.010688 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 05:25:18.010701 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 05:25:18.010714 systemd[1]: Reached target sockets.target. Oct 29 05:25:18.010726 systemd[1]: Starting kmod-static-nodes.service... Oct 29 05:25:18.010744 systemd[1]: Finished network-cleanup.service. Oct 29 05:25:18.010757 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 05:25:18.010787 systemd[1]: Starting systemd-journald.service... Oct 29 05:25:18.010802 systemd[1]: Starting systemd-modules-load.service... Oct 29 05:25:18.010816 systemd[1]: Starting systemd-resolved.service... Oct 29 05:25:18.010828 systemd[1]: Starting systemd-vconsole-setup.service... Oct 29 05:25:18.010842 systemd[1]: Finished kmod-static-nodes.service. Oct 29 05:25:18.010854 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 05:25:18.010867 kernel: Bridge firewalling registered Oct 29 05:25:18.010895 systemd-journald[202]: Journal started Oct 29 05:25:18.010973 systemd-journald[202]: Runtime Journal (/run/log/journal/2fabbf0100d9401bba3db6f65d3a7826) is 4.7M, max 38.1M, 33.3M free. Oct 29 05:25:17.930837 systemd-modules-load[203]: Inserted module 'overlay' Oct 29 05:25:17.981366 systemd-resolved[204]: Positive Trust Anchors: Oct 29 05:25:18.031893 systemd[1]: Started systemd-resolved.service. Oct 29 05:25:18.031921 kernel: audit: type=1130 audit(1761715518.024:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:17.981386 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 05:25:18.051877 systemd[1]: Started systemd-journald.service. Oct 29 05:25:18.051905 kernel: audit: type=1130 audit(1761715518.031:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.051925 kernel: SCSI subsystem initialized Oct 29 05:25:18.051946 kernel: audit: type=1130 audit(1761715518.039:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:17.981429 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 05:25:18.083004 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 05:25:18.083041 kernel: device-mapper: uevent: version 1.0.3 Oct 29 05:25:18.083072 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 29 05:25:18.083109 kernel: audit: type=1130 audit(1761715518.045:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.083128 kernel: audit: type=1130 audit(1761715518.046:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:17.989533 systemd-resolved[204]: Defaulting to hostname 'linux'. Oct 29 05:25:18.089162 kernel: audit: type=1130 audit(1761715518.082:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.011425 systemd-modules-load[203]: Inserted module 'br_netfilter' Oct 29 05:25:18.095061 kernel: audit: type=1130 audit(1761715518.089:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.040397 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 05:25:18.105212 kernel: audit: type=1130 audit(1761715518.094:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.046633 systemd[1]: Finished systemd-vconsole-setup.service. 
Oct 29 05:25:18.047366 systemd[1]: Reached target nss-lookup.target. Oct 29 05:25:18.049047 systemd[1]: Starting dracut-cmdline-ask.service... Oct 29 05:25:18.050583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 05:25:18.069468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 05:25:18.077250 systemd-modules-load[203]: Inserted module 'dm_multipath' Oct 29 05:25:18.083968 systemd[1]: Finished systemd-modules-load.service. Oct 29 05:25:18.090119 systemd[1]: Finished dracut-cmdline-ask.service. Oct 29 05:25:18.096928 systemd[1]: Starting dracut-cmdline.service... Oct 29 05:25:18.102715 systemd[1]: Starting systemd-sysctl.service... Oct 29 05:25:18.115748 systemd[1]: Finished systemd-sysctl.service. Oct 29 05:25:18.135371 kernel: audit: type=1130 audit(1761715518.129:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.135471 dracut-cmdline[221]: dracut-dracut-053 Oct 29 05:25:18.137810 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7 Oct 29 05:25:18.232815 kernel: Loading iSCSI transport class v2.0-870. Oct 29 05:25:18.254818 kernel: iscsi: registered transport (tcp) Oct 29 05:25:18.281831 kernel: iscsi: registered transport (qla4xxx) Oct 29 05:25:18.281900 kernel: QLogic iSCSI HBA Driver Oct 29 05:25:18.334914 systemd[1]: Finished dracut-cmdline.service. Oct 29 05:25:18.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.336926 systemd[1]: Starting dracut-pre-udev.service... Oct 29 05:25:18.395809 kernel: raid6: sse2x4 gen() 13309 MB/s Oct 29 05:25:18.413822 kernel: raid6: sse2x4 xor() 7823 MB/s Oct 29 05:25:18.431833 kernel: raid6: sse2x2 gen() 8934 MB/s Oct 29 05:25:18.449817 kernel: raid6: sse2x2 xor() 8195 MB/s Oct 29 05:25:18.467834 kernel: raid6: sse2x1 gen() 9228 MB/s Oct 29 05:25:18.486493 kernel: raid6: sse2x1 xor() 7349 MB/s Oct 29 05:25:18.486548 kernel: raid6: using algorithm sse2x4 gen() 13309 MB/s Oct 29 05:25:18.486566 kernel: raid6: .... xor() 7823 MB/s, rmw enabled Oct 29 05:25:18.487885 kernel: raid6: using ssse3x2 recovery algorithm Oct 29 05:25:18.505808 kernel: xor: automatically using best checksumming function avx Oct 29 05:25:18.618820 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 29 05:25:18.631655 systemd[1]: Finished dracut-pre-udev.service. Oct 29 05:25:18.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:18.632000 audit: BPF prog-id=7 op=LOAD Oct 29 05:25:18.632000 audit: BPF prog-id=8 op=LOAD Oct 29 05:25:18.633680 systemd[1]: Starting systemd-udevd.service... Oct 29 05:25:18.651404 systemd-udevd[401]: Using default interface naming scheme 'v252'. Oct 29 05:25:18.660028 systemd[1]: Started systemd-udevd.service. Oct 29 05:25:18.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.662341 systemd[1]: Starting dracut-pre-trigger.service... Oct 29 05:25:18.680009 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Oct 29 05:25:18.719090 systemd[1]: Finished dracut-pre-trigger.service. Oct 29 05:25:18.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.720895 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 05:25:18.814937 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 05:25:18.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:18.911803 kernel: ACPI: bus type USB registered Oct 29 05:25:18.918791 kernel: usbcore: registered new interface driver usbfs Oct 29 05:25:18.927421 kernel: usbcore: registered new interface driver hub Oct 29 05:25:18.927453 kernel: usbcore: registered new device driver usb Oct 29 05:25:18.927483 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 29 05:25:18.995738 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 05:25:18.995762 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 05:25:18.995794 kernel: GPT:17805311 != 125829119 Oct 29 05:25:18.995811 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 05:25:18.995827 kernel: GPT:17805311 != 125829119 Oct 29 05:25:18.995841 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 05:25:18.995857 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 05:25:18.995872 kernel: AVX version of gcm_enc/dec engaged. Oct 29 05:25:18.995892 kernel: AES CTR mode by8 optimization enabled Oct 29 05:25:18.995908 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 29 05:25:18.998571 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 29 05:25:18.998790 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 29 05:25:18.998963 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 29 05:25:18.999156 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 29 05:25:18.999349 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 29 05:25:18.999543 kernel: hub 1-0:1.0: USB hub found Oct 29 05:25:18.999761 kernel: hub 1-0:1.0: 4 ports detected Oct 29 05:25:18.999991 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 29 05:25:19.000178 kernel: hub 2-0:1.0: USB hub found Oct 29 05:25:19.000379 kernel: hub 2-0:1.0: 4 ports detected Oct 29 05:25:19.006795 kernel: libata version 3.00 loaded. 
Oct 29 05:25:19.037814 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (453) Oct 29 05:25:19.051164 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 29 05:25:19.147817 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 05:25:19.148050 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 05:25:19.148072 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 29 05:25:19.148235 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 05:25:19.148416 kernel: scsi host0: ahci Oct 29 05:25:19.148644 kernel: scsi host1: ahci Oct 29 05:25:19.148847 kernel: scsi host2: ahci Oct 29 05:25:19.149040 kernel: scsi host3: ahci Oct 29 05:25:19.149236 kernel: scsi host4: ahci Oct 29 05:25:19.149406 kernel: scsi host5: ahci Oct 29 05:25:19.149611 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Oct 29 05:25:19.149630 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Oct 29 05:25:19.149652 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Oct 29 05:25:19.149675 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Oct 29 05:25:19.149691 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Oct 29 05:25:19.149707 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Oct 29 05:25:19.150765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 29 05:25:19.152211 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 29 05:25:19.158157 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 29 05:25:19.163401 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 05:25:19.166376 systemd[1]: Starting disk-uuid.service... Oct 29 05:25:19.177815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 05:25:19.178868 disk-uuid[534]: Primary Header is updated. Oct 29 05:25:19.178868 disk-uuid[534]: Secondary Entries is updated. Oct 29 05:25:19.178868 disk-uuid[534]: Secondary Header is updated. Oct 29 05:25:19.237832 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 29 05:25:19.377803 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 29 05:25:19.396227 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 05:25:19.396272 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 29 05:25:19.397672 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 05:25:19.400794 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 05:25:19.400843 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 05:25:19.402797 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 29 05:25:19.414136 kernel: usbcore: registered new interface driver usbhid Oct 29 05:25:19.414178 kernel: usbhid: USB HID core driver Oct 29 05:25:19.423365 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 29 05:25:19.423398 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 29 05:25:20.194640 disk-uuid[535]: The operation has completed successfully. Oct 29 05:25:20.195575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 05:25:20.254123 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 05:25:20.254280 systemd[1]: Finished disk-uuid.service. 
Oct 29 05:25:20.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.256199 systemd[1]: Starting verity-setup.service... Oct 29 05:25:20.276878 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Oct 29 05:25:20.328952 systemd[1]: Found device dev-mapper-usr.device. Oct 29 05:25:20.330847 systemd[1]: Mounting sysusr-usr.mount... Oct 29 05:25:20.332678 systemd[1]: Finished verity-setup.service. Oct 29 05:25:20.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.423809 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 29 05:25:20.424335 systemd[1]: Mounted sysusr-usr.mount. Oct 29 05:25:20.425159 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 29 05:25:20.426156 systemd[1]: Starting ignition-setup.service... Oct 29 05:25:20.429240 systemd[1]: Starting parse-ip-for-networkd.service... Oct 29 05:25:20.443385 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 05:25:20.443450 kernel: BTRFS info (device vda6): using free space tree Oct 29 05:25:20.443469 kernel: BTRFS info (device vda6): has skinny extents Oct 29 05:25:20.459236 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 29 05:25:20.466019 systemd[1]: Finished ignition-setup.service. Oct 29 05:25:20.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.467761 systemd[1]: Starting ignition-fetch-offline.service... Oct 29 05:25:20.579101 systemd[1]: Finished parse-ip-for-networkd.service. Oct 29 05:25:20.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.580000 audit: BPF prog-id=9 op=LOAD Oct 29 05:25:20.582335 systemd[1]: Starting systemd-networkd.service... Oct 29 05:25:20.613995 systemd-networkd[709]: lo: Link UP Oct 29 05:25:20.614010 systemd-networkd[709]: lo: Gained carrier Oct 29 05:25:20.615387 systemd-networkd[709]: Enumeration completed Oct 29 05:25:20.616120 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 05:25:20.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.617764 systemd[1]: Started systemd-networkd.service. Oct 29 05:25:20.618385 systemd-networkd[709]: eth0: Link UP Oct 29 05:25:20.618391 systemd-networkd[709]: eth0: Gained carrier Oct 29 05:25:20.618664 systemd[1]: Reached target network.target. Oct 29 05:25:20.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:20.620343 systemd[1]: Starting iscsiuio.service... Oct 29 05:25:20.634263 systemd[1]: Started iscsiuio.service. Oct 29 05:25:20.636606 systemd[1]: Starting iscsid.service... Oct 29 05:25:20.642523 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 29 05:25:20.642523 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 29 05:25:20.642523 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 29 05:25:20.642523 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 29 05:25:20.642523 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 29 05:25:20.642523 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 29 05:25:20.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.654085 ignition[620]: Ignition 2.14.0 Oct 29 05:25:20.645256 systemd[1]: Started iscsid.service. Oct 29 05:25:20.654104 ignition[620]: Stage: fetch-offline Oct 29 05:25:20.647229 systemd[1]: Starting dracut-initqueue.service... Oct 29 05:25:20.654203 ignition[620]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:20.649215 systemd-networkd[709]: eth0: DHCPv4 address 10.230.52.194/30, gateway 10.230.52.193 acquired from 10.230.52.193 Oct 29 05:25:20.654255 ignition[620]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:20.658974 systemd[1]: Finished ignition-fetch-offline.service. Oct 29 05:25:20.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.656072 ignition[620]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:20.661153 systemd[1]: Starting ignition-fetch.service... Oct 29 05:25:20.656249 ignition[620]: parsed url from cmdline: "" Oct 29 05:25:20.667673 systemd[1]: Finished dracut-initqueue.service. Oct 29 05:25:20.656256 ignition[620]: no config URL provided Oct 29 05:25:20.668378 systemd[1]: Reached target remote-fs-pre.target. Oct 29 05:25:20.656265 ignition[620]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 05:25:20.669033 systemd[1]: Reached target remote-cryptsetup.target. 
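The iscsid warnings above are expected in this initrd: the image ships no /etc/iscsi/initiatorname.iscsi and no iSCSI targets are used during boot. For a host that does need software iSCSI, the file iscsid asks for is a single InitiatorName= line; a hedged sketch follows, where the IQN is a made-up example (the yyyy-mm part is the month the naming authority's domain was registered):

    # Illustrative only: write the file iscsid complains about above.
    IQN = "iqn.2001-04.com.example:initiator01"   # assumed example IQN, not a real initiator name

    with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
        f.write(f"InitiatorName={IQN}\n")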
Oct 29 05:25:20.656293 ignition[620]: no config at "/usr/lib/ignition/user.ign" Oct 29 05:25:20.669614 systemd[1]: Reached target remote-fs.target. Oct 29 05:25:20.656319 ignition[620]: failed to fetch config: resource requires networking Oct 29 05:25:20.671221 systemd[1]: Starting dracut-pre-mount.service... Oct 29 05:25:20.656695 ignition[620]: Ignition finished successfully Oct 29 05:25:20.682474 systemd[1]: Finished dracut-pre-mount.service. Oct 29 05:25:20.686639 ignition[721]: Ignition 2.14.0 Oct 29 05:25:20.686649 ignition[721]: Stage: fetch Oct 29 05:25:20.687684 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:20.687719 ignition[721]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:20.688662 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:20.688798 ignition[721]: parsed url from cmdline: "" Oct 29 05:25:20.688808 ignition[721]: no config URL provided Oct 29 05:25:20.688818 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 05:25:20.688834 ignition[721]: no config at "/usr/lib/ignition/user.ign" Oct 29 05:25:20.692300 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 29 05:25:20.692364 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 29 05:25:20.693216 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 29 05:25:20.712035 ignition[721]: GET result: OK Oct 29 05:25:20.712149 ignition[721]: parsing config with SHA512: 0b0290a3376afe04f4a75b4cda9eca059e00f2e2e6572a48d471639a6be5d1f30a616913bcd80bfe1062f3b81baf001cb9e15b91ddd978b811e87f172e8934eb Oct 29 05:25:20.720694 unknown[721]: fetched base config from "system" Oct 29 05:25:20.721266 ignition[721]: fetch: fetch complete Oct 29 05:25:20.720712 unknown[721]: fetched base config from "system" Oct 29 05:25:20.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.721275 ignition[721]: fetch: fetch passed Oct 29 05:25:20.720721 unknown[721]: fetched user config from "openstack" Oct 29 05:25:20.721330 ignition[721]: Ignition finished successfully Oct 29 05:25:20.724910 systemd[1]: Finished ignition-fetch.service. Oct 29 05:25:20.726672 systemd[1]: Starting ignition-kargs.service... Oct 29 05:25:20.738883 ignition[735]: Ignition 2.14.0 Oct 29 05:25:20.738902 ignition[735]: Stage: kargs Oct 29 05:25:20.739071 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:20.739104 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:20.740283 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:20.741828 ignition[735]: kargs: kargs passed Oct 29 05:25:20.742900 systemd[1]: Finished ignition-kargs.service. Oct 29 05:25:20.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.741892 ignition[735]: Ignition finished successfully Oct 29 05:25:20.744928 systemd[1]: Starting ignition-disks.service... 
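The fetch stage above shows the OpenStack flow: Ignition first waits for a config-2 config drive, then falls back to the metadata service, GETs /openstack/latest/user_data, and logs a SHA512 digest of the config it parsed. A rough Python re-creation of that sequence, using only the endpoint and label visible in the log (the timeout and single-attempt behaviour here are illustrative, not Ignition's):

    import hashlib
    import os
    import urllib.request

    if os.path.exists("/dev/disk/by-label/config-2"):
        print("config drive present; the user_data would be read from it")
    else:
        url = "http://169.254.169.254/openstack/latest/user_data"
        with urllib.request.urlopen(url, timeout=10) as resp:     # "GET ... attempt #1"
            body = resp.read()
        print("GET result: OK")
        print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())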
Oct 29 05:25:20.754673 ignition[740]: Ignition 2.14.0 Oct 29 05:25:20.754695 ignition[740]: Stage: disks Oct 29 05:25:20.754893 ignition[740]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:20.754928 ignition[740]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:20.756139 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:20.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.758746 systemd[1]: Finished ignition-disks.service. Oct 29 05:25:20.757666 ignition[740]: disks: disks passed Oct 29 05:25:20.759824 systemd[1]: Reached target initrd-root-device.target. Oct 29 05:25:20.757748 ignition[740]: Ignition finished successfully Oct 29 05:25:20.760446 systemd[1]: Reached target local-fs-pre.target. Oct 29 05:25:20.761022 systemd[1]: Reached target local-fs.target. Oct 29 05:25:20.761592 systemd[1]: Reached target sysinit.target. Oct 29 05:25:20.762166 systemd[1]: Reached target basic.target. Oct 29 05:25:20.764475 systemd[1]: Starting systemd-fsck-root.service... Oct 29 05:25:20.784529 systemd-fsck[748]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks Oct 29 05:25:20.788622 systemd[1]: Finished systemd-fsck-root.service. Oct 29 05:25:20.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.790320 systemd[1]: Mounting sysroot.mount... Oct 29 05:25:20.800828 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 29 05:25:20.801669 systemd[1]: Mounted sysroot.mount. Oct 29 05:25:20.803124 systemd[1]: Reached target initrd-root-fs.target. Oct 29 05:25:20.805842 systemd[1]: Mounting sysroot-usr.mount... Oct 29 05:25:20.807816 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 29 05:25:20.809819 systemd[1]: Starting flatcar-openstack-hostname.service... Oct 29 05:25:20.811372 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 05:25:20.812051 systemd[1]: Reached target ignition-diskful.target. Oct 29 05:25:20.815841 systemd[1]: Mounted sysroot-usr.mount. Oct 29 05:25:20.818764 systemd[1]: Starting initrd-setup-root.service... Oct 29 05:25:20.826160 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 05:25:20.840939 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Oct 29 05:25:20.852197 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 05:25:20.860180 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 05:25:20.917534 systemd[1]: Finished initrd-setup-root.service. Oct 29 05:25:20.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.919721 systemd[1]: Starting ignition-mount.service... Oct 29 05:25:20.921472 systemd[1]: Starting sysroot-boot.service... 
Oct 29 05:25:20.930936 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Oct 29 05:25:20.952470 ignition[804]: INFO : Ignition 2.14.0 Oct 29 05:25:20.953605 ignition[804]: INFO : Stage: mount Oct 29 05:25:20.954470 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:20.956519 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:20.958991 coreos-metadata[754]: Oct 29 05:25:20.957 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 29 05:25:20.961825 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:20.966675 ignition[804]: INFO : mount: mount passed Oct 29 05:25:20.967405 ignition[804]: INFO : Ignition finished successfully Oct 29 05:25:20.968397 systemd[1]: Finished ignition-mount.service. Oct 29 05:25:20.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.975364 coreos-metadata[754]: Oct 29 05:25:20.975 INFO Fetch successful Oct 29 05:25:20.976212 coreos-metadata[754]: Oct 29 05:25:20.975 INFO wrote hostname srv-clpdb.gb1.brightbox.com to /sysroot/etc/hostname Oct 29 05:25:20.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.977656 systemd[1]: Finished sysroot-boot.service. Oct 29 05:25:20.981435 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 29 05:25:20.981621 systemd[1]: Finished flatcar-openstack-hostname.service. Oct 29 05:25:20.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:20.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:21.352240 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 29 05:25:21.366752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (813) Oct 29 05:25:21.366840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 05:25:21.366859 kernel: BTRFS info (device vda6): using free space tree Oct 29 05:25:21.368191 kernel: BTRFS info (device vda6): has skinny extents Oct 29 05:25:21.375125 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 29 05:25:21.377650 systemd[1]: Starting ignition-files.service... 
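The hostname step logged above (flatcar-openstack-hostname via coreos-metadata) fetches /latest/meta-data/hostname from the same metadata service and writes the result into the target root, which is why /sysroot/etc/hostname ends up containing srv-clpdb.gb1.brightbox.com before switch-root. A minimal sketch of that step, with retries and error handling omitted:

    import urllib.request

    URL = "http://169.254.169.254/latest/meta-data/hostname"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:      # destination shown in the log
        f.write(hostname + "\n")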
Oct 29 05:25:21.397668 ignition[833]: INFO : Ignition 2.14.0 Oct 29 05:25:21.397668 ignition[833]: INFO : Stage: files Oct 29 05:25:21.399220 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:21.399220 ignition[833]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:21.399220 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:21.402242 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Oct 29 05:25:21.402242 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 05:25:21.402242 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 05:25:21.406819 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 05:25:21.408015 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 05:25:21.409391 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 05:25:21.409378 unknown[833]: wrote ssh authorized keys file for user: core Oct 29 05:25:21.411392 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 29 05:25:21.411392 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 29 05:25:21.588747 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 05:25:21.706373 systemd-networkd[709]: eth0: Gained IPv6LL Oct 29 05:25:21.790405 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 29 05:25:21.791760 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 29 05:25:21.791760 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 29 05:25:22.044683 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 29 05:25:22.316128 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 05:25:22.317541 ignition[833]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 05:25:22.317541 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 05:25:22.326089 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 05:25:22.326089 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 05:25:22.326089 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 05:25:22.326089 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 05:25:22.326089 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 05:25:22.326089 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 29 05:25:22.570834 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 29 05:25:23.212233 systemd-networkd[709]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d30:24:19ff:fee6:34c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d30:24:19ff:fee6:34c2/64 assigned by NDisc. Oct 29 05:25:23.212248 systemd-networkd[709]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Oct 29 05:25:23.695047 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Oct 29 05:25:23.696994 ignition[833]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 29 05:25:23.704218 ignition[833]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 29 05:25:23.704218 ignition[833]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Oct 29 05:25:23.704218 ignition[833]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 05:25:23.707169 ignition[833]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 05:25:23.707169 ignition[833]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 05:25:23.707169 ignition[833]: INFO : files: files passed Oct 29 05:25:23.707169 ignition[833]: INFO : Ignition finished successfully Oct 29 05:25:23.722206 kernel: kauditd_printk_skb: 28 callbacks suppressed Oct 29 05:25:23.722248 kernel: audit: type=1130 audit(1761715523.711:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.709503 systemd[1]: Finished ignition-files.service. Oct 29 05:25:23.729862 kernel: audit: type=1130 audit(1761715523.724:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.713515 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 29 05:25:23.735863 kernel: audit: type=1131 audit(1761715523.729:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:23.719410 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 29 05:25:23.742180 kernel: audit: type=1130 audit(1761715523.735:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.742246 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 05:25:23.720428 systemd[1]: Starting ignition-quench.service... Oct 29 05:25:23.724228 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 05:25:23.724357 systemd[1]: Finished ignition-quench.service. Oct 29 05:25:23.730118 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 29 05:25:23.736260 systemd[1]: Reached target ignition-complete.target. Oct 29 05:25:23.743939 systemd[1]: Starting initrd-parse-etc.service... Oct 29 05:25:23.763278 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 05:25:23.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.763488 systemd[1]: Finished initrd-parse-etc.service. Oct 29 05:25:23.779504 kernel: audit: type=1130 audit(1761715523.766:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.779541 kernel: audit: type=1131 audit(1761715523.771:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.772079 systemd[1]: Reached target initrd-fs.target. Oct 29 05:25:23.777689 systemd[1]: Reached target initrd.target. Oct 29 05:25:23.778455 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 29 05:25:23.780399 systemd[1]: Starting dracut-pre-pivot.service... Oct 29 05:25:23.796121 systemd[1]: Finished dracut-pre-pivot.service. Oct 29 05:25:23.815512 kernel: audit: type=1130 audit(1761715523.795:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.797888 systemd[1]: Starting initrd-cleanup.service... Oct 29 05:25:23.822964 systemd[1]: Stopped target nss-lookup.target. Oct 29 05:25:23.823700 systemd[1]: Stopped target remote-cryptsetup.target. Oct 29 05:25:23.825079 systemd[1]: Stopped target timers.target. 
Oct 29 05:25:23.826213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 05:25:23.832201 kernel: audit: type=1131 audit(1761715523.826:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.826415 systemd[1]: Stopped dracut-pre-pivot.service. Oct 29 05:25:23.827538 systemd[1]: Stopped target initrd.target. Oct 29 05:25:23.832980 systemd[1]: Stopped target basic.target. Oct 29 05:25:23.834099 systemd[1]: Stopped target ignition-complete.target. Oct 29 05:25:23.835254 systemd[1]: Stopped target ignition-diskful.target. Oct 29 05:25:23.836532 systemd[1]: Stopped target initrd-root-device.target. Oct 29 05:25:23.837741 systemd[1]: Stopped target remote-fs.target. Oct 29 05:25:23.838980 systemd[1]: Stopped target remote-fs-pre.target. Oct 29 05:25:23.840216 systemd[1]: Stopped target sysinit.target. Oct 29 05:25:23.841343 systemd[1]: Stopped target local-fs.target. Oct 29 05:25:23.842541 systemd[1]: Stopped target local-fs-pre.target. Oct 29 05:25:23.843679 systemd[1]: Stopped target swap.target. Oct 29 05:25:23.850842 kernel: audit: type=1131 audit(1761715523.845:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.844758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 05:25:23.844988 systemd[1]: Stopped dracut-pre-mount.service. Oct 29 05:25:23.857604 kernel: audit: type=1131 audit(1761715523.851:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.846164 systemd[1]: Stopped target cryptsetup.target. Oct 29 05:25:23.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.851601 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 05:25:23.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.851826 systemd[1]: Stopped dracut-initqueue.service. Oct 29 05:25:23.852886 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 05:25:23.853100 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 29 05:25:23.858501 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 05:25:23.858705 systemd[1]: Stopped ignition-files.service. Oct 29 05:25:23.860937 systemd[1]: Stopping ignition-mount.service... 
Oct 29 05:25:23.869179 iscsid[715]: iscsid shutting down. Oct 29 05:25:23.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.880947 ignition[871]: INFO : Ignition 2.14.0 Oct 29 05:25:23.880947 ignition[871]: INFO : Stage: umount Oct 29 05:25:23.880947 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 05:25:23.880947 ignition[871]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 05:25:23.880947 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 05:25:23.880947 ignition[871]: INFO : umount: umount passed Oct 29 05:25:23.880947 ignition[871]: INFO : Ignition finished successfully Oct 29 05:25:23.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.873241 systemd[1]: Stopping iscsid.service... Oct 29 05:25:23.873833 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 05:25:23.874079 systemd[1]: Stopped kmod-static-nodes.service. Oct 29 05:25:23.876442 systemd[1]: Stopping sysroot-boot.service... Oct 29 05:25:23.877236 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 05:25:23.877522 systemd[1]: Stopped systemd-udev-trigger.service. Oct 29 05:25:23.878564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 05:25:23.878759 systemd[1]: Stopped dracut-pre-trigger.service. Oct 29 05:25:23.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.882999 systemd[1]: iscsid.service: Deactivated successfully. Oct 29 05:25:23.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.883225 systemd[1]: Stopped iscsid.service. Oct 29 05:25:23.885835 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 05:25:23.885992 systemd[1]: Finished initrd-cleanup.service. Oct 29 05:25:23.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.895060 systemd[1]: ignition-mount.service: Deactivated successfully. 
Oct 29 05:25:23.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.895995 systemd[1]: Stopped ignition-mount.service. Oct 29 05:25:23.898338 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 05:25:23.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.898406 systemd[1]: Stopped ignition-disks.service. Oct 29 05:25:23.900496 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 05:25:23.900553 systemd[1]: Stopped ignition-kargs.service. Oct 29 05:25:23.901946 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 29 05:25:23.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.902009 systemd[1]: Stopped ignition-fetch.service. Oct 29 05:25:23.904974 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 05:25:23.905030 systemd[1]: Stopped ignition-fetch-offline.service. Oct 29 05:25:23.905722 systemd[1]: Stopped target paths.target. Oct 29 05:25:23.906314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 05:25:23.906955 systemd[1]: Stopped systemd-ask-password-console.path. Oct 29 05:25:23.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.907990 systemd[1]: Stopped target slices.target. Oct 29 05:25:23.908523 systemd[1]: Stopped target sockets.target. Oct 29 05:25:23.909181 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 05:25:23.909247 systemd[1]: Closed iscsid.socket. Oct 29 05:25:23.910606 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 05:25:23.910664 systemd[1]: Stopped ignition-setup.service. Oct 29 05:25:23.911917 systemd[1]: Stopping iscsiuio.service... Oct 29 05:25:23.917905 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 05:25:23.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.918481 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 29 05:25:23.918639 systemd[1]: Stopped iscsiuio.service. Oct 29 05:25:23.920752 systemd[1]: Stopped target network.target. Oct 29 05:25:23.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.921905 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Oct 29 05:25:23.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.921961 systemd[1]: Closed iscsiuio.socket. Oct 29 05:25:23.923585 systemd[1]: Stopping systemd-networkd.service... Oct 29 05:25:23.924478 systemd[1]: Stopping systemd-resolved.service... Oct 29 05:25:23.927823 systemd-networkd[709]: eth0: DHCPv6 lease lost Oct 29 05:25:23.945000 audit: BPF prog-id=9 op=UNLOAD Oct 29 05:25:23.929094 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 05:25:23.929248 systemd[1]: Stopped systemd-networkd.service. Oct 29 05:25:23.931213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 05:25:23.931266 systemd[1]: Closed systemd-networkd.socket. Oct 29 05:25:23.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.933039 systemd[1]: Stopping network-cleanup.service... Oct 29 05:25:23.934440 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 05:25:23.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.953000 audit: BPF prog-id=6 op=UNLOAD Oct 29 05:25:23.934522 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 29 05:25:23.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.935236 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 05:25:23.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.935306 systemd[1]: Stopped systemd-sysctl.service. Oct 29 05:25:23.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.937122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 05:25:23.937182 systemd[1]: Stopped systemd-modules-load.service. Oct 29 05:25:23.943436 systemd[1]: Stopping systemd-udevd.service... Oct 29 05:25:23.946489 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 29 05:25:23.948998 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 05:25:23.949147 systemd[1]: Stopped systemd-resolved.service. Oct 29 05:25:23.951096 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 05:25:23.951288 systemd[1]: Stopped systemd-udevd.service. Oct 29 05:25:23.953851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 05:25:23.953954 systemd[1]: Closed systemd-udevd-control.socket. Oct 29 05:25:23.954607 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 05:25:23.954658 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 29 05:25:23.955429 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 05:25:23.955520 systemd[1]: Stopped dracut-pre-udev.service. 
Oct 29 05:25:23.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.956281 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 05:25:23.956337 systemd[1]: Stopped dracut-cmdline.service. Oct 29 05:25:23.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.957592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 05:25:23.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:23.957649 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 29 05:25:23.960975 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 29 05:25:23.983910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 05:25:23.984064 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 29 05:25:23.985759 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 05:25:23.985921 systemd[1]: Stopped network-cleanup.service. Oct 29 05:25:23.987401 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 05:25:23.987539 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 29 05:25:24.126701 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 05:25:24.126896 systemd[1]: Stopped sysroot-boot.service. Oct 29 05:25:24.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:24.128611 systemd[1]: Reached target initrd-switch-root.target. Oct 29 05:25:24.129629 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 05:25:24.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:24.129695 systemd[1]: Stopped initrd-setup-root.service. Oct 29 05:25:24.132179 systemd[1]: Starting initrd-switch-root.service... Oct 29 05:25:24.149141 systemd[1]: Switching root. Oct 29 05:25:24.172205 systemd-journald[202]: Journal stopped Oct 29 05:25:28.069727 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Oct 29 05:25:28.069896 kernel: SELinux: Class mctp_socket not defined in policy. Oct 29 05:25:28.069926 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 29 05:25:28.069946 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 29 05:25:28.069975 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 05:25:28.070018 kernel: SELinux: policy capability open_perms=1 Oct 29 05:25:28.070038 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 05:25:28.070055 kernel: SELinux: policy capability always_check_network=0 Oct 29 05:25:28.070084 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 05:25:28.070108 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 05:25:28.070126 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 05:25:28.070144 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 05:25:28.070171 systemd[1]: Successfully loaded SELinux policy in 59.070ms. Oct 29 05:25:28.070225 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.167ms. Oct 29 05:25:28.070262 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 05:25:28.070289 systemd[1]: Detected virtualization kvm. Oct 29 05:25:28.070309 systemd[1]: Detected architecture x86-64. Oct 29 05:25:28.070344 systemd[1]: Detected first boot. Oct 29 05:25:28.070399 systemd[1]: Hostname set to <srv-clpdb.gb1.brightbox.com>. Oct 29 05:25:28.070421 systemd[1]: Initializing machine ID from VM UUID. Oct 29 05:25:28.070441 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Oct 29 05:25:28.070477 systemd[1]: Populated /etc with preset unit settings. Oct 29 05:25:28.070506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 05:25:28.070536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 05:25:28.070559 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 05:25:28.070581 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 05:25:28.070600 systemd[1]: Stopped initrd-switch-root.service. Oct 29 05:25:28.070630 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 05:25:28.070656 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 29 05:25:28.070682 systemd[1]: Created slice system-addon\x2drun.slice. Oct 29 05:25:28.070718 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 29 05:25:28.070743 systemd[1]: Created slice system-getty.slice. Oct 29 05:25:28.070781 systemd[1]: Created slice system-modprobe.slice. Oct 29 05:25:28.071091 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 29 05:25:28.071119 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 29 05:25:28.071139 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 29 05:25:28.071174 systemd[1]: Created slice user.slice. Oct 29 05:25:28.071209 systemd[1]: Started systemd-ask-password-console.path. Oct 29 05:25:28.071230 systemd[1]: Started systemd-ask-password-wall.path. 
Oct 29 05:25:28.071257 systemd[1]: Set up automount boot.automount. Oct 29 05:25:28.071279 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 29 05:25:28.071304 systemd[1]: Stopped target initrd-switch-root.target. Oct 29 05:25:28.071335 systemd[1]: Stopped target initrd-fs.target. Oct 29 05:25:28.071373 systemd[1]: Stopped target initrd-root-fs.target. Oct 29 05:25:28.071395 systemd[1]: Reached target integritysetup.target. Oct 29 05:25:28.071415 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 05:25:28.071442 systemd[1]: Reached target remote-fs.target. Oct 29 05:25:28.071468 systemd[1]: Reached target slices.target. Oct 29 05:25:28.071488 systemd[1]: Reached target swap.target. Oct 29 05:25:28.071508 systemd[1]: Reached target torcx.target. Oct 29 05:25:28.071534 systemd[1]: Reached target veritysetup.target. Oct 29 05:25:28.071560 systemd[1]: Listening on systemd-coredump.socket. Oct 29 05:25:28.071592 systemd[1]: Listening on systemd-initctl.socket. Oct 29 05:25:28.071614 systemd[1]: Listening on systemd-networkd.socket. Oct 29 05:25:28.071643 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 05:25:28.071665 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 05:25:28.071685 systemd[1]: Listening on systemd-userdbd.socket. Oct 29 05:25:28.071715 systemd[1]: Mounting dev-hugepages.mount... Oct 29 05:25:28.071748 systemd[1]: Mounting dev-mqueue.mount... Oct 29 05:25:28.071789 systemd[1]: Mounting media.mount... Oct 29 05:25:28.071811 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:25:28.071855 systemd[1]: Mounting sys-kernel-debug.mount... Oct 29 05:25:28.071882 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 29 05:25:28.071903 systemd[1]: Mounting tmp.mount... Oct 29 05:25:28.071923 systemd[1]: Starting flatcar-tmpfiles.service... Oct 29 05:25:28.071942 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 05:25:28.071961 systemd[1]: Starting kmod-static-nodes.service... Oct 29 05:25:28.071986 systemd[1]: Starting modprobe@configfs.service... Oct 29 05:25:28.072024 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 05:25:28.072051 systemd[1]: Starting modprobe@drm.service... Oct 29 05:25:28.072082 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 05:25:28.072119 systemd[1]: Starting modprobe@fuse.service... Oct 29 05:25:28.072141 systemd[1]: Starting modprobe@loop.service... Oct 29 05:25:28.072161 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 05:25:28.072191 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 05:25:28.072229 systemd[1]: Stopped systemd-fsck-root.service. Oct 29 05:25:28.072253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 05:25:28.072277 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 05:25:28.072308 kernel: fuse: init (API version 7.34) Oct 29 05:25:28.072370 systemd[1]: Stopped systemd-journald.service. Oct 29 05:25:28.072395 systemd[1]: Starting systemd-journald.service... Oct 29 05:25:28.072415 systemd[1]: Starting systemd-modules-load.service... Oct 29 05:25:28.072435 systemd[1]: Starting systemd-network-generator.service... Oct 29 05:25:28.073460 systemd[1]: Starting systemd-remount-fs.service... Oct 29 05:25:28.073488 systemd[1]: Starting systemd-udev-trigger.service... 
Oct 29 05:25:28.073518 systemd[1]: verity-setup.service: Deactivated successfully. Oct 29 05:25:28.073540 systemd[1]: Stopped verity-setup.service. Oct 29 05:25:28.073560 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:25:28.073592 systemd[1]: Mounted dev-hugepages.mount. Oct 29 05:25:28.073614 systemd[1]: Mounted dev-mqueue.mount. Oct 29 05:25:28.073644 systemd[1]: Mounted media.mount. Oct 29 05:25:28.073664 systemd[1]: Mounted sys-kernel-debug.mount. Oct 29 05:25:28.073683 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 29 05:25:28.073707 systemd[1]: Mounted tmp.mount. Oct 29 05:25:28.073733 systemd[1]: Finished flatcar-tmpfiles.service. Oct 29 05:25:28.073754 systemd[1]: Finished kmod-static-nodes.service. Oct 29 05:25:28.073809 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 05:25:28.073856 systemd[1]: Finished modprobe@configfs.service. Oct 29 05:25:28.073895 kernel: loop: module loaded Oct 29 05:25:28.073916 systemd-journald[985]: Journal started Oct 29 05:25:28.074033 systemd-journald[985]: Runtime Journal (/run/log/journal/2fabbf0100d9401bba3db6f65d3a7826) is 4.7M, max 38.1M, 33.3M free. Oct 29 05:25:24.348000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 05:25:24.419000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 05:25:24.419000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 05:25:24.419000 audit: BPF prog-id=10 op=LOAD Oct 29 05:25:24.419000 audit: BPF prog-id=10 op=UNLOAD Oct 29 05:25:24.420000 audit: BPF prog-id=11 op=LOAD Oct 29 05:25:24.420000 audit: BPF prog-id=11 op=UNLOAD Oct 29 05:25:24.541000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Oct 29 05:25:24.541000 audit[903]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d88c a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 05:25:24.541000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Oct 29 05:25:24.544000 audit[903]: AVC avc: denied { associate } for pid=903 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Oct 29 05:25:24.544000 audit[903]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d965 a2=1ed a3=0 items=2 ppid=886 pid=903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 
05:25:24.544000 audit: CWD cwd="/" Oct 29 05:25:24.544000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:24.544000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:24.544000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Oct 29 05:25:27.788000 audit: BPF prog-id=12 op=LOAD Oct 29 05:25:27.788000 audit: BPF prog-id=3 op=UNLOAD Oct 29 05:25:27.788000 audit: BPF prog-id=13 op=LOAD Oct 29 05:25:27.788000 audit: BPF prog-id=14 op=LOAD Oct 29 05:25:27.788000 audit: BPF prog-id=4 op=UNLOAD Oct 29 05:25:27.788000 audit: BPF prog-id=5 op=UNLOAD Oct 29 05:25:27.789000 audit: BPF prog-id=15 op=LOAD Oct 29 05:25:27.789000 audit: BPF prog-id=12 op=UNLOAD Oct 29 05:25:27.789000 audit: BPF prog-id=16 op=LOAD Oct 29 05:25:27.789000 audit: BPF prog-id=17 op=LOAD Oct 29 05:25:27.789000 audit: BPF prog-id=13 op=UNLOAD Oct 29 05:25:27.789000 audit: BPF prog-id=14 op=UNLOAD Oct 29 05:25:27.791000 audit: BPF prog-id=18 op=LOAD Oct 29 05:25:27.791000 audit: BPF prog-id=15 op=UNLOAD Oct 29 05:25:27.791000 audit: BPF prog-id=19 op=LOAD Oct 29 05:25:27.792000 audit: BPF prog-id=20 op=LOAD Oct 29 05:25:27.792000 audit: BPF prog-id=16 op=UNLOAD Oct 29 05:25:27.792000 audit: BPF prog-id=17 op=UNLOAD Oct 29 05:25:27.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:27.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:27.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:27.805000 audit: BPF prog-id=18 op=UNLOAD Oct 29 05:25:27.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:27.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:27.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:27.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:27.990000 audit: BPF prog-id=21 op=LOAD Oct 29 05:25:27.990000 audit: BPF prog-id=22 op=LOAD Oct 29 05:25:27.990000 audit: BPF prog-id=23 op=LOAD Oct 29 05:25:27.990000 audit: BPF prog-id=19 op=UNLOAD Oct 29 05:25:27.990000 audit: BPF prog-id=20 op=UNLOAD Oct 29 05:25:28.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.065000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 29 05:25:28.065000 audit[985]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe4067a290 a2=4000 a3=7ffe4067a32c items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 05:25:28.065000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 29 05:25:28.076853 systemd[1]: Started systemd-journald.service. Oct 29 05:25:28.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:24.539117 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 05:25:27.784732 systemd[1]: Queued start job for default target multi-user.target. Oct 29 05:25:24.539741 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 29 05:25:27.784758 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 29 05:25:28.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:28.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:24.539799 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 29 05:25:27.794034 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 05:25:24.539916 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 29 05:25:28.076907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 05:25:24.539938 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 29 05:25:28.077100 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 05:25:24.539997 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 29 05:25:28.078181 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 05:25:24.540019 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 29 05:25:28.078416 systemd[1]: Finished modprobe@drm.service. Oct 29 05:25:24.540535 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 29 05:25:24.540604 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 29 05:25:24.540630 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 29 05:25:24.541347 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 29 05:25:28.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:24.541419 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 29 05:25:24.541476 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Oct 29 05:25:24.541503 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 29 05:25:24.541539 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Oct 29 05:25:24.541570 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 29 05:25:28.080935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 05:25:27.203341 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 05:25:28.081113 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 05:25:27.204030 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 05:25:28.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:27.204282 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 05:25:27.205259 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 05:25:27.205375 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 29 05:25:27.205527 /usr/lib/systemd/system-generators/torcx-generator[903]: time="2025-10-29T05:25:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 29 05:25:28.082617 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 05:25:28.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.084868 systemd[1]: Finished modprobe@fuse.service. Oct 29 05:25:28.088556 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 05:25:28.089390 systemd[1]: Finished modprobe@loop.service. Oct 29 05:25:28.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.090439 systemd[1]: Finished systemd-modules-load.service. Oct 29 05:25:28.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.091511 systemd[1]: Finished systemd-network-generator.service. Oct 29 05:25:28.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.092615 systemd[1]: Finished systemd-remount-fs.service. Oct 29 05:25:28.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:28.094503 systemd[1]: Reached target network-pre.target. Oct 29 05:25:28.097536 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 29 05:25:28.104661 systemd[1]: Mounting sys-kernel-config.mount... Oct 29 05:25:28.107877 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 05:25:28.110798 systemd[1]: Starting systemd-hwdb-update.service... Oct 29 05:25:28.117955 systemd[1]: Starting systemd-journal-flush.service... Oct 29 05:25:28.118749 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 05:25:28.120674 systemd[1]: Starting systemd-random-seed.service... Oct 29 05:25:28.121610 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 05:25:28.126128 systemd[1]: Starting systemd-sysctl.service... Oct 29 05:25:28.129747 systemd[1]: Starting systemd-sysusers.service... Oct 29 05:25:28.134834 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 29 05:25:28.136605 systemd[1]: Mounted sys-kernel-config.mount. Oct 29 05:25:28.139768 systemd-journald[985]: Time spent on flushing to /var/log/journal/2fabbf0100d9401bba3db6f65d3a7826 is 57.619ms for 1299 entries. Oct 29 05:25:28.139768 systemd-journald[985]: System Journal (/var/log/journal/2fabbf0100d9401bba3db6f65d3a7826) is 8.0M, max 584.8M, 576.8M free. Oct 29 05:25:28.225034 systemd-journald[985]: Received client request to flush runtime journal. Oct 29 05:25:28.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.156964 systemd[1]: Finished systemd-random-seed.service. Oct 29 05:25:28.157848 systemd[1]: Reached target first-boot-complete.target. Oct 29 05:25:28.164473 systemd[1]: Finished systemd-sysctl.service. Oct 29 05:25:28.177904 systemd[1]: Finished systemd-sysusers.service. Oct 29 05:25:28.226627 systemd[1]: Finished systemd-journal-flush.service. Oct 29 05:25:28.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.264338 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 05:25:28.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.266971 systemd[1]: Starting systemd-udev-settle.service... Oct 29 05:25:28.282056 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 29 05:25:28.738092 systemd[1]: Finished systemd-hwdb-update.service. 
Oct 29 05:25:28.745715 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 29 05:25:28.745850 kernel: audit: type=1130 audit(1761715528.738:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.744000 audit: BPF prog-id=24 op=LOAD Oct 29 05:25:28.747047 systemd[1]: Starting systemd-udevd.service... Oct 29 05:25:28.745000 audit: BPF prog-id=25 op=LOAD Oct 29 05:25:28.745000 audit: BPF prog-id=7 op=UNLOAD Oct 29 05:25:28.745000 audit: BPF prog-id=8 op=UNLOAD Oct 29 05:25:28.750129 kernel: audit: type=1334 audit(1761715528.744:147): prog-id=24 op=LOAD Oct 29 05:25:28.750198 kernel: audit: type=1334 audit(1761715528.745:148): prog-id=25 op=LOAD Oct 29 05:25:28.750235 kernel: audit: type=1334 audit(1761715528.745:149): prog-id=7 op=UNLOAD Oct 29 05:25:28.750284 kernel: audit: type=1334 audit(1761715528.745:150): prog-id=8 op=UNLOAD Oct 29 05:25:28.777794 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Oct 29 05:25:28.808127 systemd[1]: Started systemd-udevd.service. Oct 29 05:25:28.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.821331 kernel: audit: type=1130 audit(1761715528.808:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.820262 systemd[1]: Starting systemd-networkd.service... Oct 29 05:25:28.818000 audit: BPF prog-id=26 op=LOAD Oct 29 05:25:28.824798 kernel: audit: type=1334 audit(1761715528.818:152): prog-id=26 op=LOAD Oct 29 05:25:28.837024 kernel: audit: type=1334 audit(1761715528.828:153): prog-id=27 op=LOAD Oct 29 05:25:28.837118 kernel: audit: type=1334 audit(1761715528.829:154): prog-id=28 op=LOAD Oct 29 05:25:28.837164 kernel: audit: type=1334 audit(1761715528.829:155): prog-id=29 op=LOAD Oct 29 05:25:28.828000 audit: BPF prog-id=27 op=LOAD Oct 29 05:25:28.829000 audit: BPF prog-id=28 op=LOAD Oct 29 05:25:28.829000 audit: BPF prog-id=29 op=LOAD Oct 29 05:25:28.836173 systemd[1]: Starting systemd-userdbd.service... Oct 29 05:25:28.887902 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 29 05:25:28.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:28.898529 systemd[1]: Started systemd-userdbd.service. Oct 29 05:25:28.990186 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 29 05:25:29.009270 systemd-networkd[1029]: lo: Link UP Oct 29 05:25:29.009282 systemd-networkd[1029]: lo: Gained carrier Oct 29 05:25:29.010157 systemd-networkd[1029]: Enumeration completed Oct 29 05:25:29.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 29 05:25:29.010295 systemd[1]: Started systemd-networkd.service. Oct 29 05:25:29.010307 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 05:25:29.015968 systemd-networkd[1029]: eth0: Link UP Oct 29 05:25:29.015981 systemd-networkd[1029]: eth0: Gained carrier Oct 29 05:25:29.030993 systemd-networkd[1029]: eth0: DHCPv4 address 10.230.52.194/30, gateway 10.230.52.193 acquired from 10.230.52.193 Oct 29 05:25:29.043812 kernel: ACPI: button: Power Button [PWRF] Oct 29 05:25:29.081850 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 05:25:29.118000 audit[1016]: AVC avc: denied { confidentiality } for pid=1016 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 29 05:25:29.118000 audit[1016]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ad17b28230 a1=338ec a2=7f483832abc5 a3=5 items=110 ppid=1014 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 05:25:29.118000 audit: CWD cwd="/" Oct 29 05:25:29.118000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=1 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=2 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=3 name=(null) inode=14232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=4 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=5 name=(null) inode=14233 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=6 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=7 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=8 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=9 name=(null) inode=14235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=10 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=11 name=(null) inode=14236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=12 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=13 name=(null) inode=14237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=14 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=15 name=(null) inode=14238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=16 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=17 name=(null) inode=14239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=18 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=19 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=20 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=21 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=22 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=23 name=(null) inode=14242 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=24 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.150790 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 29 05:25:29.173541 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 29 05:25:29.174893 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 29 05:25:29.175467 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 29 05:25:29.118000 
audit: PATH item=25 name=(null) inode=14243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=26 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=27 name=(null) inode=14244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=28 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=29 name=(null) inode=14245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=30 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=31 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=32 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=33 name=(null) inode=14247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=34 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=35 name=(null) inode=14248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=36 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=37 name=(null) inode=14249 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=38 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=39 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=40 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=41 name=(null) inode=14251 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=42 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=43 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=44 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=45 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=46 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=47 name=(null) inode=14254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=48 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=49 name=(null) inode=14255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=50 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=51 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=52 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=53 name=(null) inode=14257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=55 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=56 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=57 name=(null) inode=14259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=58 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=59 name=(null) inode=14260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=60 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=61 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=62 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=63 name=(null) inode=14262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=64 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=65 name=(null) inode=14263 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=66 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=67 name=(null) inode=14264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=68 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=69 name=(null) inode=14265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=70 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=71 name=(null) inode=14266 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=72 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=73 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: 
PATH item=74 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=75 name=(null) inode=14268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=76 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=77 name=(null) inode=14269 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=78 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=79 name=(null) inode=14270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=80 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=81 name=(null) inode=14271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=82 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=83 name=(null) inode=14272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=84 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=85 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=86 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=87 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=88 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=89 name=(null) inode=14275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=90 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=91 name=(null) inode=14276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=92 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=93 name=(null) inode=14277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=94 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=95 name=(null) inode=14278 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=96 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=97 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=98 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=99 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=100 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=101 name=(null) inode=14281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=102 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=103 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=104 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=105 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=106 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=107 name=(null) inode=14284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PATH item=109 name=(null) inode=14289 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 05:25:29.118000 audit: PROCTITLE proctitle="(udev-worker)" Oct 29 05:25:29.152989 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 05:25:29.301720 systemd[1]: Finished systemd-udev-settle.service. Oct 29 05:25:29.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.304513 systemd[1]: Starting lvm2-activation-early.service... Oct 29 05:25:29.329307 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 05:25:29.359409 systemd[1]: Finished lvm2-activation-early.service. Oct 29 05:25:29.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.360275 systemd[1]: Reached target cryptsetup.target. Oct 29 05:25:29.362643 systemd[1]: Starting lvm2-activation.service... Oct 29 05:25:29.368231 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 05:25:29.395202 systemd[1]: Finished lvm2-activation.service. Oct 29 05:25:29.396048 systemd[1]: Reached target local-fs-pre.target. Oct 29 05:25:29.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.396665 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 05:25:29.396709 systemd[1]: Reached target local-fs.target. Oct 29 05:25:29.397334 systemd[1]: Reached target machines.target. Oct 29 05:25:29.399840 systemd[1]: Starting ldconfig.service... Oct 29 05:25:29.401189 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 05:25:29.401252 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:29.403354 systemd[1]: Starting systemd-boot-update.service... Oct 29 05:25:29.405657 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 29 05:25:29.408588 systemd[1]: Starting systemd-machine-id-commit.service... Oct 29 05:25:29.415944 systemd[1]: Starting systemd-sysext.service... Oct 29 05:25:29.427217 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Oct 29 05:25:29.429068 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Oct 29 05:25:29.511204 systemd[1]: Unmounting usr-share-oem.mount... Oct 29 05:25:29.572922 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 29 05:25:29.573206 systemd[1]: Unmounted usr-share-oem.mount. Oct 29 05:25:29.575940 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 29 05:25:29.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.597482 kernel: loop0: detected capacity change from 0 to 224512 Oct 29 05:25:29.602080 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 05:25:29.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.604866 systemd[1]: Finished systemd-machine-id-commit.service. Oct 29 05:25:29.631921 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 05:25:29.649888 kernel: loop1: detected capacity change from 0 to 224512 Oct 29 05:25:29.668152 (sd-sysext)[1058]: Using extensions 'kubernetes'. Oct 29 05:25:29.669254 (sd-sysext)[1058]: Merged extensions into '/usr'. Oct 29 05:25:29.673288 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Oct 29 05:25:29.673288 systemd-fsck[1055]: /dev/vda1: 790 files, 120772/258078 clusters Oct 29 05:25:29.675598 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 29 05:25:29.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.678427 systemd[1]: Mounting boot.mount... Oct 29 05:25:29.709349 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:25:29.711860 systemd[1]: Mounting usr-share-oem.mount... Oct 29 05:25:29.717473 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 05:25:29.719455 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 05:25:29.722384 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 05:25:29.725796 systemd[1]: Starting modprobe@loop.service... Oct 29 05:25:29.726507 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 05:25:29.726704 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:29.726920 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:25:29.735718 systemd[1]: Mounted boot.mount. Oct 29 05:25:29.736750 systemd[1]: Mounted usr-share-oem.mount. Oct 29 05:25:29.737869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 05:25:29.738086 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 05:25:29.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 05:25:29.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.739290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 05:25:29.739464 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 05:25:29.740610 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 05:25:29.740809 systemd[1]: Finished modprobe@loop.service. Oct 29 05:25:29.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.742028 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 05:25:29.742182 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 05:25:29.745018 systemd[1]: Finished systemd-sysext.service. Oct 29 05:25:29.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:29.749245 systemd[1]: Starting ensure-sysext.service... Oct 29 05:25:29.752107 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 29 05:25:29.761274 systemd[1]: Reloading. Oct 29 05:25:29.785829 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 29 05:25:29.795853 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 05:25:29.806484 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 05:25:29.884476 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2025-10-29T05:25:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 05:25:29.884527 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2025-10-29T05:25:29Z" level=info msg="torcx already run" Oct 29 05:25:30.023903 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 05:25:30.037047 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Oct 29 05:25:30.037291 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 05:25:30.064584 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 05:25:30.153000 audit: BPF prog-id=30 op=LOAD Oct 29 05:25:30.153000 audit: BPF prog-id=26 op=UNLOAD Oct 29 05:25:30.156000 audit: BPF prog-id=31 op=LOAD Oct 29 05:25:30.157000 audit: BPF prog-id=27 op=UNLOAD Oct 29 05:25:30.157000 audit: BPF prog-id=32 op=LOAD Oct 29 05:25:30.157000 audit: BPF prog-id=33 op=LOAD Oct 29 05:25:30.157000 audit: BPF prog-id=28 op=UNLOAD Oct 29 05:25:30.157000 audit: BPF prog-id=29 op=UNLOAD Oct 29 05:25:30.160000 audit: BPF prog-id=34 op=LOAD Oct 29 05:25:30.160000 audit: BPF prog-id=21 op=UNLOAD Oct 29 05:25:30.160000 audit: BPF prog-id=35 op=LOAD Oct 29 05:25:30.161000 audit: BPF prog-id=36 op=LOAD Oct 29 05:25:30.161000 audit: BPF prog-id=22 op=UNLOAD Oct 29 05:25:30.161000 audit: BPF prog-id=23 op=UNLOAD Oct 29 05:25:30.161000 audit: BPF prog-id=37 op=LOAD Oct 29 05:25:30.162000 audit: BPF prog-id=38 op=LOAD Oct 29 05:25:30.162000 audit: BPF prog-id=24 op=UNLOAD Oct 29 05:25:30.162000 audit: BPF prog-id=25 op=UNLOAD Oct 29 05:25:30.167483 systemd[1]: Finished ldconfig.service. Oct 29 05:25:30.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.169072 systemd[1]: Finished systemd-boot-update.service. Oct 29 05:25:30.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.172103 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 29 05:25:30.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.179148 systemd[1]: Starting audit-rules.service... Oct 29 05:25:30.181812 systemd[1]: Starting clean-ca-certificates.service... Oct 29 05:25:30.185867 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 29 05:25:30.188000 audit: BPF prog-id=39 op=LOAD Oct 29 05:25:30.193000 audit: BPF prog-id=40 op=LOAD Oct 29 05:25:30.190506 systemd[1]: Starting systemd-resolved.service... Oct 29 05:25:30.196940 systemd[1]: Starting systemd-timesyncd.service... Oct 29 05:25:30.199460 systemd[1]: Starting systemd-update-utmp.service... Oct 29 05:25:30.201844 systemd[1]: Finished clean-ca-certificates.service. Oct 29 05:25:30.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.207290 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
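[editor's note] The unit-file warnings above flag deprecated directives. A hedged sketch of the modern equivalents — the drop-in path and values are illustrative assumptions, and the warning only disappears once the vendor unit itself stops carrying the old names:

  # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (assumed path)
  [Service]
  CPUWeight=100        # cgroup v2 counterpart of the default CPUShares=1024
  MemoryMax=infinity   # counterpart of MemoryLimit=; or a concrete cap such as 128M

  # docker.socket, line 8: change
  #   ListenStream=/var/run/docker.sock
  # to
  #   ListenStream=/run/docker.sock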
Oct 29 05:25:30.211000 audit[1139]: SYSTEM_BOOT pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.217252 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.220483 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 05:25:30.223032 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 05:25:30.226703 systemd[1]: Starting modprobe@loop.service... Oct 29 05:25:30.227459 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.227743 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:30.228041 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 05:25:30.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.233114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 05:25:30.233338 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 05:25:30.234941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 05:25:30.235119 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 05:25:30.238591 systemd[1]: Finished systemd-update-utmp.service. Oct 29 05:25:30.242387 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.244179 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 05:25:30.247195 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 05:25:30.249097 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.249430 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:30.249653 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 29 05:25:30.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.256298 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 05:25:30.256533 systemd[1]: Finished modprobe@loop.service. Oct 29 05:25:30.258180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.260279 systemd[1]: Starting modprobe@drm.service... Oct 29 05:25:30.261890 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.262073 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:30.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.266142 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 29 05:25:30.266967 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 05:25:30.268354 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 05:25:30.268555 systemd[1]: Finished modprobe@drm.service. Oct 29 05:25:30.271882 systemd[1]: Finished ensure-sysext.service. Oct 29 05:25:30.275113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 05:25:30.275289 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 05:25:30.276126 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.285326 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 29 05:25:30.287954 systemd[1]: Starting systemd-update-done.service... 
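[editor's note] The 'kubernetes' system extension merged into /usr earlier by sd-sysext, and ensure-sysext.service above, can be inspected at runtime. A brief sketch, assuming the stock systemd-sysext layout (the extension name comes from the log; the search paths are the defaults, not verified on this host):

  # Show which extensions are merged and from which image they came
  systemd-sysext status

  # Extension images are looked up in /etc/extensions/, /run/extensions/
  # and /var/lib/extensions/, e.g. kubernetes.raw or a kubernetes/ tree,
  # and overlaid read-only onto /usr when merged.

  # Re-merge after adding or removing an extension image
  systemd-sysext refresh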
Oct 29 05:25:30.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.290925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 05:25:30.291144 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 05:25:30.292045 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 05:25:30.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 05:25:30.300221 systemd[1]: Finished systemd-update-done.service. Oct 29 05:25:30.322000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 29 05:25:30.322000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd99e9d190 a2=420 a3=0 items=0 ppid=1133 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 05:25:30.322000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 29 05:25:30.323268 augenrules[1161]: No rules Oct 29 05:25:30.324114 systemd[1]: Finished audit-rules.service. Oct 29 05:25:30.345004 systemd[1]: Started systemd-timesyncd.service. Oct 29 05:25:30.346025 systemd[1]: Reached target time-set.target. Oct 29 05:25:30.351531 systemd-resolved[1136]: Positive Trust Anchors: Oct 29 05:25:30.351945 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 05:25:30.352096 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 05:25:30.360015 systemd-resolved[1136]: Using system hostname 'srv-clpdb.gb1.brightbox.com'. Oct 29 05:25:30.362810 systemd[1]: Started systemd-resolved.service. Oct 29 05:25:30.363588 systemd[1]: Reached target network.target. Oct 29 05:25:30.364287 systemd[1]: Reached target nss-lookup.target. Oct 29 05:25:30.365013 systemd[1]: Reached target sysinit.target. Oct 29 05:25:30.365749 systemd[1]: Started motdgen.path. Oct 29 05:25:30.366376 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 29 05:25:30.367350 systemd[1]: Started logrotate.timer. Oct 29 05:25:30.368069 systemd[1]: Started mdadm.timer. Oct 29 05:25:30.368625 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 29 05:25:30.369319 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
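[editor's note] The PROCTITLE field in the audit record above is the executed command line, hex-encoded with NUL separators; decoding it shows what augenrules ran before reporting "No rules". Reproducible in a shell:

  $ echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '
  /sbin/auditctl -R /etc/audit/audit.rules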
Oct 29 05:25:30.369390 systemd[1]: Reached target paths.target. Oct 29 05:25:30.383350 systemd[1]: Reached target timers.target. Oct 29 05:25:30.384621 systemd[1]: Listening on dbus.socket. Oct 29 05:25:30.387087 systemd[1]: Starting docker.socket... Oct 29 05:25:30.391369 systemd[1]: Listening on sshd.socket. Oct 29 05:25:30.392125 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:30.392719 systemd[1]: Listening on docker.socket. Oct 29 05:25:30.393499 systemd[1]: Reached target sockets.target. Oct 29 05:25:30.394115 systemd[1]: Reached target basic.target. Oct 29 05:25:30.394859 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.394922 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 05:25:30.396641 systemd[1]: Starting containerd.service... Oct 29 05:25:30.398911 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 29 05:25:30.401681 systemd[1]: Starting dbus.service... Oct 29 05:25:30.405169 systemd[1]: Starting enable-oem-cloudinit.service... Oct 29 05:25:30.409225 systemd[1]: Starting extend-filesystems.service... Oct 29 05:25:30.412983 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 29 05:25:30.417124 systemd[1]: Starting motdgen.service... Oct 29 05:25:30.419940 systemd[1]: Starting prepare-helm.service... Oct 29 05:25:30.423445 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 29 05:25:30.427729 systemd[1]: Starting sshd-keygen.service... Oct 29 05:25:30.436044 systemd[1]: Starting systemd-logind.service... Oct 29 05:25:30.436730 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 05:25:30.436924 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 05:25:30.438086 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 05:25:30.439143 systemd[1]: Starting update-engine.service... Oct 29 05:25:30.443926 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 29 05:25:30.448366 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 05:25:30.448641 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 29 05:25:30.471853 jq[1174]: false Oct 29 05:25:30.473455 jq[1186]: true Oct 29 05:25:30.472456 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 05:25:30.472721 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 29 05:25:30.478973 tar[1189]: linux-amd64/LICENSE Oct 29 05:25:30.480593 systemd-timesyncd[1138]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org). Oct 29 05:25:30.486397 tar[1189]: linux-amd64/helm Oct 29 05:25:30.480689 systemd-timesyncd[1138]: Initial clock synchronization to Wed 2025-10-29 05:25:30.792673 UTC. Oct 29 05:25:30.497242 jq[1193]: true Oct 29 05:25:30.510893 extend-filesystems[1175]: Found loop1 Oct 29 05:25:30.510917 dbus-daemon[1171]: [system] SELinux support is enabled Oct 29 05:25:30.511153 systemd[1]: Started dbus.service. 
Oct 29 05:25:30.514918 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 05:25:30.514979 systemd[1]: Reached target system-config.target. Oct 29 05:25:30.515683 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 05:25:30.515744 systemd[1]: Reached target user-config.target. Oct 29 05:25:30.528650 extend-filesystems[1175]: Found vda Oct 29 05:25:30.528650 extend-filesystems[1175]: Found vda1 Oct 29 05:25:30.531198 dbus-daemon[1171]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1029 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 29 05:25:30.531704 extend-filesystems[1175]: Found vda2 Oct 29 05:25:30.531704 extend-filesystems[1175]: Found vda3 Oct 29 05:25:30.537884 extend-filesystems[1175]: Found usr Oct 29 05:25:30.537884 extend-filesystems[1175]: Found vda4 Oct 29 05:25:30.537884 extend-filesystems[1175]: Found vda6 Oct 29 05:25:30.537884 extend-filesystems[1175]: Found vda7 Oct 29 05:25:30.537884 extend-filesystems[1175]: Found vda9 Oct 29 05:25:30.537884 extend-filesystems[1175]: Checking size of /dev/vda9 Oct 29 05:25:30.536966 systemd[1]: Starting systemd-hostnamed.service... Oct 29 05:25:30.553965 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 05:25:30.554215 systemd[1]: Finished motdgen.service. Oct 29 05:25:30.574139 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:25:30.574190 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 05:25:30.605007 update_engine[1185]: I1029 05:25:30.604453 1185 main.cc:92] Flatcar Update Engine starting Oct 29 05:25:30.610579 extend-filesystems[1175]: Resized partition /dev/vda9 Oct 29 05:25:30.614149 systemd[1]: Started update-engine.service. Oct 29 05:25:30.614620 update_engine[1185]: I1029 05:25:30.614480 1185 update_check_scheduler.cc:74] Next update check in 3m13s Oct 29 05:25:30.617498 systemd[1]: Started locksmithd.service. Oct 29 05:25:30.626625 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Oct 29 05:25:30.640820 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Oct 29 05:25:30.659067 bash[1222]: Updated "/home/core/.ssh/authorized_keys" Oct 29 05:25:30.659946 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 29 05:25:30.696003 env[1191]: time="2025-10-29T05:25:30.695895691Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 29 05:25:30.709629 systemd-logind[1183]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 05:25:30.709684 systemd-logind[1183]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 05:25:30.711572 systemd-logind[1183]: New seat seat0. Oct 29 05:25:30.720042 systemd[1]: Started systemd-logind.service. Oct 29 05:25:30.757464 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 29 05:25:30.764498 dbus-daemon[1171]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 29 05:25:30.764669 systemd[1]: Started systemd-hostnamed.service. 
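[editor's note] The kernel line above shows vda9 being grown on-line from 1617920 to 15121403 4k blocks (roughly 6.2 GiB to 57.7 GiB), which is what extend-filesystems.service drives. A hedged sketch of the equivalent manual steps — device names are taken from the log, and the growpart step is an assumption about how the partition itself was enlarged:

  # grow partition 9 of /dev/vda to fill the disk (assumes cloud-utils growpart)
  growpart /dev/vda 9

  # grow the mounted ext4 filesystem on-line, as resize2fs 1.46.5 does here
  resize2fs /dev/vda9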
Oct 29 05:25:30.766445 dbus-daemon[1171]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1207 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 29 05:25:30.773432 systemd[1]: Starting polkit.service... Oct 29 05:25:30.778029 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 05:25:30.778029 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 29 05:25:30.778029 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.774960262Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.775194552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777196459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777230487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777481753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777509077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777528304Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777543978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.777668096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 29 05:25:30.787478 env[1191]: time="2025-10-29T05:25:30.781752576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 29 05:25:30.779172 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 05:25:30.788362 extend-filesystems[1175]: Resized filesystem in /dev/vda9 Oct 29 05:25:30.789280 env[1191]: time="2025-10-29T05:25:30.786966687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 05:25:30.789280 env[1191]: time="2025-10-29T05:25:30.787005585Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 29 05:25:30.789280 env[1191]: time="2025-10-29T05:25:30.787079177Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 29 05:25:30.789280 env[1191]: time="2025-10-29T05:25:30.787110589Z" level=info msg="metadata content store policy set" policy=shared Oct 29 05:25:30.779436 systemd[1]: Finished extend-filesystems.service. Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792474629Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792514909Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792537729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792598322Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792622705Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792646752Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792665908Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792686009Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792712879Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792743649Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792762447Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792821923Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.792977673Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 29 05:25:30.797544 env[1191]: time="2025-10-29T05:25:30.793119871Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793469382Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793519565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793545077Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793623154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793658239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793683610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793708263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793733520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.793760625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.795855902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.795882351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798218 env[1191]: time="2025-10-29T05:25:30.795905222Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 29 05:25:30.798967 env[1191]: time="2025-10-29T05:25:30.798864293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798967 env[1191]: time="2025-10-29T05:25:30.798913703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798967 env[1191]: time="2025-10-29T05:25:30.798935705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 29 05:25:30.798967 env[1191]: time="2025-10-29T05:25:30.798954227Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 29 05:25:30.799180 env[1191]: time="2025-10-29T05:25:30.798974630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 29 05:25:30.799180 env[1191]: time="2025-10-29T05:25:30.798991502Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 29 05:25:30.799180 env[1191]: time="2025-10-29T05:25:30.799036914Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 29 05:25:30.799180 env[1191]: time="2025-10-29T05:25:30.799096915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 29 05:25:30.799615 env[1191]: time="2025-10-29T05:25:30.799528228Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 29 05:25:30.801989 env[1191]: time="2025-10-29T05:25:30.799622027Z" level=info msg="Connect containerd service" Oct 29 05:25:30.801989 env[1191]: time="2025-10-29T05:25:30.799687095Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 29 05:25:30.808053 polkitd[1233]: Started polkitd version 121 Oct 29 05:25:30.809842 env[1191]: time="2025-10-29T05:25:30.809789326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 05:25:30.810043 env[1191]: time="2025-10-29T05:25:30.809949734Z" level=info msg="Start subscribing containerd event" Oct 29 05:25:30.810107 env[1191]: time="2025-10-29T05:25:30.810071671Z" level=info msg="Start recovering state" Oct 29 05:25:30.810204 env[1191]: time="2025-10-29T05:25:30.810177811Z" level=info msg="Start event monitor" Oct 29 05:25:30.810264 env[1191]: time="2025-10-29T05:25:30.810215007Z" level=info msg="Start snapshots syncer" Oct 29 05:25:30.810264 env[1191]: time="2025-10-29T05:25:30.810237531Z" level=info msg="Start cni network conf syncer for default" Oct 29 05:25:30.810264 env[1191]: time="2025-10-29T05:25:30.810250206Z" level=info msg="Start streaming server" Oct 29 05:25:30.817636 env[1191]: 
time="2025-10-29T05:25:30.817599436Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 05:25:30.817746 env[1191]: time="2025-10-29T05:25:30.817682292Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 05:25:30.821828 env[1191]: time="2025-10-29T05:25:30.819973133Z" level=info msg="containerd successfully booted in 0.136383s" Oct 29 05:25:30.820096 systemd[1]: Started containerd.service. Oct 29 05:25:30.835538 polkitd[1233]: Loading rules from directory /etc/polkit-1/rules.d Oct 29 05:25:30.835650 polkitd[1233]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 29 05:25:30.842129 polkitd[1233]: Finished loading, compiling and executing 2 rules Oct 29 05:25:30.842850 dbus-daemon[1171]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 29 05:25:30.843050 systemd[1]: Started polkit.service. Oct 29 05:25:30.844609 polkitd[1233]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 29 05:25:30.865978 systemd-hostnamed[1207]: Hostname set to (static) Oct 29 05:25:30.987947 systemd-networkd[1029]: eth0: Gained IPv6LL Oct 29 05:25:30.990752 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 29 05:25:30.991926 systemd[1]: Reached target network-online.target. Oct 29 05:25:30.994957 systemd[1]: Starting kubelet.service... Oct 29 05:25:31.444864 tar[1189]: linux-amd64/README.md Oct 29 05:25:31.454846 systemd[1]: Finished prepare-helm.service. Oct 29 05:25:31.555058 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 05:25:31.823979 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 05:25:31.858746 systemd[1]: Finished sshd-keygen.service. Oct 29 05:25:31.862468 systemd[1]: Starting issuegen.service... Oct 29 05:25:31.872573 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 05:25:31.872858 systemd[1]: Finished issuegen.service. Oct 29 05:25:31.875638 systemd[1]: Starting systemd-user-sessions.service... Oct 29 05:25:31.886604 systemd[1]: Finished systemd-user-sessions.service. Oct 29 05:25:31.889574 systemd[1]: Started getty@tty1.service. Oct 29 05:25:31.892935 systemd[1]: Started serial-getty@ttyS0.service. Oct 29 05:25:31.894312 systemd[1]: Reached target getty.target. Oct 29 05:25:32.389142 systemd[1]: Started kubelet.service. Oct 29 05:25:32.494999 systemd-networkd[1029]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d30:24:19ff:fee6:34c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d30:24:19ff:fee6:34c2/64 assigned by NDisc. Oct 29 05:25:32.495012 systemd-networkd[1029]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 29 05:25:33.046870 kubelet[1264]: E1029 05:25:33.046759 1264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:25:33.049182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:25:33.049429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 05:25:33.049918 systemd[1]: kubelet.service: Consumed 1.117s CPU time. 
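[editor's note] The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init or kubeadm join, so the failure and the scheduled restarts below are expected until the node is bootstrapped. For orientation only, a minimal hand-written KubeletConfiguration would look roughly like this (an illustrative sketch, not the file kubeadm later generates):

  # /var/lib/kubelet/config.yaml  (minimal sketch)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    anonymous:
      enabled: false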
Oct 29 05:25:37.577397 coreos-metadata[1170]: Oct 29 05:25:37.577 WARN failed to locate config-drive, using the metadata service API instead Oct 29 05:25:37.633608 coreos-metadata[1170]: Oct 29 05:25:37.633 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 29 05:25:37.660339 coreos-metadata[1170]: Oct 29 05:25:37.660 INFO Fetch successful Oct 29 05:25:37.660658 coreos-metadata[1170]: Oct 29 05:25:37.660 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 29 05:25:37.688840 coreos-metadata[1170]: Oct 29 05:25:37.688 INFO Fetch successful Oct 29 05:25:37.690394 unknown[1170]: wrote ssh authorized keys file for user: core Oct 29 05:25:37.717570 update-ssh-keys[1273]: Updated "/home/core/.ssh/authorized_keys" Oct 29 05:25:37.718261 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 29 05:25:37.718829 systemd[1]: Reached target multi-user.target. Oct 29 05:25:37.721061 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 29 05:25:37.731878 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 29 05:25:37.732124 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 29 05:25:37.732685 systemd[1]: Startup finished in 1.124s (kernel) + 6.600s (initrd) + 13.453s (userspace) = 21.178s. Oct 29 05:25:40.491664 systemd[1]: Created slice system-sshd.slice. Oct 29 05:25:40.494185 systemd[1]: Started sshd@0-10.230.52.194:22-147.75.109.163:59582.service. Oct 29 05:25:41.433316 sshd[1276]: Accepted publickey for core from 147.75.109.163 port 59582 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:25:41.436894 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:25:41.451680 systemd[1]: Created slice user-500.slice. Oct 29 05:25:41.453680 systemd[1]: Starting user-runtime-dir@500.service... Oct 29 05:25:41.462549 systemd-logind[1183]: New session 1 of user core. Oct 29 05:25:41.469183 systemd[1]: Finished user-runtime-dir@500.service. Oct 29 05:25:41.471868 systemd[1]: Starting user@500.service... Oct 29 05:25:41.478368 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:25:41.592830 systemd[1279]: Queued start job for default target default.target. Oct 29 05:25:41.594774 systemd[1279]: Reached target paths.target. Oct 29 05:25:41.595009 systemd[1279]: Reached target sockets.target. Oct 29 05:25:41.595209 systemd[1279]: Reached target timers.target. Oct 29 05:25:41.595384 systemd[1279]: Reached target basic.target. Oct 29 05:25:41.595618 systemd[1279]: Reached target default.target. Oct 29 05:25:41.595748 systemd[1]: Started user@500.service. Oct 29 05:25:41.596544 systemd[1279]: Startup finished in 108ms. Oct 29 05:25:41.597378 systemd[1]: Started session-1.scope. Oct 29 05:25:42.240735 systemd[1]: Started sshd@1-10.230.52.194:22-147.75.109.163:59592.service. Oct 29 05:25:43.067578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 05:25:43.067982 systemd[1]: Stopped kubelet.service. Oct 29 05:25:43.068050 systemd[1]: kubelet.service: Consumed 1.117s CPU time. Oct 29 05:25:43.070911 systemd[1]: Starting kubelet.service... 
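[editor's note] coreos-metadata above falls back from the missing config drive to the EC2-style metadata service and installs the fetched key for the core user. The same two requests can be reproduced by hand (URLs taken from the log; the instance must be able to reach the link-local address):

  curl -s http://169.254.169.254/latest/meta-data/public-keys
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
      >> /home/core/.ssh/authorized_keys   # roughly what update-ssh-keys does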
Oct 29 05:25:43.151643 sshd[1288]: Accepted publickey for core from 147.75.109.163 port 59592 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:25:43.154487 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:25:43.161737 systemd-logind[1183]: New session 2 of user core. Oct 29 05:25:43.162818 systemd[1]: Started session-2.scope. Oct 29 05:25:43.238612 systemd[1]: Started kubelet.service. Oct 29 05:25:43.276689 systemd[1]: Started sshd@2-10.230.52.194:22-178.128.241.223:57444.service. Oct 29 05:25:43.349958 kubelet[1295]: E1029 05:25:43.349300 1295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:25:43.353981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:25:43.354214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 05:25:43.362371 sshd[1302]: Invalid user debian from 178.128.241.223 port 57444 Oct 29 05:25:43.384592 sshd[1302]: pam_faillock(sshd:auth): User unknown Oct 29 05:25:43.385812 sshd[1302]: pam_unix(sshd:auth): check pass; user unknown Oct 29 05:25:43.385974 sshd[1302]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=178.128.241.223 Oct 29 05:25:43.386858 sshd[1302]: pam_faillock(sshd:auth): User unknown Oct 29 05:25:43.786184 sshd[1288]: pam_unix(sshd:session): session closed for user core Oct 29 05:25:43.789964 systemd[1]: sshd@1-10.230.52.194:22-147.75.109.163:59592.service: Deactivated successfully. Oct 29 05:25:43.790993 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 05:25:43.791736 systemd-logind[1183]: Session 2 logged out. Waiting for processes to exit. Oct 29 05:25:43.792742 systemd-logind[1183]: Removed session 2. Oct 29 05:25:43.935626 systemd[1]: Started sshd@3-10.230.52.194:22-147.75.109.163:59600.service. Oct 29 05:25:44.843487 sshd[1307]: Accepted publickey for core from 147.75.109.163 port 59600 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:25:44.846104 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:25:44.853533 systemd[1]: Started session-3.scope. Oct 29 05:25:44.854413 systemd-logind[1183]: New session 3 of user core. Oct 29 05:25:45.220451 sshd[1302]: Failed password for invalid user debian from 178.128.241.223 port 57444 ssh2 Oct 29 05:25:45.310673 sshd[1302]: Connection closed by invalid user debian 178.128.241.223 port 57444 [preauth] Oct 29 05:25:45.312466 systemd[1]: sshd@2-10.230.52.194:22-178.128.241.223:57444.service: Deactivated successfully. Oct 29 05:25:45.467537 sshd[1307]: pam_unix(sshd:session): session closed for user core Oct 29 05:25:45.471874 systemd[1]: sshd@3-10.230.52.194:22-147.75.109.163:59600.service: Deactivated successfully. Oct 29 05:25:45.472838 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 05:25:45.473753 systemd-logind[1183]: Session 3 logged out. Waiting for processes to exit. Oct 29 05:25:45.475329 systemd-logind[1183]: Removed session 3. Oct 29 05:25:45.617137 systemd[1]: Started sshd@4-10.230.52.194:22-147.75.109.163:59616.service. 
Oct 29 05:25:46.524895 sshd[1314]: Accepted publickey for core from 147.75.109.163 port 59616 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:25:46.527078 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:25:46.534297 systemd-logind[1183]: New session 4 of user core. Oct 29 05:25:46.535681 systemd[1]: Started session-4.scope. Oct 29 05:25:47.155360 sshd[1314]: pam_unix(sshd:session): session closed for user core Oct 29 05:25:47.159441 systemd-logind[1183]: Session 4 logged out. Waiting for processes to exit. Oct 29 05:25:47.160107 systemd[1]: sshd@4-10.230.52.194:22-147.75.109.163:59616.service: Deactivated successfully. Oct 29 05:25:47.160959 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 05:25:47.162025 systemd-logind[1183]: Removed session 4. Oct 29 05:25:47.304190 systemd[1]: Started sshd@5-10.230.52.194:22-147.75.109.163:59632.service. Oct 29 05:25:48.207844 sshd[1320]: Accepted publickey for core from 147.75.109.163 port 59632 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:25:48.209841 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:25:48.217593 systemd[1]: Started session-5.scope. Oct 29 05:25:48.218868 systemd-logind[1183]: New session 5 of user core. Oct 29 05:25:48.703692 sudo[1323]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 05:25:48.704114 sudo[1323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 05:25:48.741286 systemd[1]: Starting docker.service... Oct 29 05:25:48.800251 env[1333]: time="2025-10-29T05:25:48.800038177Z" level=info msg="Starting up" Oct 29 05:25:48.804006 env[1333]: time="2025-10-29T05:25:48.803971910Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 05:25:48.804123 env[1333]: time="2025-10-29T05:25:48.804095320Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 05:25:48.804280 env[1333]: time="2025-10-29T05:25:48.804242103Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 05:25:48.804417 env[1333]: time="2025-10-29T05:25:48.804389309Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 05:25:48.808700 env[1333]: time="2025-10-29T05:25:48.808670711Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 05:25:48.808862 env[1333]: time="2025-10-29T05:25:48.808834090Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 05:25:48.809009 env[1333]: time="2025-10-29T05:25:48.808963911Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 05:25:48.809149 env[1333]: time="2025-10-29T05:25:48.809104830Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 05:25:48.839409 env[1333]: time="2025-10-29T05:25:48.839368002Z" level=info msg="Loading containers: start." Oct 29 05:25:48.997081 kernel: Initializing XFRM netlink socket Oct 29 05:25:49.052837 env[1333]: time="2025-10-29T05:25:49.052705317Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 29 05:25:49.144233 systemd-networkd[1029]: docker0: Link UP Oct 29 05:25:49.161793 env[1333]: time="2025-10-29T05:25:49.161741531Z" level=info msg="Loading containers: done." Oct 29 05:25:49.183333 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1913730202-merged.mount: Deactivated successfully. Oct 29 05:25:49.186434 env[1333]: time="2025-10-29T05:25:49.186383964Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 05:25:49.186723 env[1333]: time="2025-10-29T05:25:49.186686468Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 29 05:25:49.186968 env[1333]: time="2025-10-29T05:25:49.186936023Z" level=info msg="Daemon has completed initialization" Oct 29 05:25:49.203203 systemd[1]: Started docker.service. Oct 29 05:25:49.212985 env[1333]: time="2025-10-29T05:25:49.212915012Z" level=info msg="API listen on /run/docker.sock" Oct 29 05:25:50.467570 env[1191]: time="2025-10-29T05:25:50.467409628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 29 05:25:51.540787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354637641.mount: Deactivated successfully. Oct 29 05:25:53.567747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 05:25:53.568227 systemd[1]: Stopped kubelet.service. Oct 29 05:25:53.571725 systemd[1]: Starting kubelet.service... Oct 29 05:25:53.742027 systemd[1]: Started kubelet.service. Oct 29 05:25:53.848491 kubelet[1461]: E1029 05:25:53.848330 1461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:25:53.851008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:25:53.851224 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
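[editor's note] The docker daemon hint above about the default bridge address can be applied through the daemon configuration instead of a command-line flag. A hedged example for /etc/docker/daemon.json (the stock location; the address below is an arbitrary illustration), followed by a daemon restart:

  {
    "bip": "192.168.200.1/24"
  }

  systemctl restart docker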
Oct 29 05:25:53.880152 env[1191]: time="2025-10-29T05:25:53.878699110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:53.880152 env[1191]: time="2025-10-29T05:25:53.880094251Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:53.882563 env[1191]: time="2025-10-29T05:25:53.882529269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:53.884855 env[1191]: time="2025-10-29T05:25:53.884815149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:53.886231 env[1191]: time="2025-10-29T05:25:53.886191545Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 29 05:25:53.888741 env[1191]: time="2025-10-29T05:25:53.888698780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 29 05:25:56.430283 env[1191]: time="2025-10-29T05:25:56.430075999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:56.433273 env[1191]: time="2025-10-29T05:25:56.433211350Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:56.436317 env[1191]: time="2025-10-29T05:25:56.436276545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:56.439192 env[1191]: time="2025-10-29T05:25:56.439157919Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:56.440702 env[1191]: time="2025-10-29T05:25:56.440635415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 29 05:25:56.442079 env[1191]: time="2025-10-29T05:25:56.442031864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 29 05:25:58.660939 env[1191]: time="2025-10-29T05:25:58.660808583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:58.663836 env[1191]: time="2025-10-29T05:25:58.663764296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:58.668058 env[1191]: 
time="2025-10-29T05:25:58.668012046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:58.670530 env[1191]: time="2025-10-29T05:25:58.670496877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:25:58.671858 env[1191]: time="2025-10-29T05:25:58.671798304Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 29 05:25:58.673711 env[1191]: time="2025-10-29T05:25:58.673666390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 29 05:26:00.435392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614424608.mount: Deactivated successfully. Oct 29 05:26:01.520084 env[1191]: time="2025-10-29T05:26:01.519820050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:01.523592 env[1191]: time="2025-10-29T05:26:01.523097210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:01.524727 env[1191]: time="2025-10-29T05:26:01.524690853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:01.525558 env[1191]: time="2025-10-29T05:26:01.525520350Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 29 05:26:01.526861 env[1191]: time="2025-10-29T05:26:01.526825010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:01.527122 env[1191]: time="2025-10-29T05:26:01.527089266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 29 05:26:02.316684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299093046.mount: Deactivated successfully. Oct 29 05:26:02.530885 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
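[editor's note] The PullImage entries in this stretch are containerd's CRI plugin fetching the Kubernetes control-plane images (v1.32.9, CoreDNS v1.11.3, pause 3.10, etcd 3.5.16-0). The same pulls can be driven manually over the CRI socket; a small sketch, assuming crictl is installed and using the containerd socket logged earlier:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-proxy:v1.32.9
  crictl images   # list what the CRI plugin now has cached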
Oct 29 05:26:03.972638 env[1191]: time="2025-10-29T05:26:03.972523477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:03.975641 env[1191]: time="2025-10-29T05:26:03.975601262Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:03.978258 env[1191]: time="2025-10-29T05:26:03.978222828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:03.980807 env[1191]: time="2025-10-29T05:26:03.980757997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:03.982221 env[1191]: time="2025-10-29T05:26:03.982129553Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 29 05:26:03.983983 env[1191]: time="2025-10-29T05:26:03.983950021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 05:26:04.067988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 29 05:26:04.068709 systemd[1]: Stopped kubelet.service. Oct 29 05:26:04.072314 systemd[1]: Starting kubelet.service... Oct 29 05:26:04.307468 systemd[1]: Started kubelet.service. Oct 29 05:26:04.370468 kubelet[1474]: E1029 05:26:04.370409 1474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 05:26:04.372996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 05:26:04.373218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 05:26:04.818780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199865898.mount: Deactivated successfully. 
Oct 29 05:26:04.824813 env[1191]: time="2025-10-29T05:26:04.824730745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:04.826520 env[1191]: time="2025-10-29T05:26:04.826475365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:04.828391 env[1191]: time="2025-10-29T05:26:04.828356591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:04.830158 env[1191]: time="2025-10-29T05:26:04.830122005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:04.831122 env[1191]: time="2025-10-29T05:26:04.831075688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 29 05:26:04.831965 env[1191]: time="2025-10-29T05:26:04.831930328Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 29 05:26:05.560254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203352495.mount: Deactivated successfully. Oct 29 05:26:09.832125 env[1191]: time="2025-10-29T05:26:09.831855533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:09.835629 env[1191]: time="2025-10-29T05:26:09.835564422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:09.839232 env[1191]: time="2025-10-29T05:26:09.839187271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:09.842810 env[1191]: time="2025-10-29T05:26:09.842681420Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:09.844013 env[1191]: time="2025-10-29T05:26:09.843943214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 29 05:26:13.991008 systemd[1]: Stopped kubelet.service. Oct 29 05:26:13.996370 systemd[1]: Starting kubelet.service... Oct 29 05:26:14.031941 systemd[1]: Reloading. 
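The pulls logged above (kube-scheduler, kube-proxy, CoreDNS, pause and etcd, each recorded by tag and by digest) make up the standard control-plane image set, consistent with a kubeadm bootstrap being in progress. As a hedged aside, the same set can be pre-fetched explicitly, with the release taken from the image tags above:

    # Pre-pull the control-plane images for the release seen in the tags above
    kubeadm config images pull --kubernetes-version v1.32.9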
Oct 29 05:26:14.192947 /usr/lib/systemd/system-generators/torcx-generator[1523]: time="2025-10-29T05:26:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 05:26:14.194837 /usr/lib/systemd/system-generators/torcx-generator[1523]: time="2025-10-29T05:26:14Z" level=info msg="torcx already run" Oct 29 05:26:14.299377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 05:26:14.299761 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 05:26:14.329355 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 05:26:14.514330 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 05:26:14.514796 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 05:26:14.515495 systemd[1]: Stopped kubelet.service. Oct 29 05:26:14.520131 systemd[1]: Starting kubelet.service... Oct 29 05:26:14.684208 systemd[1]: Started kubelet.service. Oct 29 05:26:14.828185 kubelet[1574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 05:26:14.828185 kubelet[1574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 05:26:14.828185 kubelet[1574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
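The reload above surfaces three straightforward unit-file cleanups; the replacement directives systemd is asking for would look roughly like this (the numeric values are assumptions, the originals are not shown in this log):

    # /usr/lib/systemd/system/locksmithd.service, lines 8-9
    [Service]
    CPUWeight=100     # replaces CPUShares=; the scales differ, 100 is the default weight
    MemoryMax=128M    # replaces MemoryLimit=

    # /run/systemd/system/docker.socket, line 8
    [Socket]
    ListenStream=/run/docker.sock   # was a path under the legacy /var/run/

The kubelet flag deprecations that follow point the same way: --container-runtime-endpoint and --volume-plugin-dir have KubeletConfiguration equivalents (containerRuntimeEndpoint, volumePluginDir), and --pod-infra-container-image becomes unnecessary once the image garbage collector reads the sandbox image from the CRI, as the warning itself notes.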
Oct 29 05:26:14.829626 kubelet[1574]: I1029 05:26:14.828295 1574 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 05:26:15.437239 kubelet[1574]: I1029 05:26:15.437172 1574 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 05:26:15.437239 kubelet[1574]: I1029 05:26:15.437214 1574 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 05:26:15.437719 kubelet[1574]: I1029 05:26:15.437688 1574 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 05:26:15.528803 kubelet[1574]: I1029 05:26:15.528741 1574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 05:26:15.529237 kubelet[1574]: E1029 05:26:15.528862 1574 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.52.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:15.548186 kubelet[1574]: E1029 05:26:15.548130 1574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 05:26:15.548186 kubelet[1574]: I1029 05:26:15.548187 1574 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 05:26:15.554105 kubelet[1574]: I1029 05:26:15.554061 1574 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 05:26:15.555553 kubelet[1574]: I1029 05:26:15.555472 1574 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 05:26:15.555834 kubelet[1574]: I1029 05:26:15.555545 1574 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-clpdb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 05:26:15.556099 kubelet[1574]: I1029 05:26:15.555858 1574 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 05:26:15.556099 kubelet[1574]: I1029 05:26:15.555874 1574 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 05:26:15.556233 kubelet[1574]: I1029 05:26:15.556092 1574 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:26:15.560091 kubelet[1574]: I1029 05:26:15.560058 1574 kubelet.go:446] "Attempting to sync node with API server" Oct 29 05:26:15.560195 kubelet[1574]: I1029 05:26:15.560118 1574 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 05:26:15.560195 kubelet[1574]: I1029 05:26:15.560160 1574 kubelet.go:352] "Adding apiserver pod source" Oct 29 05:26:15.560195 kubelet[1574]: I1029 05:26:15.560182 1574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 05:26:15.567201 update_engine[1185]: I1029 05:26:15.567075 1185 update_attempter.cc:509] Updating boot flags... 
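The HardEvictionThresholds embedded in the NodeConfig dump above are the kubelet's stock hard-eviction defaults; written as KubeletConfiguration fields, with the values read directly off that JSON, they are:

    # Same thresholds as KubeletConfiguration fields; the 0.1/0.05/0.15 fractions above map to percentages
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"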
Oct 29 05:26:15.579445 kubelet[1574]: W1029 05:26:15.578260 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.52.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:15.579445 kubelet[1574]: E1029 05:26:15.578963 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.52.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:15.579445 kubelet[1574]: W1029 05:26:15.579159 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.52.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-clpdb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:15.579445 kubelet[1574]: E1029 05:26:15.579217 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.52.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-clpdb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:15.585217 kubelet[1574]: I1029 05:26:15.585192 1574 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 05:26:15.586152 kubelet[1574]: I1029 05:26:15.586122 1574 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 05:26:15.589801 kubelet[1574]: W1029 05:26:15.588218 1574 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 29 05:26:15.600218 kubelet[1574]: I1029 05:26:15.600191 1574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 05:26:15.600404 kubelet[1574]: I1029 05:26:15.600382 1574 server.go:1287] "Started kubelet" Oct 29 05:26:15.600909 kubelet[1574]: I1029 05:26:15.600851 1574 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 05:26:15.608377 kubelet[1574]: I1029 05:26:15.608289 1574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 05:26:15.609217 kubelet[1574]: I1029 05:26:15.609192 1574 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 05:26:15.615190 kubelet[1574]: E1029 05:26:15.613757 1574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.52.194:6443/api/v1/namespaces/default/events\": dial tcp 10.230.52.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-clpdb.gb1.brightbox.com.1872defb66ee816a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-clpdb.gb1.brightbox.com,UID:srv-clpdb.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-clpdb.gb1.brightbox.com,},FirstTimestamp:2025-10-29 05:26:15.600349546 +0000 UTC m=+0.908475358,LastTimestamp:2025-10-29 05:26:15.600349546 +0000 UTC m=+0.908475358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-clpdb.gb1.brightbox.com,}" Oct 29 05:26:15.617155 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Oct 29 05:26:15.620975 kubelet[1574]: I1029 05:26:15.620948 1574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 05:26:15.625120 kubelet[1574]: I1029 05:26:15.625095 1574 server.go:479] "Adding debug handlers to kubelet server" Oct 29 05:26:15.627567 kubelet[1574]: I1029 05:26:15.627531 1574 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 05:26:15.627872 kubelet[1574]: E1029 05:26:15.627838 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-clpdb.gb1.brightbox.com\" not found" Oct 29 05:26:15.628315 kubelet[1574]: I1029 05:26:15.628287 1574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 05:26:15.628399 kubelet[1574]: I1029 05:26:15.628368 1574 reconciler.go:26] "Reconciler: start to sync state" Oct 29 05:26:15.628705 kubelet[1574]: I1029 05:26:15.628677 1574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 05:26:15.633848 kubelet[1574]: E1029 05:26:15.628886 1574 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 05:26:15.637807 kubelet[1574]: W1029 05:26:15.637749 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.52.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:15.637976 kubelet[1574]: E1029 05:26:15.637946 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.52.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:15.638237 kubelet[1574]: E1029 05:26:15.638187 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-clpdb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="200ms" Oct 29 05:26:15.643737 kubelet[1574]: I1029 05:26:15.643710 1574 factory.go:221] Registration of the systemd container factory successfully Oct 29 05:26:15.644067 kubelet[1574]: I1029 05:26:15.644036 1574 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 05:26:15.646056 kubelet[1574]: I1029 05:26:15.646031 1574 factory.go:221] Registration of the containerd container factory successfully Oct 29 05:26:15.698631 kubelet[1574]: I1029 05:26:15.696237 1574 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 05:26:15.698834 kubelet[1574]: I1029 05:26:15.698808 1574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 05:26:15.698982 kubelet[1574]: I1029 05:26:15.698960 1574 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:26:15.700840 kubelet[1574]: I1029 05:26:15.700818 1574 policy_none.go:49] "None policy: Start" Oct 29 05:26:15.700991 kubelet[1574]: I1029 05:26:15.700967 1574 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 05:26:15.701132 kubelet[1574]: I1029 05:26:15.701111 1574 state_mem.go:35] "Initializing new in-memory state store" Oct 29 05:26:15.724013 systemd[1]: Created slice kubepods.slice. Oct 29 05:26:15.728602 kubelet[1574]: E1029 05:26:15.728555 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-clpdb.gb1.brightbox.com\" not found" Oct 29 05:26:15.739623 kubelet[1574]: I1029 05:26:15.739572 1574 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 05:26:15.758291 kubelet[1574]: I1029 05:26:15.757322 1574 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 05:26:15.758291 kubelet[1574]: I1029 05:26:15.757397 1574 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 05:26:15.758291 kubelet[1574]: I1029 05:26:15.757442 1574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
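Every API call above is refused because nothing is listening on 10.230.52.194:6443 yet; the lease controller keeps retrying and its interval backs off (200ms here, rising to 1.6s further down) until the static control-plane pods come up. Once the API server answers, the lease it is trying to create can be checked with a standard kubectl query, using the namespace and node name from the error above:

    kubectl -n kube-node-lease get lease srv-clpdb.gb1.brightbox.com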
Oct 29 05:26:15.758291 kubelet[1574]: I1029 05:26:15.757459 1574 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 05:26:15.758291 kubelet[1574]: E1029 05:26:15.757532 1574 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 05:26:15.774766 kubelet[1574]: W1029 05:26:15.774653 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.52.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:15.774766 kubelet[1574]: E1029 05:26:15.774765 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.52.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:15.779236 systemd[1]: Created slice kubepods-burstable.slice. Oct 29 05:26:15.829171 kubelet[1574]: E1029 05:26:15.829061 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-clpdb.gb1.brightbox.com\" not found" Oct 29 05:26:15.839365 kubelet[1574]: E1029 05:26:15.839314 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-clpdb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="400ms" Oct 29 05:26:15.843480 systemd[1]: Created slice kubepods-besteffort.slice. Oct 29 05:26:15.858550 kubelet[1574]: E1029 05:26:15.858511 1574 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 05:26:15.871460 kubelet[1574]: I1029 05:26:15.871417 1574 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 05:26:15.871969 kubelet[1574]: I1029 05:26:15.871946 1574 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 05:26:15.872167 kubelet[1574]: I1029 05:26:15.872101 1574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 05:26:15.873452 kubelet[1574]: I1029 05:26:15.873425 1574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 05:26:15.876703 kubelet[1574]: E1029 05:26:15.876669 1574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 05:26:15.876834 kubelet[1574]: E1029 05:26:15.876794 1574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-clpdb.gb1.brightbox.com\" not found" Oct 29 05:26:15.977611 kubelet[1574]: I1029 05:26:15.976852 1574 kubelet_node_status.go:75] "Attempting to register node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:15.977611 kubelet[1574]: E1029 05:26:15.977315 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.070914 systemd[1]: Created slice kubepods-burstable-pod9154b199af3c3a15521caab849c59d98.slice. 
Oct 29 05:26:16.079158 kubelet[1574]: E1029 05:26:16.079002 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.084795 systemd[1]: Created slice kubepods-burstable-pod52513821253a1717e117d6c48569a598.slice. Oct 29 05:26:16.092707 kubelet[1574]: E1029 05:26:16.092580 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.095696 systemd[1]: Created slice kubepods-burstable-pode265b1a32e4022c37b185246ed4846c6.slice. Oct 29 05:26:16.097677 kubelet[1574]: E1029 05:26:16.097639 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.131490 kubelet[1574]: I1029 05:26:16.131402 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9154b199af3c3a15521caab849c59d98-kubeconfig\") pod \"kube-scheduler-srv-clpdb.gb1.brightbox.com\" (UID: \"9154b199af3c3a15521caab849c59d98\") " pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.131738 kubelet[1574]: I1029 05:26:16.131700 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52513821253a1717e117d6c48569a598-ca-certs\") pod \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" (UID: \"52513821253a1717e117d6c48569a598\") " pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.131943 kubelet[1574]: I1029 05:26:16.131914 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52513821253a1717e117d6c48569a598-usr-share-ca-certificates\") pod \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" (UID: \"52513821253a1717e117d6c48569a598\") " pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.132089 kubelet[1574]: I1029 05:26:16.132061 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-k8s-certs\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.132265 kubelet[1574]: I1029 05:26:16.132241 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52513821253a1717e117d6c48569a598-k8s-certs\") pod \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" (UID: \"52513821253a1717e117d6c48569a598\") " pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.132410 kubelet[1574]: I1029 05:26:16.132386 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-ca-certs\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 
05:26:16.132584 kubelet[1574]: I1029 05:26:16.132546 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-flexvolume-dir\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.132762 kubelet[1574]: I1029 05:26:16.132736 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-kubeconfig\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.132939 kubelet[1574]: I1029 05:26:16.132912 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.180682 kubelet[1574]: I1029 05:26:16.180644 1574 kubelet_node_status.go:75] "Attempting to register node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.181292 kubelet[1574]: E1029 05:26:16.181202 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.240930 kubelet[1574]: E1029 05:26:16.240727 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-clpdb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="800ms" Oct 29 05:26:16.381792 env[1191]: time="2025-10-29T05:26:16.381664788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-clpdb.gb1.brightbox.com,Uid:9154b199af3c3a15521caab849c59d98,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:16.395032 env[1191]: time="2025-10-29T05:26:16.394986415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-clpdb.gb1.brightbox.com,Uid:52513821253a1717e117d6c48569a598,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:16.398969 env[1191]: time="2025-10-29T05:26:16.398907939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-clpdb.gb1.brightbox.com,Uid:e265b1a32e4022c37b185246ed4846c6,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:16.586103 kubelet[1574]: I1029 05:26:16.586053 1574 kubelet_node_status.go:75] "Attempting to register node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.586585 kubelet[1574]: E1029 05:26:16.586537 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:16.627321 kubelet[1574]: W1029 05:26:16.627146 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.230.52.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-clpdb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:16.627321 kubelet[1574]: E1029 05:26:16.627244 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.52.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-clpdb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:17.016106 kubelet[1574]: W1029 05:26:17.015760 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.52.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:17.016106 kubelet[1574]: E1029 05:26:17.015896 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.52.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:17.042099 kubelet[1574]: E1029 05:26:17.042030 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-clpdb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.194:6443: connect: connection refused" interval="1.6s" Oct 29 05:26:17.044577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957084835.mount: Deactivated successfully. Oct 29 05:26:17.054534 env[1191]: time="2025-10-29T05:26:17.054480501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.057135 env[1191]: time="2025-10-29T05:26:17.057094281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.058105 env[1191]: time="2025-10-29T05:26:17.058057153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.059909 env[1191]: time="2025-10-29T05:26:17.059875190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.062381 env[1191]: time="2025-10-29T05:26:17.062334846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.068868 env[1191]: time="2025-10-29T05:26:17.068734947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.083511 env[1191]: time="2025-10-29T05:26:17.083473459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 
05:26:17.085512 env[1191]: time="2025-10-29T05:26:17.085478624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.087650 env[1191]: time="2025-10-29T05:26:17.087616727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.091481 env[1191]: time="2025-10-29T05:26:17.091444059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.092422 env[1191]: time="2025-10-29T05:26:17.092386651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.097741 env[1191]: time="2025-10-29T05:26:17.097699523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:17.107648 kubelet[1574]: W1029 05:26:17.106264 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.52.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:17.107648 kubelet[1574]: E1029 05:26:17.106361 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.52.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:17.135490 kubelet[1574]: W1029 05:26:17.135335 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.52.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.52.194:6443: connect: connection refused Oct 29 05:26:17.135490 kubelet[1574]: E1029 05:26:17.135421 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.52.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:17.142024 env[1191]: time="2025-10-29T05:26:17.141916862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:17.142294 env[1191]: time="2025-10-29T05:26:17.142240096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:17.142489 env[1191]: time="2025-10-29T05:26:17.142435539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:17.142844 env[1191]: time="2025-10-29T05:26:17.142745016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:17.143082 env[1191]: time="2025-10-29T05:26:17.142896575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:17.143252 env[1191]: time="2025-10-29T05:26:17.143076212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:17.143429 env[1191]: time="2025-10-29T05:26:17.143359172Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0ac668622fc7de94d08e56a8dcf749a79ef36ff95a1fc951c20e47592031ebb pid=1648 runtime=io.containerd.runc.v2 Oct 29 05:26:17.143700 env[1191]: time="2025-10-29T05:26:17.143640461Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f65e5ce0c76a3ac1e796c0c26b08797c3cf8eccb16cd7098cd433f0abc394571 pid=1637 runtime=io.containerd.runc.v2 Oct 29 05:26:17.146197 env[1191]: time="2025-10-29T05:26:17.146108895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:17.146375 env[1191]: time="2025-10-29T05:26:17.146333525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:17.148175 env[1191]: time="2025-10-29T05:26:17.148102873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:17.148731 env[1191]: time="2025-10-29T05:26:17.148669041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13e48a21ed15c512b8613dc00c3c2c8da8ca1d31361816ab2bdeb73bc536610b pid=1642 runtime=io.containerd.runc.v2 Oct 29 05:26:17.174874 systemd[1]: Started cri-containerd-13e48a21ed15c512b8613dc00c3c2c8da8ca1d31361816ab2bdeb73bc536610b.scope. Oct 29 05:26:17.211476 systemd[1]: Started cri-containerd-a0ac668622fc7de94d08e56a8dcf749a79ef36ff95a1fc951c20e47592031ebb.scope. Oct 29 05:26:17.213328 systemd[1]: Started cri-containerd-f65e5ce0c76a3ac1e796c0c26b08797c3cf8eccb16cd7098cd433f0abc394571.scope. 
Oct 29 05:26:17.322121 env[1191]: time="2025-10-29T05:26:17.322031688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-clpdb.gb1.brightbox.com,Uid:52513821253a1717e117d6c48569a598,Namespace:kube-system,Attempt:0,} returns sandbox id \"13e48a21ed15c512b8613dc00c3c2c8da8ca1d31361816ab2bdeb73bc536610b\"" Oct 29 05:26:17.324832 env[1191]: time="2025-10-29T05:26:17.324505216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-clpdb.gb1.brightbox.com,Uid:9154b199af3c3a15521caab849c59d98,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0ac668622fc7de94d08e56a8dcf749a79ef36ff95a1fc951c20e47592031ebb\"" Oct 29 05:26:17.328326 env[1191]: time="2025-10-29T05:26:17.328287138Z" level=info msg="CreateContainer within sandbox \"13e48a21ed15c512b8613dc00c3c2c8da8ca1d31361816ab2bdeb73bc536610b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 05:26:17.328760 env[1191]: time="2025-10-29T05:26:17.328722182Z" level=info msg="CreateContainer within sandbox \"a0ac668622fc7de94d08e56a8dcf749a79ef36ff95a1fc951c20e47592031ebb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 05:26:17.345135 env[1191]: time="2025-10-29T05:26:17.345061183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-clpdb.gb1.brightbox.com,Uid:e265b1a32e4022c37b185246ed4846c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f65e5ce0c76a3ac1e796c0c26b08797c3cf8eccb16cd7098cd433f0abc394571\"" Oct 29 05:26:17.348601 env[1191]: time="2025-10-29T05:26:17.348563968Z" level=info msg="CreateContainer within sandbox \"f65e5ce0c76a3ac1e796c0c26b08797c3cf8eccb16cd7098cd433f0abc394571\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 05:26:17.352751 env[1191]: time="2025-10-29T05:26:17.352692118Z" level=info msg="CreateContainer within sandbox \"a0ac668622fc7de94d08e56a8dcf749a79ef36ff95a1fc951c20e47592031ebb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1a9b417ae72e8590e8551b6cf8b9651f82a48649d1f08d69935f6ba675be766d\"" Oct 29 05:26:17.353959 env[1191]: time="2025-10-29T05:26:17.353924628Z" level=info msg="StartContainer for \"1a9b417ae72e8590e8551b6cf8b9651f82a48649d1f08d69935f6ba675be766d\"" Oct 29 05:26:17.360874 env[1191]: time="2025-10-29T05:26:17.360816937Z" level=info msg="CreateContainer within sandbox \"13e48a21ed15c512b8613dc00c3c2c8da8ca1d31361816ab2bdeb73bc536610b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cd65ad229a380ddd70823969123d02d0e8d4bed0005ed494b720f2ddf47dd84\"" Oct 29 05:26:17.361596 env[1191]: time="2025-10-29T05:26:17.361561250Z" level=info msg="StartContainer for \"0cd65ad229a380ddd70823969123d02d0e8d4bed0005ed494b720f2ddf47dd84\"" Oct 29 05:26:17.370574 env[1191]: time="2025-10-29T05:26:17.370511719Z" level=info msg="CreateContainer within sandbox \"f65e5ce0c76a3ac1e796c0c26b08797c3cf8eccb16cd7098cd433f0abc394571\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3818bebe8fadd1559ae171de266fc95e08a82f59b5efd64f9f98fb0ed85178ac\"" Oct 29 05:26:17.371209 env[1191]: time="2025-10-29T05:26:17.371175492Z" level=info msg="StartContainer for \"3818bebe8fadd1559ae171de266fc95e08a82f59b5efd64f9f98fb0ed85178ac\"" Oct 29 05:26:17.388289 systemd[1]: Started cri-containerd-1a9b417ae72e8590e8551b6cf8b9651f82a48649d1f08d69935f6ba675be766d.scope. 
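The sandbox and container ids returned above can be inspected on the node through the CRI; a sketch, assuming crictl is installed and that containerd is listening on its stock socket path (the endpoint is not shown in this log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    # container id prefix for kube-apiserver, from the CreateContainer result above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs 0cd65ad229a38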
Oct 29 05:26:17.397749 kubelet[1574]: I1029 05:26:17.396968 1574 kubelet_node_status.go:75] "Attempting to register node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:17.397749 kubelet[1574]: E1029 05:26:17.397700 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.52.194:6443/api/v1/nodes\": dial tcp 10.230.52.194:6443: connect: connection refused" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:17.419948 systemd[1]: Started cri-containerd-0cd65ad229a380ddd70823969123d02d0e8d4bed0005ed494b720f2ddf47dd84.scope. Oct 29 05:26:17.432361 systemd[1]: Started cri-containerd-3818bebe8fadd1559ae171de266fc95e08a82f59b5efd64f9f98fb0ed85178ac.scope. Oct 29 05:26:17.524551 env[1191]: time="2025-10-29T05:26:17.524493698Z" level=info msg="StartContainer for \"3818bebe8fadd1559ae171de266fc95e08a82f59b5efd64f9f98fb0ed85178ac\" returns successfully" Oct 29 05:26:17.541817 env[1191]: time="2025-10-29T05:26:17.541703881Z" level=info msg="StartContainer for \"0cd65ad229a380ddd70823969123d02d0e8d4bed0005ed494b720f2ddf47dd84\" returns successfully" Oct 29 05:26:17.552287 env[1191]: time="2025-10-29T05:26:17.552248325Z" level=info msg="StartContainer for \"1a9b417ae72e8590e8551b6cf8b9651f82a48649d1f08d69935f6ba675be766d\" returns successfully" Oct 29 05:26:17.651238 kubelet[1574]: E1029 05:26:17.649955 1574 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.52.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.52.194:6443: connect: connection refused" logger="UnhandledError" Oct 29 05:26:17.777653 kubelet[1574]: E1029 05:26:17.777613 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:17.783082 kubelet[1574]: E1029 05:26:17.783045 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:17.788113 kubelet[1574]: E1029 05:26:17.788086 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:18.790180 kubelet[1574]: E1029 05:26:18.790124 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:18.791146 kubelet[1574]: E1029 05:26:18.791116 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:18.791411 kubelet[1574]: E1029 05:26:18.791386 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:19.001237 kubelet[1574]: I1029 05:26:19.001198 1574 kubelet_node_status.go:75] "Attempting to register node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:19.792732 kubelet[1574]: E1029 05:26:19.792675 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.197423 kubelet[1574]: E1029 05:26:20.197313 1574 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-clpdb.gb1.brightbox.com\" not found" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.272778 kubelet[1574]: I1029 05:26:20.272730 1574 kubelet_node_status.go:78] "Successfully registered node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.328640 kubelet[1574]: I1029 05:26:20.328482 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.338012 kubelet[1574]: E1029 05:26:20.337961 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-clpdb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.338236 kubelet[1574]: I1029 05:26:20.338198 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.340499 kubelet[1574]: E1029 05:26:20.340457 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.340499 kubelet[1574]: I1029 05:26:20.340494 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.343149 kubelet[1574]: E1029 05:26:20.343111 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.567692 kubelet[1574]: I1029 05:26:20.567651 1574 apiserver.go:52] "Watching apiserver" Oct 29 05:26:20.629051 kubelet[1574]: I1029 05:26:20.628982 1574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 05:26:20.910691 kubelet[1574]: I1029 05:26:20.910533 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:20.913577 kubelet[1574]: E1029 05:26:20.913546 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:22.242199 systemd[1]: Reloading. Oct 29 05:26:22.342026 /usr/lib/systemd/system-generators/torcx-generator[1882]: time="2025-10-29T05:26:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 05:26:22.342092 /usr/lib/systemd/system-generators/torcx-generator[1882]: time="2025-10-29T05:26:22Z" level=info msg="torcx already run" Oct 29 05:26:22.457017 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
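The "no PriorityClass with name system-node-critical was found" failures on the mirror pods are transient: system-node-critical and system-cluster-critical are built-in PriorityClasses that the API server creates during its own bootstrap, so the mirror pods are accepted on a later sync (the node itself has already registered successfully above). Once the control plane responds, this can be confirmed with:

    kubectl get priorityclass system-node-critical system-cluster-critical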
Oct 29 05:26:22.457324 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 05:26:22.488059 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 05:26:22.646964 systemd[1]: Stopping kubelet.service... Oct 29 05:26:22.665695 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 05:26:22.666091 systemd[1]: Stopped kubelet.service. Oct 29 05:26:22.666222 systemd[1]: kubelet.service: Consumed 1.302s CPU time. Oct 29 05:26:22.669451 systemd[1]: Starting kubelet.service... Oct 29 05:26:24.069364 systemd[1]: Started kubelet.service. Oct 29 05:26:24.204309 sudo[1943]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 29 05:26:24.205628 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 29 05:26:24.207684 kubelet[1933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 05:26:24.207684 kubelet[1933]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 05:26:24.207684 kubelet[1933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 05:26:24.208279 kubelet[1933]: I1029 05:26:24.207873 1933 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 05:26:24.231864 kubelet[1933]: I1029 05:26:24.230527 1933 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 05:26:24.231864 kubelet[1933]: I1029 05:26:24.230616 1933 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 05:26:24.231864 kubelet[1933]: I1029 05:26:24.231468 1933 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 05:26:24.235797 kubelet[1933]: I1029 05:26:24.235103 1933 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 29 05:26:24.240359 kubelet[1933]: I1029 05:26:24.240319 1933 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 05:26:24.259333 kubelet[1933]: E1029 05:26:24.259291 1933 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 05:26:24.259333 kubelet[1933]: I1029 05:26:24.259334 1933 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 05:26:24.267187 kubelet[1933]: I1029 05:26:24.267160 1933 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 05:26:24.267852 kubelet[1933]: I1029 05:26:24.267768 1933 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 05:26:24.271875 kubelet[1933]: I1029 05:26:24.267854 1933 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-clpdb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 05:26:24.271875 kubelet[1933]: I1029 05:26:24.271282 1933 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 05:26:24.271875 kubelet[1933]: I1029 05:26:24.271303 1933 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 05:26:24.271875 kubelet[1933]: I1029 05:26:24.271472 1933 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:26:24.272333 kubelet[1933]: I1029 05:26:24.271966 1933 kubelet.go:446] "Attempting to sync node with API server" Oct 29 05:26:24.272333 kubelet[1933]: I1029 05:26:24.271999 1933 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 05:26:24.272333 kubelet[1933]: I1029 05:26:24.272033 1933 kubelet.go:352] "Adding apiserver pod source" Oct 29 05:26:24.272333 kubelet[1933]: I1029 05:26:24.272067 1933 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 05:26:24.287348 kubelet[1933]: I1029 05:26:24.284834 1933 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 05:26:24.287348 kubelet[1933]: I1029 05:26:24.285857 1933 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 05:26:24.287348 kubelet[1933]: I1029 05:26:24.287126 1933 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 05:26:24.287348 kubelet[1933]: I1029 05:26:24.287323 1933 server.go:1287] "Started kubelet" Oct 29 05:26:24.295558 kubelet[1933]: I1029 05:26:24.295528 1933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 05:26:24.297908 kubelet[1933]: I1029 05:26:24.297866 1933 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Oct 29 05:26:24.300957 kubelet[1933]: I1029 05:26:24.300928 1933 server.go:479] "Adding debug handlers to kubelet server" Oct 29 05:26:24.304024 kubelet[1933]: I1029 05:26:24.303510 1933 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 05:26:24.318838 kubelet[1933]: I1029 05:26:24.318798 1933 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 05:26:24.319036 kubelet[1933]: E1029 05:26:24.319006 1933 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-clpdb.gb1.brightbox.com\" not found" Oct 29 05:26:24.322870 kubelet[1933]: I1029 05:26:24.322747 1933 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 05:26:24.326706 kubelet[1933]: I1029 05:26:24.326432 1933 reconciler.go:26] "Reconciler: start to sync state" Oct 29 05:26:24.334184 kubelet[1933]: I1029 05:26:24.334151 1933 factory.go:221] Registration of the systemd container factory successfully Oct 29 05:26:24.334322 kubelet[1933]: I1029 05:26:24.334282 1933 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 05:26:24.345620 kubelet[1933]: I1029 05:26:24.345531 1933 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 05:26:24.345973 kubelet[1933]: I1029 05:26:24.345946 1933 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 05:26:24.346153 kubelet[1933]: I1029 05:26:24.346123 1933 factory.go:221] Registration of the containerd container factory successfully Oct 29 05:26:24.364687 kubelet[1933]: I1029 05:26:24.364634 1933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 05:26:24.366792 kubelet[1933]: I1029 05:26:24.366748 1933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 05:26:24.366882 kubelet[1933]: I1029 05:26:24.366849 1933 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 05:26:24.366965 kubelet[1933]: I1029 05:26:24.366886 1933 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 05:26:24.366965 kubelet[1933]: I1029 05:26:24.366902 1933 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 05:26:24.367103 kubelet[1933]: E1029 05:26:24.366992 1933 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 05:26:24.472904 kubelet[1933]: E1029 05:26:24.467095 1933 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 05:26:24.509699 kubelet[1933]: I1029 05:26:24.507863 1933 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 05:26:24.510346 kubelet[1933]: I1029 05:26:24.510303 1933 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 05:26:24.510453 kubelet[1933]: I1029 05:26:24.510361 1933 state_mem.go:36] "Initialized new in-memory state store" Oct 29 05:26:24.510679 kubelet[1933]: I1029 05:26:24.510649 1933 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 05:26:24.510758 kubelet[1933]: I1029 05:26:24.510678 1933 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 05:26:24.510758 kubelet[1933]: I1029 05:26:24.510720 1933 policy_none.go:49] "None policy: Start" Oct 29 05:26:24.510758 kubelet[1933]: I1029 05:26:24.510749 1933 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 05:26:24.511010 kubelet[1933]: I1029 05:26:24.510796 1933 state_mem.go:35] "Initializing new in-memory state store" Oct 29 05:26:24.511088 kubelet[1933]: I1029 05:26:24.511025 1933 state_mem.go:75] "Updated machine memory state" Oct 29 05:26:24.524347 kubelet[1933]: I1029 05:26:24.524304 1933 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 05:26:24.524586 kubelet[1933]: I1029 05:26:24.524559 1933 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 05:26:24.524664 kubelet[1933]: I1029 05:26:24.524585 1933 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 05:26:24.525311 kubelet[1933]: I1029 05:26:24.525277 1933 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 05:26:24.530046 kubelet[1933]: E1029 05:26:24.530009 1933 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 05:26:24.667165 kubelet[1933]: I1029 05:26:24.667056 1933 kubelet_node_status.go:75] "Attempting to register node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.677090 kubelet[1933]: I1029 05:26:24.677050 1933 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.679073 kubelet[1933]: I1029 05:26:24.679048 1933 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.679252 kubelet[1933]: I1029 05:26:24.679096 1933 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.689165 kubelet[1933]: I1029 05:26:24.689131 1933 kubelet_node_status.go:124] "Node was previously registered" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.689302 kubelet[1933]: I1029 05:26:24.689251 1933 kubelet_node_status.go:78] "Successfully registered node" node="srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.690174 kubelet[1933]: W1029 05:26:24.690144 1933 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 05:26:24.698797 kubelet[1933]: W1029 05:26:24.698721 1933 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 05:26:24.700353 kubelet[1933]: W1029 05:26:24.700324 1933 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 05:26:24.730784 kubelet[1933]: I1029 05:26:24.730729 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-ca-certs\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.731073 kubelet[1933]: I1029 05:26:24.731032 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52513821253a1717e117d6c48569a598-usr-share-ca-certificates\") pod \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" (UID: \"52513821253a1717e117d6c48569a598\") " pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.731278 kubelet[1933]: I1029 05:26:24.731244 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-flexvolume-dir\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.731445 kubelet[1933]: I1029 05:26:24.731418 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-k8s-certs\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.731600 kubelet[1933]: 
I1029 05:26:24.731574 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-kubeconfig\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.731759 kubelet[1933]: I1029 05:26:24.731729 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e265b1a32e4022c37b185246ed4846c6-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-clpdb.gb1.brightbox.com\" (UID: \"e265b1a32e4022c37b185246ed4846c6\") " pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.731976 kubelet[1933]: I1029 05:26:24.731949 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9154b199af3c3a15521caab849c59d98-kubeconfig\") pod \"kube-scheduler-srv-clpdb.gb1.brightbox.com\" (UID: \"9154b199af3c3a15521caab849c59d98\") " pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.732114 kubelet[1933]: I1029 05:26:24.732087 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52513821253a1717e117d6c48569a598-ca-certs\") pod \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" (UID: \"52513821253a1717e117d6c48569a598\") " pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:24.732253 kubelet[1933]: I1029 05:26:24.732226 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52513821253a1717e117d6c48569a598-k8s-certs\") pod \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" (UID: \"52513821253a1717e117d6c48569a598\") " pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:25.131715 sudo[1943]: pam_unix(sudo:session): session closed for user root Oct 29 05:26:25.284966 kubelet[1933]: I1029 05:26:25.284911 1933 apiserver.go:52] "Watching apiserver" Oct 29 05:26:25.323656 kubelet[1933]: I1029 05:26:25.323579 1933 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 05:26:25.428628 kubelet[1933]: I1029 05:26:25.428478 1933 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:25.428918 kubelet[1933]: I1029 05:26:25.428566 1933 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:25.436199 kubelet[1933]: W1029 05:26:25.436159 1933 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 05:26:25.436278 kubelet[1933]: E1029 05:26:25.436264 1933 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-clpdb.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:25.444136 kubelet[1933]: W1029 05:26:25.444107 1933 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 05:26:25.444263 
kubelet[1933]: E1029 05:26:25.444179 1933 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-clpdb.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" Oct 29 05:26:25.478166 kubelet[1933]: I1029 05:26:25.478051 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-clpdb.gb1.brightbox.com" podStartSLOduration=1.47800994 podStartE2EDuration="1.47800994s" podCreationTimestamp="2025-10-29 05:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:26:25.466409096 +0000 UTC m=+1.374549356" watchObservedRunningTime="2025-10-29 05:26:25.47800994 +0000 UTC m=+1.386150196" Oct 29 05:26:25.488937 kubelet[1933]: I1029 05:26:25.488887 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-clpdb.gb1.brightbox.com" podStartSLOduration=1.488850088 podStartE2EDuration="1.488850088s" podCreationTimestamp="2025-10-29 05:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:26:25.479205736 +0000 UTC m=+1.387346004" watchObservedRunningTime="2025-10-29 05:26:25.488850088 +0000 UTC m=+1.396990340" Oct 29 05:26:25.504257 kubelet[1933]: I1029 05:26:25.504168 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-clpdb.gb1.brightbox.com" podStartSLOduration=1.504153787 podStartE2EDuration="1.504153787s" podCreationTimestamp="2025-10-29 05:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:26:25.491615411 +0000 UTC m=+1.399755686" watchObservedRunningTime="2025-10-29 05:26:25.504153787 +0000 UTC m=+1.412294039" Oct 29 05:26:27.531978 sudo[1323]: pam_unix(sudo:session): session closed for user root Oct 29 05:26:27.681498 sshd[1320]: pam_unix(sshd:session): session closed for user core Oct 29 05:26:27.688011 systemd-logind[1183]: Session 5 logged out. Waiting for processes to exit. Oct 29 05:26:27.689614 systemd[1]: sshd@5-10.230.52.194:22-147.75.109.163:59632.service: Deactivated successfully. Oct 29 05:26:27.691578 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 05:26:27.692099 systemd[1]: session-5.scope: Consumed 6.658s CPU time. Oct 29 05:26:27.693147 systemd-logind[1183]: Removed session 5. Oct 29 05:26:27.946876 kubelet[1933]: I1029 05:26:27.946814 1933 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 05:26:27.947703 env[1191]: time="2025-10-29T05:26:27.947614611Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 05:26:27.948240 kubelet[1933]: I1029 05:26:27.947998 1933 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 05:26:28.958821 kubelet[1933]: I1029 05:26:28.958746 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/628f7bf3-f3bb-4768-93d2-ad666f1222d4-kube-proxy\") pod \"kube-proxy-6xz82\" (UID: \"628f7bf3-f3bb-4768-93d2-ad666f1222d4\") " pod="kube-system/kube-proxy-6xz82" Oct 29 05:26:28.962427 systemd[1]: Created slice kubepods-besteffort-pod628f7bf3_f3bb_4768_93d2_ad666f1222d4.slice. Oct 29 05:26:28.964940 kubelet[1933]: I1029 05:26:28.964903 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-cgroup\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.968758 kubelet[1933]: I1029 05:26:28.968728 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-xtables-lock\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.970070 kubelet[1933]: I1029 05:26:28.969028 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5a355ba-d143-481e-b662-b538b703d12f-clustermesh-secrets\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.971568 systemd[1]: Created slice kubepods-burstable-poda5a355ba_d143_481e_b662_b538b703d12f.slice. 
Oct 29 05:26:28.972096 kubelet[1933]: I1029 05:26:28.972055 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cni-path\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.977810 kubelet[1933]: I1029 05:26:28.974037 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/628f7bf3-f3bb-4768-93d2-ad666f1222d4-lib-modules\") pod \"kube-proxy-6xz82\" (UID: \"628f7bf3-f3bb-4768-93d2-ad666f1222d4\") " pod="kube-system/kube-proxy-6xz82" Oct 29 05:26:28.977810 kubelet[1933]: I1029 05:26:28.974224 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-etc-cni-netd\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.977810 kubelet[1933]: I1029 05:26:28.974256 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-lib-modules\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.977810 kubelet[1933]: I1029 05:26:28.974287 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjnhc\" (UniqueName: \"kubernetes.io/projected/628f7bf3-f3bb-4768-93d2-ad666f1222d4-kube-api-access-hjnhc\") pod \"kube-proxy-6xz82\" (UID: \"628f7bf3-f3bb-4768-93d2-ad666f1222d4\") " pod="kube-system/kube-proxy-6xz82" Oct 29 05:26:28.977810 kubelet[1933]: I1029 05:26:28.974346 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-bpf-maps\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.977810 kubelet[1933]: I1029 05:26:28.974376 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-hubble-tls\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.978220 kubelet[1933]: I1029 05:26:28.974417 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdsjw\" (UniqueName: \"kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-kube-api-access-vdsjw\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.978220 kubelet[1933]: I1029 05:26:28.974474 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5a355ba-d143-481e-b662-b538b703d12f-cilium-config-path\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.978220 kubelet[1933]: I1029 05:26:28.974516 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-run\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.978220 kubelet[1933]: I1029 05:26:28.974541 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-hostproc\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.978220 kubelet[1933]: I1029 05:26:28.974568 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-net\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:28.978220 kubelet[1933]: I1029 05:26:28.974594 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/628f7bf3-f3bb-4768-93d2-ad666f1222d4-xtables-lock\") pod \"kube-proxy-6xz82\" (UID: \"628f7bf3-f3bb-4768-93d2-ad666f1222d4\") " pod="kube-system/kube-proxy-6xz82" Oct 29 05:26:28.978673 kubelet[1933]: I1029 05:26:28.974640 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-kernel\") pod \"cilium-kqn88\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " pod="kube-system/cilium-kqn88" Oct 29 05:26:29.035802 kubelet[1933]: I1029 05:26:29.035693 1933 status_manager.go:890] "Failed to get status for pod" podUID="e8751b1d-d64d-4f7d-b496-77e4bbe52f16" pod="kube-system/cilium-operator-6c4d7847fc-l6v47" err="pods \"cilium-operator-6c4d7847fc-l6v47\" is forbidden: User \"system:node:srv-clpdb.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-clpdb.gb1.brightbox.com' and this object" Oct 29 05:26:29.039104 systemd[1]: Created slice kubepods-besteffort-pode8751b1d_d64d_4f7d_b496_77e4bbe52f16.slice. Oct 29 05:26:29.077028 kubelet[1933]: I1029 05:26:29.076948 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l6v47\" (UID: \"e8751b1d-d64d-4f7d-b496-77e4bbe52f16\") " pod="kube-system/cilium-operator-6c4d7847fc-l6v47" Oct 29 05:26:29.077630 kubelet[1933]: I1029 05:26:29.077597 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtj2m\" (UniqueName: \"kubernetes.io/projected/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-kube-api-access-rtj2m\") pod \"cilium-operator-6c4d7847fc-l6v47\" (UID: \"e8751b1d-d64d-4f7d-b496-77e4bbe52f16\") " pod="kube-system/cilium-operator-6c4d7847fc-l6v47" Oct 29 05:26:29.085185 kubelet[1933]: I1029 05:26:29.085132 1933 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 29 05:26:29.271544 env[1191]: time="2025-10-29T05:26:29.269903865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xz82,Uid:628f7bf3-f3bb-4768-93d2-ad666f1222d4,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:29.277121 env[1191]: time="2025-10-29T05:26:29.277068199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqn88,Uid:a5a355ba-d143-481e-b662-b538b703d12f,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.304013584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.304272921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.304313397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.304752108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.304853373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.304869416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:29.305471 env[1191]: time="2025-10-29T05:26:29.305128607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43 pid=2026 runtime=io.containerd.runc.v2 Oct 29 05:26:29.306128 env[1191]: time="2025-10-29T05:26:29.305990971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c077dba8431a55420d083e2ba87ee6ca45ce8af3e3085bdeafa088fb3826e0dc pid=2025 runtime=io.containerd.runc.v2 Oct 29 05:26:29.346268 env[1191]: time="2025-10-29T05:26:29.346189496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l6v47,Uid:e8751b1d-d64d-4f7d-b496-77e4bbe52f16,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:29.346694 systemd[1]: Started cri-containerd-c077dba8431a55420d083e2ba87ee6ca45ce8af3e3085bdeafa088fb3826e0dc.scope. Oct 29 05:26:29.367191 systemd[1]: Started cri-containerd-977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43.scope. Oct 29 05:26:29.429926 env[1191]: time="2025-10-29T05:26:29.428698876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:29.429926 env[1191]: time="2025-10-29T05:26:29.428795791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:29.429926 env[1191]: time="2025-10-29T05:26:29.428825733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:29.429926 env[1191]: time="2025-10-29T05:26:29.429104141Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707 pid=2081 runtime=io.containerd.runc.v2 Oct 29 05:26:29.434864 env[1191]: time="2025-10-29T05:26:29.434797340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xz82,Uid:628f7bf3-f3bb-4768-93d2-ad666f1222d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c077dba8431a55420d083e2ba87ee6ca45ce8af3e3085bdeafa088fb3826e0dc\"" Oct 29 05:26:29.446087 env[1191]: time="2025-10-29T05:26:29.446020299Z" level=info msg="CreateContainer within sandbox \"c077dba8431a55420d083e2ba87ee6ca45ce8af3e3085bdeafa088fb3826e0dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 05:26:29.462394 env[1191]: time="2025-10-29T05:26:29.462339902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqn88,Uid:a5a355ba-d143-481e-b662-b538b703d12f,Namespace:kube-system,Attempt:0,} returns sandbox id \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\"" Oct 29 05:26:29.471830 env[1191]: time="2025-10-29T05:26:29.471751113Z" level=info msg="CreateContainer within sandbox \"c077dba8431a55420d083e2ba87ee6ca45ce8af3e3085bdeafa088fb3826e0dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51677ac76d19b43a0e7142d411a208b09fbf6872605c9ddc00c1135961126bf7\"" Oct 29 05:26:29.472980 env[1191]: time="2025-10-29T05:26:29.472939332Z" level=info msg="StartContainer for \"51677ac76d19b43a0e7142d411a208b09fbf6872605c9ddc00c1135961126bf7\"" Oct 29 05:26:29.473573 env[1191]: time="2025-10-29T05:26:29.473498960Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 29 05:26:29.508162 systemd[1]: Started cri-containerd-d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707.scope. Oct 29 05:26:29.526361 systemd[1]: Started cri-containerd-51677ac76d19b43a0e7142d411a208b09fbf6872605c9ddc00c1135961126bf7.scope. Oct 29 05:26:29.599830 env[1191]: time="2025-10-29T05:26:29.599735695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l6v47,Uid:e8751b1d-d64d-4f7d-b496-77e4bbe52f16,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707\"" Oct 29 05:26:29.607444 env[1191]: time="2025-10-29T05:26:29.606053967Z" level=info msg="StartContainer for \"51677ac76d19b43a0e7142d411a208b09fbf6872605c9ddc00c1135961126bf7\" returns successfully" Oct 29 05:26:34.788620 kubelet[1933]: I1029 05:26:34.788103 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xz82" podStartSLOduration=6.788002537 podStartE2EDuration="6.788002537s" podCreationTimestamp="2025-10-29 05:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:26:30.466528353 +0000 UTC m=+6.374668626" watchObservedRunningTime="2025-10-29 05:26:34.788002537 +0000 UTC m=+10.696142783" Oct 29 05:26:37.425829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565785982.mount: Deactivated successfully. 
Oct 29 05:26:42.031966 env[1191]: time="2025-10-29T05:26:42.031721165Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:42.036412 env[1191]: time="2025-10-29T05:26:42.036348090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:42.039653 env[1191]: time="2025-10-29T05:26:42.039577708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:42.040813 env[1191]: time="2025-10-29T05:26:42.040728231Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 29 05:26:42.046474 env[1191]: time="2025-10-29T05:26:42.045833639Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 29 05:26:42.051442 env[1191]: time="2025-10-29T05:26:42.049506344Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 05:26:42.076465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110784491.mount: Deactivated successfully. Oct 29 05:26:42.086932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150556721.mount: Deactivated successfully. Oct 29 05:26:42.097336 env[1191]: time="2025-10-29T05:26:42.097268036Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\"" Oct 29 05:26:42.099607 env[1191]: time="2025-10-29T05:26:42.099535142Z" level=info msg="StartContainer for \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\"" Oct 29 05:26:42.148302 systemd[1]: Started cri-containerd-1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e.scope. Oct 29 05:26:42.215224 env[1191]: time="2025-10-29T05:26:42.212555779Z" level=info msg="StartContainer for \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\" returns successfully" Oct 29 05:26:42.233871 systemd[1]: cri-containerd-1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e.scope: Deactivated successfully. 
Oct 29 05:26:42.428042 env[1191]: time="2025-10-29T05:26:42.427968047Z" level=info msg="shim disconnected" id=1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e Oct 29 05:26:42.428501 env[1191]: time="2025-10-29T05:26:42.428469174Z" level=warning msg="cleaning up after shim disconnected" id=1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e namespace=k8s.io Oct 29 05:26:42.428640 env[1191]: time="2025-10-29T05:26:42.428610457Z" level=info msg="cleaning up dead shim" Oct 29 05:26:42.443558 env[1191]: time="2025-10-29T05:26:42.443492834Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:26:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n" Oct 29 05:26:42.502629 env[1191]: time="2025-10-29T05:26:42.502558091Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 29 05:26:42.524023 env[1191]: time="2025-10-29T05:26:42.523967818Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\"" Oct 29 05:26:42.526483 env[1191]: time="2025-10-29T05:26:42.524913855Z" level=info msg="StartContainer for \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\"" Oct 29 05:26:42.550989 systemd[1]: Started cri-containerd-a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127.scope. Oct 29 05:26:42.618600 env[1191]: time="2025-10-29T05:26:42.618544753Z" level=info msg="StartContainer for \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\" returns successfully" Oct 29 05:26:42.640274 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 05:26:42.641521 systemd[1]: Stopped systemd-sysctl.service. Oct 29 05:26:42.642130 systemd[1]: Stopping systemd-sysctl.service... Oct 29 05:26:42.648297 systemd[1]: Starting systemd-sysctl.service... Oct 29 05:26:42.648775 systemd[1]: cri-containerd-a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127.scope: Deactivated successfully. Oct 29 05:26:42.676544 env[1191]: time="2025-10-29T05:26:42.676483160Z" level=info msg="shim disconnected" id=a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127 Oct 29 05:26:42.676544 env[1191]: time="2025-10-29T05:26:42.676542151Z" level=warning msg="cleaning up after shim disconnected" id=a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127 namespace=k8s.io Oct 29 05:26:42.676883 env[1191]: time="2025-10-29T05:26:42.676557848Z" level=info msg="cleaning up dead shim" Oct 29 05:26:42.678411 systemd[1]: Finished systemd-sysctl.service. Oct 29 05:26:42.688846 env[1191]: time="2025-10-29T05:26:42.688787924Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:26:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2417 runtime=io.containerd.runc.v2\n" Oct 29 05:26:43.068507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e-rootfs.mount: Deactivated successfully. 
Oct 29 05:26:43.502578 env[1191]: time="2025-10-29T05:26:43.502057961Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 29 05:26:43.541288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411720363.mount: Deactivated successfully. Oct 29 05:26:43.555847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484196503.mount: Deactivated successfully. Oct 29 05:26:43.562199 env[1191]: time="2025-10-29T05:26:43.562107307Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\"" Oct 29 05:26:43.565433 env[1191]: time="2025-10-29T05:26:43.563684830Z" level=info msg="StartContainer for \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\"" Oct 29 05:26:43.595594 systemd[1]: Started cri-containerd-0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb.scope. Oct 29 05:26:43.675540 env[1191]: time="2025-10-29T05:26:43.675377488Z" level=info msg="StartContainer for \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\" returns successfully" Oct 29 05:26:43.680534 systemd[1]: cri-containerd-0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb.scope: Deactivated successfully. Oct 29 05:26:43.723683 env[1191]: time="2025-10-29T05:26:43.723618600Z" level=info msg="shim disconnected" id=0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb Oct 29 05:26:43.723683 env[1191]: time="2025-10-29T05:26:43.723675868Z" level=warning msg="cleaning up after shim disconnected" id=0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb namespace=k8s.io Oct 29 05:26:43.724063 env[1191]: time="2025-10-29T05:26:43.723695392Z" level=info msg="cleaning up dead shim" Oct 29 05:26:43.734693 env[1191]: time="2025-10-29T05:26:43.734640520Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:26:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2474 runtime=io.containerd.runc.v2\n" Oct 29 05:26:44.513296 env[1191]: time="2025-10-29T05:26:44.513088491Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 29 05:26:44.560751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833612631.mount: Deactivated successfully. Oct 29 05:26:44.568181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810491213.mount: Deactivated successfully. Oct 29 05:26:44.572076 env[1191]: time="2025-10-29T05:26:44.572008383Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\"" Oct 29 05:26:44.574821 env[1191]: time="2025-10-29T05:26:44.573726837Z" level=info msg="StartContainer for \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\"" Oct 29 05:26:44.632176 systemd[1]: Started cri-containerd-64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf.scope. Oct 29 05:26:44.712810 systemd[1]: cri-containerd-64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf.scope: Deactivated successfully. 
Oct 29 05:26:44.715075 env[1191]: time="2025-10-29T05:26:44.715010843Z" level=info msg="StartContainer for \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\" returns successfully" Oct 29 05:26:44.883330 env[1191]: time="2025-10-29T05:26:44.883259167Z" level=info msg="shim disconnected" id=64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf Oct 29 05:26:44.883931 env[1191]: time="2025-10-29T05:26:44.883889954Z" level=warning msg="cleaning up after shim disconnected" id=64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf namespace=k8s.io Oct 29 05:26:44.884121 env[1191]: time="2025-10-29T05:26:44.884091853Z" level=info msg="cleaning up dead shim" Oct 29 05:26:44.907485 env[1191]: time="2025-10-29T05:26:44.907402756Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:44.909681 env[1191]: time="2025-10-29T05:26:44.909619912Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:26:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2529 runtime=io.containerd.runc.v2\n" Oct 29 05:26:44.912810 env[1191]: time="2025-10-29T05:26:44.912746434Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:44.915398 env[1191]: time="2025-10-29T05:26:44.915356009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 05:26:44.915903 env[1191]: time="2025-10-29T05:26:44.915863286Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 29 05:26:44.920105 env[1191]: time="2025-10-29T05:26:44.920044297Z" level=info msg="CreateContainer within sandbox \"d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 29 05:26:44.937942 env[1191]: time="2025-10-29T05:26:44.937859913Z" level=info msg="CreateContainer within sandbox \"d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\"" Oct 29 05:26:44.941158 env[1191]: time="2025-10-29T05:26:44.941115743Z" level=info msg="StartContainer for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\"" Oct 29 05:26:44.968482 systemd[1]: Started cri-containerd-9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4.scope. 
Oct 29 05:26:45.026023 env[1191]: time="2025-10-29T05:26:45.025957242Z" level=info msg="StartContainer for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" returns successfully" Oct 29 05:26:45.519724 env[1191]: time="2025-10-29T05:26:45.519149660Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 29 05:26:45.543811 env[1191]: time="2025-10-29T05:26:45.543738356Z" level=info msg="CreateContainer within sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\"" Oct 29 05:26:45.544487 env[1191]: time="2025-10-29T05:26:45.544434049Z" level=info msg="StartContainer for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\"" Oct 29 05:26:45.600917 systemd[1]: Started cri-containerd-8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51.scope. Oct 29 05:26:45.745168 env[1191]: time="2025-10-29T05:26:45.745115035Z" level=info msg="StartContainer for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" returns successfully" Oct 29 05:26:45.787726 kubelet[1933]: I1029 05:26:45.787477 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l6v47" podStartSLOduration=2.472526135 podStartE2EDuration="17.78741681s" podCreationTimestamp="2025-10-29 05:26:28 +0000 UTC" firstStartedPulling="2025-10-29 05:26:29.602498319 +0000 UTC m=+5.510638571" lastFinishedPulling="2025-10-29 05:26:44.917388988 +0000 UTC m=+20.825529246" observedRunningTime="2025-10-29 05:26:45.570233565 +0000 UTC m=+21.478373827" watchObservedRunningTime="2025-10-29 05:26:45.78741681 +0000 UTC m=+21.695557077" Oct 29 05:26:46.069199 systemd[1]: run-containerd-runc-k8s.io-8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51-runc.MoaDA7.mount: Deactivated successfully. Oct 29 05:26:46.259509 kubelet[1933]: I1029 05:26:46.259464 1933 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 05:26:46.422166 systemd[1]: Created slice kubepods-burstable-pod530ad247_ba98_443e_91d4_06e7851c06cb.slice. Oct 29 05:26:46.430493 systemd[1]: Created slice kubepods-burstable-pod359e7d11_22c5_49fd_9535_4e14e661a512.slice. 
Oct 29 05:26:46.513244 kubelet[1933]: I1029 05:26:46.513162 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgh9f\" (UniqueName: \"kubernetes.io/projected/359e7d11-22c5-49fd-9535-4e14e661a512-kube-api-access-kgh9f\") pod \"coredns-668d6bf9bc-r858b\" (UID: \"359e7d11-22c5-49fd-9535-4e14e661a512\") " pod="kube-system/coredns-668d6bf9bc-r858b" Oct 29 05:26:46.513466 kubelet[1933]: I1029 05:26:46.513331 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/530ad247-ba98-443e-91d4-06e7851c06cb-config-volume\") pod \"coredns-668d6bf9bc-mrw2l\" (UID: \"530ad247-ba98-443e-91d4-06e7851c06cb\") " pod="kube-system/coredns-668d6bf9bc-mrw2l" Oct 29 05:26:46.513466 kubelet[1933]: I1029 05:26:46.513372 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359e7d11-22c5-49fd-9535-4e14e661a512-config-volume\") pod \"coredns-668d6bf9bc-r858b\" (UID: \"359e7d11-22c5-49fd-9535-4e14e661a512\") " pod="kube-system/coredns-668d6bf9bc-r858b" Oct 29 05:26:46.513466 kubelet[1933]: I1029 05:26:46.513411 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdcl7\" (UniqueName: \"kubernetes.io/projected/530ad247-ba98-443e-91d4-06e7851c06cb-kube-api-access-pdcl7\") pod \"coredns-668d6bf9bc-mrw2l\" (UID: \"530ad247-ba98-443e-91d4-06e7851c06cb\") " pod="kube-system/coredns-668d6bf9bc-mrw2l" Oct 29 05:26:46.554872 kubelet[1933]: I1029 05:26:46.554709 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kqn88" podStartSLOduration=5.977576963 podStartE2EDuration="18.554656195s" podCreationTimestamp="2025-10-29 05:26:28 +0000 UTC" firstStartedPulling="2025-10-29 05:26:29.466320546 +0000 UTC m=+5.374460803" lastFinishedPulling="2025-10-29 05:26:42.043399765 +0000 UTC m=+17.951540035" observedRunningTime="2025-10-29 05:26:46.552918044 +0000 UTC m=+22.461058306" watchObservedRunningTime="2025-10-29 05:26:46.554656195 +0000 UTC m=+22.462796450" Oct 29 05:26:46.729222 env[1191]: time="2025-10-29T05:26:46.728517039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mrw2l,Uid:530ad247-ba98-443e-91d4-06e7851c06cb,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:46.734118 env[1191]: time="2025-10-29T05:26:46.734079764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r858b,Uid:359e7d11-22c5-49fd-9535-4e14e661a512,Namespace:kube-system,Attempt:0,}" Oct 29 05:26:48.898609 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Oct 29 05:26:48.899431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Oct 29 05:26:48.899311 systemd-networkd[1029]: cilium_host: Link UP Oct 29 05:26:48.899556 systemd-networkd[1029]: cilium_net: Link UP Oct 29 05:26:48.900606 systemd-networkd[1029]: cilium_net: Gained carrier Oct 29 05:26:48.901363 systemd-networkd[1029]: cilium_host: Gained carrier Oct 29 05:26:49.068928 systemd-networkd[1029]: cilium_vxlan: Link UP Oct 29 05:26:49.068939 systemd-networkd[1029]: cilium_vxlan: Gained carrier Oct 29 05:26:49.267319 systemd-networkd[1029]: cilium_net: Gained IPv6LL Oct 29 05:26:49.393996 systemd-networkd[1029]: cilium_host: Gained IPv6LL Oct 29 05:26:49.611814 kernel: NET: Registered PF_ALG protocol family Oct 29 05:26:50.646543 
systemd-networkd[1029]: lxc_health: Link UP Oct 29 05:26:50.677810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Oct 29 05:26:50.677987 systemd-networkd[1029]: lxc_health: Gained carrier Oct 29 05:26:50.756215 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL Oct 29 05:26:51.142552 systemd[1]: Started sshd@6-10.230.52.194:22-178.128.241.223:41316.service. Oct 29 05:26:51.262363 sshd[3080]: Invalid user debian from 178.128.241.223 port 41316 Oct 29 05:26:51.284354 sshd[3080]: pam_faillock(sshd:auth): User unknown Oct 29 05:26:51.285597 sshd[3080]: pam_unix(sshd:auth): check pass; user unknown Oct 29 05:26:51.285731 sshd[3080]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=178.128.241.223 Oct 29 05:26:51.286780 sshd[3080]: pam_faillock(sshd:auth): User unknown Oct 29 05:26:51.344361 systemd-networkd[1029]: lxcd2ce150f93d8: Link UP Oct 29 05:26:51.353134 kernel: eth0: renamed from tmp6d7b8 Oct 29 05:26:51.362357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd2ce150f93d8: link becomes ready Oct 29 05:26:51.362224 systemd-networkd[1029]: lxcd2ce150f93d8: Gained carrier Oct 29 05:26:51.386348 systemd-networkd[1029]: lxce98cba572145: Link UP Oct 29 05:26:51.412934 kernel: eth0: renamed from tmp336b0 Oct 29 05:26:51.423982 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce98cba572145: link becomes ready Oct 29 05:26:51.421741 systemd-networkd[1029]: lxce98cba572145: Gained carrier Oct 29 05:26:52.414954 systemd-networkd[1029]: lxc_health: Gained IPv6LL Oct 29 05:26:52.458164 systemd-networkd[1029]: lxcd2ce150f93d8: Gained IPv6LL Oct 29 05:26:52.586116 systemd-networkd[1029]: lxce98cba572145: Gained IPv6LL Oct 29 05:26:53.456861 sshd[3080]: Failed password for invalid user debian from 178.128.241.223 port 41316 ssh2 Oct 29 05:26:55.093844 sshd[3080]: Connection closed by invalid user debian 178.128.241.223 port 41316 [preauth] Oct 29 05:26:55.095705 systemd[1]: sshd@6-10.230.52.194:22-178.128.241.223:41316.service: Deactivated successfully. Oct 29 05:26:56.913512 env[1191]: time="2025-10-29T05:26:56.912976206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:56.913512 env[1191]: time="2025-10-29T05:26:56.913078861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:56.913512 env[1191]: time="2025-10-29T05:26:56.913100379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:56.915158 env[1191]: time="2025-10-29T05:26:56.915044284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/336b0caed2dec22c5e043bd46d555b51244822dc4a29f210feda88f3a68025c7 pid=3120 runtime=io.containerd.runc.v2 Oct 29 05:26:56.946363 env[1191]: time="2025-10-29T05:26:56.946246984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:26:56.946665 env[1191]: time="2025-10-29T05:26:56.946621633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:26:56.946870 env[1191]: time="2025-10-29T05:26:56.946825296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:26:56.949993 env[1191]: time="2025-10-29T05:26:56.949935438Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7b8cee946c98a837c25137bf7b85d55ed8a8da0aa4a01555467357dec3e6a5 pid=3136 runtime=io.containerd.runc.v2 Oct 29 05:26:56.976899 systemd[1]: run-containerd-runc-k8s.io-336b0caed2dec22c5e043bd46d555b51244822dc4a29f210feda88f3a68025c7-runc.PcSXJd.mount: Deactivated successfully. Oct 29 05:26:56.989990 systemd[1]: Started cri-containerd-336b0caed2dec22c5e043bd46d555b51244822dc4a29f210feda88f3a68025c7.scope. Oct 29 05:26:57.019945 systemd[1]: Started cri-containerd-6d7b8cee946c98a837c25137bf7b85d55ed8a8da0aa4a01555467357dec3e6a5.scope. Oct 29 05:26:57.122883 env[1191]: time="2025-10-29T05:26:57.122718614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mrw2l,Uid:530ad247-ba98-443e-91d4-06e7851c06cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d7b8cee946c98a837c25137bf7b85d55ed8a8da0aa4a01555467357dec3e6a5\"" Oct 29 05:26:57.131765 env[1191]: time="2025-10-29T05:26:57.131729309Z" level=info msg="CreateContainer within sandbox \"6d7b8cee946c98a837c25137bf7b85d55ed8a8da0aa4a01555467357dec3e6a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 05:26:57.168401 env[1191]: time="2025-10-29T05:26:57.167033251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r858b,Uid:359e7d11-22c5-49fd-9535-4e14e661a512,Namespace:kube-system,Attempt:0,} returns sandbox id \"336b0caed2dec22c5e043bd46d555b51244822dc4a29f210feda88f3a68025c7\"" Oct 29 05:26:57.169411 env[1191]: time="2025-10-29T05:26:57.169351699Z" level=info msg="CreateContainer within sandbox \"6d7b8cee946c98a837c25137bf7b85d55ed8a8da0aa4a01555467357dec3e6a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92b5f231bab45279b1f3560a1ab2ec09de7d22ec4af79973d4173f7ec026f122\"" Oct 29 05:26:57.172447 env[1191]: time="2025-10-29T05:26:57.172411934Z" level=info msg="StartContainer for \"92b5f231bab45279b1f3560a1ab2ec09de7d22ec4af79973d4173f7ec026f122\"" Oct 29 05:26:57.174429 env[1191]: time="2025-10-29T05:26:57.174394739Z" level=info msg="CreateContainer within sandbox \"336b0caed2dec22c5e043bd46d555b51244822dc4a29f210feda88f3a68025c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 05:26:57.189008 env[1191]: time="2025-10-29T05:26:57.188956926Z" level=info msg="CreateContainer within sandbox \"336b0caed2dec22c5e043bd46d555b51244822dc4a29f210feda88f3a68025c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e34f9eacec8d94b4dd758ccdd98aedfb0ac0a81fc58ecd105587ebe76448fc45\"" Oct 29 05:26:57.191467 env[1191]: time="2025-10-29T05:26:57.190767716Z" level=info msg="StartContainer for \"e34f9eacec8d94b4dd758ccdd98aedfb0ac0a81fc58ecd105587ebe76448fc45\"" Oct 29 05:26:57.203526 systemd[1]: Started cri-containerd-92b5f231bab45279b1f3560a1ab2ec09de7d22ec4af79973d4173f7ec026f122.scope. Oct 29 05:26:57.236549 systemd[1]: Started cri-containerd-e34f9eacec8d94b4dd758ccdd98aedfb0ac0a81fc58ecd105587ebe76448fc45.scope. 
Oct 29 05:26:57.262767 env[1191]: time="2025-10-29T05:26:57.262702945Z" level=info msg="StartContainer for \"92b5f231bab45279b1f3560a1ab2ec09de7d22ec4af79973d4173f7ec026f122\" returns successfully" Oct 29 05:26:57.309598 env[1191]: time="2025-10-29T05:26:57.309530835Z" level=info msg="StartContainer for \"e34f9eacec8d94b4dd758ccdd98aedfb0ac0a81fc58ecd105587ebe76448fc45\" returns successfully" Oct 29 05:26:57.572524 kubelet[1933]: I1029 05:26:57.572407 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mrw2l" podStartSLOduration=29.57227193 podStartE2EDuration="29.57227193s" podCreationTimestamp="2025-10-29 05:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:26:57.567004505 +0000 UTC m=+33.475144768" watchObservedRunningTime="2025-10-29 05:26:57.57227193 +0000 UTC m=+33.480412182" Oct 29 05:27:31.662431 systemd[1]: Started sshd@7-10.230.52.194:22-147.75.109.163:54616.service. Oct 29 05:27:32.585468 sshd[3282]: Accepted publickey for core from 147.75.109.163 port 54616 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:27:32.588353 sshd[3282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:27:32.598724 systemd-logind[1183]: New session 6 of user core. Oct 29 05:27:32.600211 systemd[1]: Started session-6.scope. Oct 29 05:27:33.421353 sshd[3282]: pam_unix(sshd:session): session closed for user core Oct 29 05:27:33.426339 systemd-logind[1183]: Session 6 logged out. Waiting for processes to exit. Oct 29 05:27:33.427439 systemd[1]: sshd@7-10.230.52.194:22-147.75.109.163:54616.service: Deactivated successfully. Oct 29 05:27:33.428896 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 05:27:33.430119 systemd-logind[1183]: Removed session 6. Oct 29 05:27:38.575648 systemd[1]: Started sshd@8-10.230.52.194:22-147.75.109.163:54622.service. Oct 29 05:27:39.481040 sshd[3295]: Accepted publickey for core from 147.75.109.163 port 54622 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:27:39.483585 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:27:39.490721 systemd[1]: Started session-7.scope. Oct 29 05:27:39.490757 systemd-logind[1183]: New session 7 of user core. Oct 29 05:27:40.215567 sshd[3295]: pam_unix(sshd:session): session closed for user core Oct 29 05:27:40.218991 systemd[1]: sshd@8-10.230.52.194:22-147.75.109.163:54622.service: Deactivated successfully. Oct 29 05:27:40.219967 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 05:27:40.220694 systemd-logind[1183]: Session 7 logged out. Waiting for processes to exit. Oct 29 05:27:40.221939 systemd-logind[1183]: Removed session 7. Oct 29 05:27:45.364893 systemd[1]: Started sshd@9-10.230.52.194:22-147.75.109.163:52426.service. Oct 29 05:27:46.264285 sshd[3309]: Accepted publickey for core from 147.75.109.163 port 52426 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:27:46.266313 sshd[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:27:46.274890 systemd[1]: Started session-8.scope. Oct 29 05:27:46.275486 systemd-logind[1183]: New session 8 of user core. Oct 29 05:27:47.002362 sshd[3309]: pam_unix(sshd:session): session closed for user core Oct 29 05:27:47.007921 systemd[1]: sshd@9-10.230.52.194:22-147.75.109.163:52426.service: Deactivated successfully. 
Oct 29 05:27:47.009236 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 05:27:47.010259 systemd-logind[1183]: Session 8 logged out. Waiting for processes to exit. Oct 29 05:27:47.011535 systemd-logind[1183]: Removed session 8. Oct 29 05:27:52.154249 systemd[1]: Started sshd@10-10.230.52.194:22-147.75.109.163:42482.service. Oct 29 05:27:53.056503 sshd[3322]: Accepted publickey for core from 147.75.109.163 port 42482 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:27:53.060457 sshd[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:27:53.071538 systemd[1]: Started session-9.scope. Oct 29 05:27:53.072699 systemd-logind[1183]: New session 9 of user core. Oct 29 05:27:53.807993 sshd[3322]: pam_unix(sshd:session): session closed for user core Oct 29 05:27:53.811945 systemd-logind[1183]: Session 9 logged out. Waiting for processes to exit. Oct 29 05:27:53.812859 systemd[1]: sshd@10-10.230.52.194:22-147.75.109.163:42482.service: Deactivated successfully. Oct 29 05:27:53.813922 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 05:27:53.815198 systemd-logind[1183]: Removed session 9. Oct 29 05:27:53.957852 systemd[1]: Started sshd@11-10.230.52.194:22-147.75.109.163:42486.service. Oct 29 05:27:54.875295 sshd[3335]: Accepted publickey for core from 147.75.109.163 port 42486 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:27:54.876075 sshd[3335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:27:54.882492 systemd-logind[1183]: New session 10 of user core. Oct 29 05:27:54.885076 systemd[1]: Started session-10.scope. Oct 29 05:27:55.701081 sshd[3335]: pam_unix(sshd:session): session closed for user core Oct 29 05:27:55.707329 systemd[1]: sshd@11-10.230.52.194:22-147.75.109.163:42486.service: Deactivated successfully. Oct 29 05:27:55.708439 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 05:27:55.709242 systemd-logind[1183]: Session 10 logged out. Waiting for processes to exit. Oct 29 05:27:55.710424 systemd-logind[1183]: Removed session 10. Oct 29 05:27:55.850991 systemd[1]: Started sshd@12-10.230.52.194:22-147.75.109.163:42502.service. Oct 29 05:27:56.760459 sshd[3345]: Accepted publickey for core from 147.75.109.163 port 42502 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:27:56.762455 sshd[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:27:56.770563 systemd-logind[1183]: New session 11 of user core. Oct 29 05:27:56.770885 systemd[1]: Started session-11.scope. Oct 29 05:27:57.483568 sshd[3345]: pam_unix(sshd:session): session closed for user core Oct 29 05:27:57.487715 systemd-logind[1183]: Session 11 logged out. Waiting for processes to exit. Oct 29 05:27:57.488067 systemd[1]: sshd@12-10.230.52.194:22-147.75.109.163:42502.service: Deactivated successfully. Oct 29 05:27:57.489031 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 05:27:57.490234 systemd-logind[1183]: Removed session 11. Oct 29 05:28:00.904808 systemd[1]: Started sshd@13-10.230.52.194:22-178.128.241.223:50650.service. 
Oct 29 05:28:00.991398 sshd[3359]: Invalid user debian from 178.128.241.223 port 50650 Oct 29 05:28:01.013244 sshd[3359]: pam_faillock(sshd:auth): User unknown Oct 29 05:28:01.014760 sshd[3359]: pam_unix(sshd:auth): check pass; user unknown Oct 29 05:28:01.015157 sshd[3359]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=178.128.241.223 Oct 29 05:28:01.017031 sshd[3359]: pam_faillock(sshd:auth): User unknown Oct 29 05:28:02.632866 systemd[1]: Started sshd@14-10.230.52.194:22-147.75.109.163:50854.service. Oct 29 05:28:02.794052 sshd[3359]: Failed password for invalid user debian from 178.128.241.223 port 50650 ssh2 Oct 29 05:28:02.957868 sshd[3359]: Connection closed by invalid user debian 178.128.241.223 port 50650 [preauth] Oct 29 05:28:02.959766 systemd[1]: sshd@13-10.230.52.194:22-178.128.241.223:50650.service: Deactivated successfully. Oct 29 05:28:03.531730 sshd[3362]: Accepted publickey for core from 147.75.109.163 port 50854 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:03.533711 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:03.541613 systemd-logind[1183]: New session 12 of user core. Oct 29 05:28:03.541943 systemd[1]: Started session-12.scope. Oct 29 05:28:04.265243 sshd[3362]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:04.268810 systemd[1]: sshd@14-10.230.52.194:22-147.75.109.163:50854.service: Deactivated successfully. Oct 29 05:28:04.269992 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 05:28:04.270948 systemd-logind[1183]: Session 12 logged out. Waiting for processes to exit. Oct 29 05:28:04.272350 systemd-logind[1183]: Removed session 12. Oct 29 05:28:09.419100 systemd[1]: Started sshd@15-10.230.52.194:22-147.75.109.163:50860.service. Oct 29 05:28:10.323586 sshd[3375]: Accepted publickey for core from 147.75.109.163 port 50860 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:10.326346 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:10.334356 systemd[1]: Started session-13.scope. Oct 29 05:28:10.335939 systemd-logind[1183]: New session 13 of user core. Oct 29 05:28:11.043088 sshd[3375]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:11.047341 systemd[1]: sshd@15-10.230.52.194:22-147.75.109.163:50860.service: Deactivated successfully. Oct 29 05:28:11.048364 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 05:28:11.049239 systemd-logind[1183]: Session 13 logged out. Waiting for processes to exit. Oct 29 05:28:11.050398 systemd-logind[1183]: Removed session 13. Oct 29 05:28:11.191312 systemd[1]: Started sshd@16-10.230.52.194:22-147.75.109.163:48016.service. Oct 29 05:28:12.086879 sshd[3387]: Accepted publickey for core from 147.75.109.163 port 48016 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:12.089494 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:12.099482 systemd-logind[1183]: New session 14 of user core. Oct 29 05:28:12.100466 systemd[1]: Started session-14.scope. Oct 29 05:28:13.250247 sshd[3387]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:13.255463 systemd[1]: sshd@16-10.230.52.194:22-147.75.109.163:48016.service: Deactivated successfully. Oct 29 05:28:13.256684 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 05:28:13.257425 systemd-logind[1183]: Session 14 logged out. 
Waiting for processes to exit. Oct 29 05:28:13.258592 systemd-logind[1183]: Removed session 14. Oct 29 05:28:13.400898 systemd[1]: Started sshd@17-10.230.52.194:22-147.75.109.163:48022.service. Oct 29 05:28:14.317090 sshd[3397]: Accepted publickey for core from 147.75.109.163 port 48022 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:14.319950 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:14.327873 systemd-logind[1183]: New session 15 of user core. Oct 29 05:28:14.328336 systemd[1]: Started session-15.scope. Oct 29 05:28:15.771620 sshd[3397]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:15.779348 systemd[1]: sshd@17-10.230.52.194:22-147.75.109.163:48022.service: Deactivated successfully. Oct 29 05:28:15.780531 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 05:28:15.781344 systemd-logind[1183]: Session 15 logged out. Waiting for processes to exit. Oct 29 05:28:15.783463 systemd-logind[1183]: Removed session 15. Oct 29 05:28:15.921643 systemd[1]: Started sshd@18-10.230.52.194:22-147.75.109.163:48026.service. Oct 29 05:28:16.822627 sshd[3414]: Accepted publickey for core from 147.75.109.163 port 48026 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:16.825041 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:16.834811 systemd-logind[1183]: New session 16 of user core. Oct 29 05:28:16.834910 systemd[1]: Started session-16.scope. Oct 29 05:28:17.743957 sshd[3414]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:17.747843 systemd[1]: sshd@18-10.230.52.194:22-147.75.109.163:48026.service: Deactivated successfully. Oct 29 05:28:17.748930 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 05:28:17.749827 systemd-logind[1183]: Session 16 logged out. Waiting for processes to exit. Oct 29 05:28:17.751311 systemd-logind[1183]: Removed session 16. Oct 29 05:28:17.894680 systemd[1]: Started sshd@19-10.230.52.194:22-147.75.109.163:48028.service. Oct 29 05:28:18.797602 sshd[3424]: Accepted publickey for core from 147.75.109.163 port 48028 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:18.800370 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:18.807593 systemd-logind[1183]: New session 17 of user core. Oct 29 05:28:18.807939 systemd[1]: Started session-17.scope. Oct 29 05:28:19.511212 sshd[3424]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:19.515125 systemd-logind[1183]: Session 17 logged out. Waiting for processes to exit. Oct 29 05:28:19.516480 systemd[1]: sshd@19-10.230.52.194:22-147.75.109.163:48028.service: Deactivated successfully. Oct 29 05:28:19.517519 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 05:28:19.518685 systemd-logind[1183]: Removed session 17. Oct 29 05:28:24.661770 systemd[1]: Started sshd@20-10.230.52.194:22-147.75.109.163:53698.service. Oct 29 05:28:25.568264 sshd[3438]: Accepted publickey for core from 147.75.109.163 port 53698 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:25.570572 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:25.577709 systemd-logind[1183]: New session 18 of user core. Oct 29 05:28:25.578661 systemd[1]: Started session-18.scope. 
Oct 29 05:28:26.287972 sshd[3438]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:26.291348 systemd[1]: sshd@20-10.230.52.194:22-147.75.109.163:53698.service: Deactivated successfully. Oct 29 05:28:26.292411 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 05:28:26.293398 systemd-logind[1183]: Session 18 logged out. Waiting for processes to exit. Oct 29 05:28:26.294640 systemd-logind[1183]: Removed session 18. Oct 29 05:28:31.440405 systemd[1]: Started sshd@21-10.230.52.194:22-147.75.109.163:37450.service. Oct 29 05:28:32.342931 sshd[3458]: Accepted publickey for core from 147.75.109.163 port 37450 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:32.345633 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:32.354358 systemd[1]: Started session-19.scope. Oct 29 05:28:32.354836 systemd-logind[1183]: New session 19 of user core. Oct 29 05:28:33.075862 sshd[3458]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:33.080103 systemd[1]: sshd@21-10.230.52.194:22-147.75.109.163:37450.service: Deactivated successfully. Oct 29 05:28:33.081289 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 05:28:33.082903 systemd-logind[1183]: Session 19 logged out. Waiting for processes to exit. Oct 29 05:28:33.084949 systemd-logind[1183]: Removed session 19. Oct 29 05:28:38.226902 systemd[1]: Started sshd@22-10.230.52.194:22-147.75.109.163:37452.service. Oct 29 05:28:39.138735 sshd[3469]: Accepted publickey for core from 147.75.109.163 port 37452 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:39.141192 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:39.149526 systemd-logind[1183]: New session 20 of user core. Oct 29 05:28:39.151370 systemd[1]: Started session-20.scope. Oct 29 05:28:39.876215 sshd[3469]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:39.880907 systemd[1]: sshd@22-10.230.52.194:22-147.75.109.163:37452.service: Deactivated successfully. Oct 29 05:28:39.882019 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 05:28:39.882567 systemd-logind[1183]: Session 20 logged out. Waiting for processes to exit. Oct 29 05:28:39.884556 systemd-logind[1183]: Removed session 20. Oct 29 05:28:43.563926 update_engine[1185]: I1029 05:28:43.563670 1185 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 29 05:28:43.563926 update_engine[1185]: I1029 05:28:43.563817 1185 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 29 05:28:43.566643 update_engine[1185]: I1029 05:28:43.566509 1185 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 29 05:28:43.567815 update_engine[1185]: I1029 05:28:43.567714 1185 omaha_request_params.cc:62] Current group set to lts Oct 29 05:28:43.569850 update_engine[1185]: I1029 05:28:43.569597 1185 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 29 05:28:43.569850 update_engine[1185]: I1029 05:28:43.569619 1185 update_attempter.cc:643] Scheduling an action processor start. 
Oct 29 05:28:43.569850 update_engine[1185]: I1029 05:28:43.569648 1185 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 29 05:28:43.572258 update_engine[1185]: I1029 05:28:43.571966 1185 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 29 05:28:43.572258 update_engine[1185]: I1029 05:28:43.572160 1185 omaha_request_action.cc:270] Posting an Omaha request to disabled Oct 29 05:28:43.572258 update_engine[1185]: I1029 05:28:43.572180 1185 omaha_request_action.cc:271] Request: Oct 29 05:28:43.572258 update_engine[1185]: I1029 05:28:43.572193 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 29 05:28:43.583113 update_engine[1185]: I1029 05:28:43.583086 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 29 05:28:43.583926 update_engine[1185]: I1029 05:28:43.583901 1185 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 29 05:28:43.589626 update_engine[1185]: E1029 05:28:43.589602 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 29 05:28:43.589895 update_engine[1185]: I1029 05:28:43.589871 1185 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 29 05:28:43.591938 locksmithd[1226]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 29 05:28:45.028099 systemd[1]: Started sshd@23-10.230.52.194:22-147.75.109.163:60386.service. Oct 29 05:28:45.939129 sshd[3482]: Accepted publickey for core from 147.75.109.163 port 60386 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:45.941483 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:45.949232 systemd-logind[1183]: New session 21 of user core. Oct 29 05:28:45.949921 systemd[1]: Started session-21.scope. Oct 29 05:28:46.697514 sshd[3482]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:46.701425 systemd[1]: sshd@23-10.230.52.194:22-147.75.109.163:60386.service: Deactivated successfully. Oct 29 05:28:46.702546 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 05:28:46.703338 systemd-logind[1183]: Session 21 logged out. Waiting for processes to exit. Oct 29 05:28:46.704757 systemd-logind[1183]: Removed session 21. Oct 29 05:28:46.849116 systemd[1]: Started sshd@24-10.230.52.194:22-147.75.109.163:60402.service. Oct 29 05:28:47.765992 sshd[3494]: Accepted publickey for core from 147.75.109.163 port 60402 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:47.768864 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:47.776910 systemd[1]: Started session-22.scope. Oct 29 05:28:47.777818 systemd-logind[1183]: New session 22 of user core. 
Oct 29 05:28:49.719321 kubelet[1933]: I1029 05:28:49.719199 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r858b" podStartSLOduration=140.719131278 podStartE2EDuration="2m20.719131278s" podCreationTimestamp="2025-10-29 05:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:26:57.618716461 +0000 UTC m=+33.526856734" watchObservedRunningTime="2025-10-29 05:28:49.719131278 +0000 UTC m=+145.627271530" Oct 29 05:28:49.745037 env[1191]: time="2025-10-29T05:28:49.744925122Z" level=info msg="StopContainer for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" with timeout 30 (s)" Oct 29 05:28:49.746678 env[1191]: time="2025-10-29T05:28:49.746630504Z" level=info msg="Stop container \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" with signal terminated" Oct 29 05:28:49.765241 systemd[1]: run-containerd-runc-k8s.io-8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51-runc.xFiFth.mount: Deactivated successfully. Oct 29 05:28:49.798060 systemd[1]: cri-containerd-9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4.scope: Deactivated successfully. Oct 29 05:28:49.820288 env[1191]: time="2025-10-29T05:28:49.819111683Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 05:28:49.827970 env[1191]: time="2025-10-29T05:28:49.827915554Z" level=info msg="StopContainer for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" with timeout 2 (s)" Oct 29 05:28:49.828606 env[1191]: time="2025-10-29T05:28:49.828573796Z" level=info msg="Stop container \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" with signal terminated" Oct 29 05:28:49.838325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4-rootfs.mount: Deactivated successfully. 
Oct 29 05:28:49.844718 env[1191]: time="2025-10-29T05:28:49.844671304Z" level=info msg="shim disconnected" id=9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4 Oct 29 05:28:49.845034 env[1191]: time="2025-10-29T05:28:49.844968592Z" level=warning msg="cleaning up after shim disconnected" id=9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4 namespace=k8s.io Oct 29 05:28:49.845222 env[1191]: time="2025-10-29T05:28:49.845192766Z" level=info msg="cleaning up dead shim" Oct 29 05:28:49.854293 systemd-networkd[1029]: lxc_health: Link DOWN Oct 29 05:28:49.854305 systemd-networkd[1029]: lxc_health: Lost carrier Oct 29 05:28:49.888886 env[1191]: time="2025-10-29T05:28:49.887585266Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3547 runtime=io.containerd.runc.v2\n" Oct 29 05:28:49.897197 env[1191]: time="2025-10-29T05:28:49.896969438Z" level=info msg="StopContainer for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" returns successfully" Oct 29 05:28:49.899585 env[1191]: time="2025-10-29T05:28:49.899545908Z" level=info msg="StopPodSandbox for \"d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707\"" Oct 29 05:28:49.904227 env[1191]: time="2025-10-29T05:28:49.899648847Z" level=info msg="Container to stop \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:49.903334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707-shm.mount: Deactivated successfully. Oct 29 05:28:49.904542 systemd[1]: cri-containerd-8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51.scope: Deactivated successfully. Oct 29 05:28:49.905144 systemd[1]: cri-containerd-8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51.scope: Consumed 9.999s CPU time. Oct 29 05:28:49.924604 systemd[1]: cri-containerd-d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707.scope: Deactivated successfully. 
Oct 29 05:28:49.953867 env[1191]: time="2025-10-29T05:28:49.953786665Z" level=info msg="shim disconnected" id=8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51 Oct 29 05:28:49.954402 env[1191]: time="2025-10-29T05:28:49.954372771Z" level=warning msg="cleaning up after shim disconnected" id=8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51 namespace=k8s.io Oct 29 05:28:49.954652 env[1191]: time="2025-10-29T05:28:49.954622524Z" level=info msg="cleaning up dead shim" Oct 29 05:28:49.980450 env[1191]: time="2025-10-29T05:28:49.978024352Z" level=info msg="shim disconnected" id=d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707 Oct 29 05:28:49.980450 env[1191]: time="2025-10-29T05:28:49.978098082Z" level=warning msg="cleaning up after shim disconnected" id=d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707 namespace=k8s.io Oct 29 05:28:49.980450 env[1191]: time="2025-10-29T05:28:49.978114037Z" level=info msg="cleaning up dead shim" Oct 29 05:28:49.984617 env[1191]: time="2025-10-29T05:28:49.984577852Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3588 runtime=io.containerd.runc.v2\n" Oct 29 05:28:49.986529 env[1191]: time="2025-10-29T05:28:49.986490438Z" level=info msg="StopContainer for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" returns successfully" Oct 29 05:28:49.987866 env[1191]: time="2025-10-29T05:28:49.987833825Z" level=info msg="StopPodSandbox for \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\"" Oct 29 05:28:49.988390 env[1191]: time="2025-10-29T05:28:49.988323281Z" level=info msg="Container to stop \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:49.988503 env[1191]: time="2025-10-29T05:28:49.988390411Z" level=info msg="Container to stop \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:49.988503 env[1191]: time="2025-10-29T05:28:49.988430824Z" level=info msg="Container to stop \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:49.988503 env[1191]: time="2025-10-29T05:28:49.988461212Z" level=info msg="Container to stop \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:49.988503 env[1191]: time="2025-10-29T05:28:49.988492891Z" level=info msg="Container to stop \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:49.999072 systemd[1]: cri-containerd-977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43.scope: Deactivated successfully. 
Oct 29 05:28:50.003602 env[1191]: time="2025-10-29T05:28:50.003558668Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3605 runtime=io.containerd.runc.v2\n" Oct 29 05:28:50.004305 env[1191]: time="2025-10-29T05:28:50.004255499Z" level=info msg="TearDown network for sandbox \"d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707\" successfully" Oct 29 05:28:50.004499 env[1191]: time="2025-10-29T05:28:50.004466101Z" level=info msg="StopPodSandbox for \"d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707\" returns successfully" Oct 29 05:28:50.040534 env[1191]: time="2025-10-29T05:28:50.040471246Z" level=info msg="shim disconnected" id=977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43 Oct 29 05:28:50.041469 env[1191]: time="2025-10-29T05:28:50.041408270Z" level=warning msg="cleaning up after shim disconnected" id=977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43 namespace=k8s.io Oct 29 05:28:50.041951 env[1191]: time="2025-10-29T05:28:50.041912385Z" level=info msg="cleaning up dead shim" Oct 29 05:28:50.055597 env[1191]: time="2025-10-29T05:28:50.055556891Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n" Oct 29 05:28:50.056269 env[1191]: time="2025-10-29T05:28:50.056231237Z" level=info msg="TearDown network for sandbox \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" successfully" Oct 29 05:28:50.056473 env[1191]: time="2025-10-29T05:28:50.056420801Z" level=info msg="StopPodSandbox for \"977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43\" returns successfully" Oct 29 05:28:50.106711 kubelet[1933]: I1029 05:28:50.106666 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-cilium-config-path\") pod \"e8751b1d-d64d-4f7d-b496-77e4bbe52f16\" (UID: \"e8751b1d-d64d-4f7d-b496-77e4bbe52f16\") " Oct 29 05:28:50.107073 kubelet[1933]: I1029 05:28:50.107045 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtj2m\" (UniqueName: \"kubernetes.io/projected/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-kube-api-access-rtj2m\") pod \"e8751b1d-d64d-4f7d-b496-77e4bbe52f16\" (UID: \"e8751b1d-d64d-4f7d-b496-77e4bbe52f16\") " Oct 29 05:28:50.117890 kubelet[1933]: I1029 05:28:50.114750 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8751b1d-d64d-4f7d-b496-77e4bbe52f16" (UID: "e8751b1d-d64d-4f7d-b496-77e4bbe52f16"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 05:28:50.118758 kubelet[1933]: I1029 05:28:50.118723 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-kube-api-access-rtj2m" (OuterVolumeSpecName: "kube-api-access-rtj2m") pod "e8751b1d-d64d-4f7d-b496-77e4bbe52f16" (UID: "e8751b1d-d64d-4f7d-b496-77e4bbe52f16"). InnerVolumeSpecName "kube-api-access-rtj2m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 05:28:50.207842 kubelet[1933]: I1029 05:28:50.207668 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-xtables-lock\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.207842 kubelet[1933]: I1029 05:28:50.207742 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-kernel\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.207842 kubelet[1933]: I1029 05:28:50.207834 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5a355ba-d143-481e-b662-b538b703d12f-cilium-config-path\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208223 kubelet[1933]: I1029 05:28:50.207865 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-etc-cni-netd\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208223 kubelet[1933]: I1029 05:28:50.207904 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-hubble-tls\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208223 kubelet[1933]: I1029 05:28:50.207935 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-cgroup\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208223 kubelet[1933]: I1029 05:28:50.207960 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-lib-modules\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208223 kubelet[1933]: I1029 05:28:50.207987 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cni-path\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208223 kubelet[1933]: I1029 05:28:50.208022 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5a355ba-d143-481e-b662-b538b703d12f-clustermesh-secrets\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208616 kubelet[1933]: I1029 05:28:50.208060 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-bpf-maps\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 
05:28:50.208616 kubelet[1933]: I1029 05:28:50.208105 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-net\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208616 kubelet[1933]: I1029 05:28:50.208132 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-hostproc\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208616 kubelet[1933]: I1029 05:28:50.208158 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdsjw\" (UniqueName: \"kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-kube-api-access-vdsjw\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208616 kubelet[1933]: I1029 05:28:50.208218 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-run\") pod \"a5a355ba-d143-481e-b662-b538b703d12f\" (UID: \"a5a355ba-d143-481e-b662-b538b703d12f\") " Oct 29 05:28:50.208616 kubelet[1933]: I1029 05:28:50.208495 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.208967 kubelet[1933]: I1029 05:28:50.208575 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.208967 kubelet[1933]: I1029 05:28:50.208608 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.209386 kubelet[1933]: I1029 05:28:50.209344 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.209526 kubelet[1933]: I1029 05:28:50.209438 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.209658 kubelet[1933]: I1029 05:28:50.209472 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cni-path" (OuterVolumeSpecName: "cni-path") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.209956 kubelet[1933]: I1029 05:28:50.209913 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.210046 kubelet[1933]: I1029 05:28:50.209958 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.210046 kubelet[1933]: I1029 05:28:50.209988 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-hostproc" (OuterVolumeSpecName: "hostproc") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.210794 kubelet[1933]: I1029 05:28:50.210747 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-cilium-config-path\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.210954 kubelet[1933]: I1029 05:28:50.210928 1933 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtj2m\" (UniqueName: \"kubernetes.io/projected/e8751b1d-d64d-4f7d-b496-77e4bbe52f16-kube-api-access-rtj2m\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.211669 kubelet[1933]: I1029 05:28:50.211634 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:50.214930 kubelet[1933]: I1029 05:28:50.214896 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a355ba-d143-481e-b662-b538b703d12f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 05:28:50.216466 kubelet[1933]: I1029 05:28:50.216424 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 05:28:50.217072 kubelet[1933]: I1029 05:28:50.217040 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-kube-api-access-vdsjw" (OuterVolumeSpecName: "kube-api-access-vdsjw") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "kube-api-access-vdsjw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 05:28:50.219575 kubelet[1933]: I1029 05:28:50.219543 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a355ba-d143-481e-b662-b538b703d12f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a5a355ba-d143-481e-b662-b538b703d12f" (UID: "a5a355ba-d143-481e-b662-b538b703d12f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311825 1933 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vdsjw\" (UniqueName: \"kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-kube-api-access-vdsjw\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311887 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-run\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311914 1933 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-xtables-lock\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311928 1933 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-kernel\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311946 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5a355ba-d143-481e-b662-b538b703d12f-cilium-config-path\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311962 1933 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-etc-cni-netd\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311976 1933 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5a355ba-d143-481e-b662-b538b703d12f-hubble-tls\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.311984 kubelet[1933]: I1029 05:28:50.311992 1933 reconciler_common.go:299] "Volume detached for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cilium-cgroup\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.312583 kubelet[1933]: I1029 05:28:50.312006 1933 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-lib-modules\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.312583 kubelet[1933]: I1029 05:28:50.312021 1933 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-cni-path\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.312583 kubelet[1933]: I1029 05:28:50.312035 1933 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5a355ba-d143-481e-b662-b538b703d12f-clustermesh-secrets\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.312583 kubelet[1933]: I1029 05:28:50.312048 1933 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-bpf-maps\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.312583 kubelet[1933]: I1029 05:28:50.312065 1933 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-host-proc-sys-net\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.312583 kubelet[1933]: I1029 05:28:50.312080 1933 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5a355ba-d143-481e-b662-b538b703d12f-hostproc\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:50.377937 systemd[1]: Removed slice kubepods-burstable-poda5a355ba_d143_481e_b662_b538b703d12f.slice. Oct 29 05:28:50.378083 systemd[1]: kubepods-burstable-poda5a355ba_d143_481e_b662_b538b703d12f.slice: Consumed 10.198s CPU time. Oct 29 05:28:50.379633 systemd[1]: Removed slice kubepods-besteffort-pode8751b1d_d64d_4f7d_b496_77e4bbe52f16.slice. Oct 29 05:28:50.756972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51-rootfs.mount: Deactivated successfully. Oct 29 05:28:50.757132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6a0e620cf80af35a9b8bcdba84831d5c5558972561cfc6750b4c346daa50707-rootfs.mount: Deactivated successfully. Oct 29 05:28:50.757235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43-rootfs.mount: Deactivated successfully. Oct 29 05:28:50.757347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-977fd99e0e7d623da03ae48b1f6a8734916408e7ca317f153eedf4c91c938a43-shm.mount: Deactivated successfully. Oct 29 05:28:50.757491 systemd[1]: var-lib-kubelet-pods-e8751b1d\x2dd64d\x2d4f7d\x2db496\x2d77e4bbe52f16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtj2m.mount: Deactivated successfully. Oct 29 05:28:50.757594 systemd[1]: var-lib-kubelet-pods-a5a355ba\x2dd143\x2d481e\x2db662\x2db538b703d12f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdsjw.mount: Deactivated successfully. 
Oct 29 05:28:50.757711 systemd[1]: var-lib-kubelet-pods-a5a355ba\x2dd143\x2d481e\x2db662\x2db538b703d12f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 29 05:28:50.757843 systemd[1]: var-lib-kubelet-pods-a5a355ba\x2dd143\x2d481e\x2db662\x2db538b703d12f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 29 05:28:50.865727 kubelet[1933]: I1029 05:28:50.865372 1933 scope.go:117] "RemoveContainer" containerID="8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51" Oct 29 05:28:50.871732 env[1191]: time="2025-10-29T05:28:50.871669412Z" level=info msg="RemoveContainer for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\"" Oct 29 05:28:50.878005 env[1191]: time="2025-10-29T05:28:50.877937823Z" level=info msg="RemoveContainer for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" returns successfully" Oct 29 05:28:50.880309 kubelet[1933]: I1029 05:28:50.879800 1933 scope.go:117] "RemoveContainer" containerID="64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf" Oct 29 05:28:50.882512 env[1191]: time="2025-10-29T05:28:50.881277501Z" level=info msg="RemoveContainer for \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\"" Oct 29 05:28:50.884587 env[1191]: time="2025-10-29T05:28:50.884533050Z" level=info msg="RemoveContainer for \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\" returns successfully" Oct 29 05:28:50.884784 kubelet[1933]: I1029 05:28:50.884739 1933 scope.go:117] "RemoveContainer" containerID="0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb" Oct 29 05:28:50.889066 env[1191]: time="2025-10-29T05:28:50.886065594Z" level=info msg="RemoveContainer for \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\"" Oct 29 05:28:50.890367 env[1191]: time="2025-10-29T05:28:50.890320248Z" level=info msg="RemoveContainer for \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\" returns successfully" Oct 29 05:28:50.890706 kubelet[1933]: I1029 05:28:50.890678 1933 scope.go:117] "RemoveContainer" containerID="a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127" Oct 29 05:28:50.894040 env[1191]: time="2025-10-29T05:28:50.893985426Z" level=info msg="RemoveContainer for \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\"" Oct 29 05:28:50.904452 env[1191]: time="2025-10-29T05:28:50.904407403Z" level=info msg="RemoveContainer for \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\" returns successfully" Oct 29 05:28:50.904973 kubelet[1933]: I1029 05:28:50.904935 1933 scope.go:117] "RemoveContainer" containerID="1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e" Oct 29 05:28:50.910584 env[1191]: time="2025-10-29T05:28:50.910527510Z" level=info msg="RemoveContainer for \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\"" Oct 29 05:28:50.916719 env[1191]: time="2025-10-29T05:28:50.916677810Z" level=info msg="RemoveContainer for \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\" returns successfully" Oct 29 05:28:50.916936 kubelet[1933]: I1029 05:28:50.916902 1933 scope.go:117] "RemoveContainer" containerID="8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51" Oct 29 05:28:50.917553 env[1191]: time="2025-10-29T05:28:50.917369964Z" level=error msg="ContainerStatus for \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\": not found" Oct 29 05:28:50.917997 kubelet[1933]: E1029 05:28:50.917961 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\": not found" containerID="8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51" Oct 29 05:28:50.919835 kubelet[1933]: I1029 05:28:50.919645 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51"} err="failed to get container status \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e642f5274256b6e514bb38e7427d0e6b8919e01523819f17a237836adc6da51\": not found" Oct 29 05:28:50.919835 kubelet[1933]: I1029 05:28:50.919796 1933 scope.go:117] "RemoveContainer" containerID="64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf" Oct 29 05:28:50.920086 env[1191]: time="2025-10-29T05:28:50.920014680Z" level=error msg="ContainerStatus for \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\": not found" Oct 29 05:28:50.920552 kubelet[1933]: E1029 05:28:50.920377 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\": not found" containerID="64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf" Oct 29 05:28:50.920552 kubelet[1933]: I1029 05:28:50.920415 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf"} err="failed to get container status \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"64efffaed4d0961c9a1f35587d169ce0662d3b93e906ef58fdcfc4004c689cbf\": not found" Oct 29 05:28:50.920552 kubelet[1933]: I1029 05:28:50.920439 1933 scope.go:117] "RemoveContainer" containerID="0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb" Oct 29 05:28:50.920829 env[1191]: time="2025-10-29T05:28:50.920648539Z" level=error msg="ContainerStatus for \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\": not found" Oct 29 05:28:50.920983 kubelet[1933]: E1029 05:28:50.920951 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\": not found" containerID="0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb" Oct 29 05:28:50.921094 kubelet[1933]: I1029 05:28:50.920991 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb"} err="failed to get container status 
\"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b7584d3ca6ae42cab040c4a12f8bf91a1e5b6d532d05a22e55b5707c69ba1eb\": not found" Oct 29 05:28:50.921094 kubelet[1933]: I1029 05:28:50.921013 1933 scope.go:117] "RemoveContainer" containerID="a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127" Oct 29 05:28:50.921352 env[1191]: time="2025-10-29T05:28:50.921286033Z" level=error msg="ContainerStatus for \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\": not found" Oct 29 05:28:50.921602 kubelet[1933]: E1029 05:28:50.921529 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\": not found" containerID="a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127" Oct 29 05:28:50.921678 kubelet[1933]: I1029 05:28:50.921599 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127"} err="failed to get container status \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\": rpc error: code = NotFound desc = an error occurred when try to find container \"a74c348eceee49cac0f673dd6f2443bb3c4144e39c0c68a9610103c3d1bed127\": not found" Oct 29 05:28:50.921678 kubelet[1933]: I1029 05:28:50.921621 1933 scope.go:117] "RemoveContainer" containerID="1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e" Oct 29 05:28:50.922006 env[1191]: time="2025-10-29T05:28:50.921916977Z" level=error msg="ContainerStatus for \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\": not found" Oct 29 05:28:50.922317 kubelet[1933]: E1029 05:28:50.922167 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\": not found" containerID="1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e" Oct 29 05:28:50.922317 kubelet[1933]: I1029 05:28:50.922210 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e"} err="failed to get container status \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e1d8a62f44bf11b5fdea0fa62a3c9547c4df4f17d59897e7a14f5fc20af057e\": not found" Oct 29 05:28:50.922317 kubelet[1933]: I1029 05:28:50.922233 1933 scope.go:117] "RemoveContainer" containerID="9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4" Oct 29 05:28:50.923674 env[1191]: time="2025-10-29T05:28:50.923621059Z" level=info msg="RemoveContainer for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\"" Oct 29 05:28:50.927215 env[1191]: time="2025-10-29T05:28:50.927151449Z" level=info msg="RemoveContainer for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" 
returns successfully" Oct 29 05:28:50.927444 kubelet[1933]: I1029 05:28:50.927410 1933 scope.go:117] "RemoveContainer" containerID="9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4" Oct 29 05:28:50.927888 env[1191]: time="2025-10-29T05:28:50.927822078Z" level=error msg="ContainerStatus for \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\": not found" Oct 29 05:28:50.928168 kubelet[1933]: E1029 05:28:50.928109 1933 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\": not found" containerID="9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4" Oct 29 05:28:50.928289 kubelet[1933]: I1029 05:28:50.928166 1933 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4"} err="failed to get container status \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"9852b381c0bfbc2151a9f93f02048866b89a175794acd4104fb8e4cf76ea36a4\": not found" Oct 29 05:28:51.805538 sshd[3494]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:51.809993 systemd[1]: sshd@24-10.230.52.194:22-147.75.109.163:60402.service: Deactivated successfully. Oct 29 05:28:51.811767 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 05:28:51.811842 systemd-logind[1183]: Session 22 logged out. Waiting for processes to exit. Oct 29 05:28:51.814624 systemd-logind[1183]: Removed session 22. Oct 29 05:28:51.954024 systemd[1]: Started sshd@25-10.230.52.194:22-147.75.109.163:34538.service. Oct 29 05:28:52.371883 kubelet[1933]: I1029 05:28:52.371806 1933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a355ba-d143-481e-b662-b538b703d12f" path="/var/lib/kubelet/pods/a5a355ba-d143-481e-b662-b538b703d12f/volumes" Oct 29 05:28:52.373818 kubelet[1933]: I1029 05:28:52.373751 1933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8751b1d-d64d-4f7d-b496-77e4bbe52f16" path="/var/lib/kubelet/pods/e8751b1d-d64d-4f7d-b496-77e4bbe52f16/volumes" Oct 29 05:28:52.856423 sshd[3658]: Accepted publickey for core from 147.75.109.163 port 34538 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:52.859147 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:52.866387 systemd-logind[1183]: New session 23 of user core. Oct 29 05:28:52.867529 systemd[1]: Started session-23.scope. Oct 29 05:28:53.555866 update_engine[1185]: I1029 05:28:53.554876 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 29 05:28:53.555866 update_engine[1185]: I1029 05:28:53.555411 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 29 05:28:53.555866 update_engine[1185]: I1029 05:28:53.555805 1185 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 29 05:28:53.556952 update_engine[1185]: E1029 05:28:53.556118 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 29 05:28:53.556952 update_engine[1185]: I1029 05:28:53.556241 1185 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 29 05:28:54.237317 kubelet[1933]: I1029 05:28:54.237245 1933 memory_manager.go:355] "RemoveStaleState removing state" podUID="e8751b1d-d64d-4f7d-b496-77e4bbe52f16" containerName="cilium-operator" Oct 29 05:28:54.237317 kubelet[1933]: I1029 05:28:54.237294 1933 memory_manager.go:355] "RemoveStaleState removing state" podUID="a5a355ba-d143-481e-b662-b538b703d12f" containerName="cilium-agent" Oct 29 05:28:54.257801 systemd[1]: Created slice kubepods-burstable-podb17b41da_6822_4738_a320_f4e6d46744a1.slice. Oct 29 05:28:54.338751 kubelet[1933]: I1029 05:28:54.338700 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-lib-modules\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.339100 kubelet[1933]: I1029 05:28:54.339069 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-run\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.339270 kubelet[1933]: I1029 05:28:54.339243 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-etc-cni-netd\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.339469 kubelet[1933]: I1029 05:28:54.339439 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-bpf-maps\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.339695 kubelet[1933]: I1029 05:28:54.339668 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-xtables-lock\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.339868 kubelet[1933]: I1029 05:28:54.339828 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-clustermesh-secrets\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.340055 kubelet[1933]: I1029 05:28:54.340029 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-hostproc\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.340216 kubelet[1933]: I1029 05:28:54.340184 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-config-path\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.340393 kubelet[1933]: I1029 05:28:54.340369 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-hubble-tls\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.340590 kubelet[1933]: I1029 05:28:54.340562 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cni-path\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.340762 kubelet[1933]: I1029 05:28:54.340737 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-net\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.340965 kubelet[1933]: I1029 05:28:54.340938 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54lz\" (UniqueName: \"kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-kube-api-access-h54lz\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.341145 kubelet[1933]: I1029 05:28:54.341119 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-kernel\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.341303 kubelet[1933]: I1029 05:28:54.341272 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-cgroup\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.341508 kubelet[1933]: I1029 05:28:54.341472 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-ipsec-secrets\") pod \"cilium-gcqbw\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " pod="kube-system/cilium-gcqbw" Oct 29 05:28:54.394476 sshd[3658]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:54.399714 systemd[1]: sshd@25-10.230.52.194:22-147.75.109.163:34538.service: Deactivated successfully. Oct 29 05:28:54.401202 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 05:28:54.401259 systemd-logind[1183]: Session 23 logged out. Waiting for processes to exit. Oct 29 05:28:54.403274 systemd-logind[1183]: Removed session 23. Oct 29 05:28:54.545968 systemd[1]: Started sshd@26-10.230.52.194:22-147.75.109.163:34540.service. 
Oct 29 05:28:54.570368 env[1191]: time="2025-10-29T05:28:54.570267092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcqbw,Uid:b17b41da-6822-4738-a320-f4e6d46744a1,Namespace:kube-system,Attempt:0,}" Oct 29 05:28:54.579411 kubelet[1933]: E1029 05:28:54.579359 1933 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 29 05:28:54.593209 env[1191]: time="2025-10-29T05:28:54.593045171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:28:54.593209 env[1191]: time="2025-10-29T05:28:54.593162844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:28:54.593502 env[1191]: time="2025-10-29T05:28:54.593179957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:28:54.593972 env[1191]: time="2025-10-29T05:28:54.593915229Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91 pid=3682 runtime=io.containerd.runc.v2 Oct 29 05:28:54.613128 systemd[1]: Started cri-containerd-4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91.scope. Oct 29 05:28:54.657247 env[1191]: time="2025-10-29T05:28:54.657183291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcqbw,Uid:b17b41da-6822-4738-a320-f4e6d46744a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\"" Oct 29 05:28:54.664072 env[1191]: time="2025-10-29T05:28:54.664021147Z" level=info msg="CreateContainer within sandbox \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 05:28:54.675615 env[1191]: time="2025-10-29T05:28:54.675564558Z" level=info msg="CreateContainer within sandbox \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\"" Oct 29 05:28:54.676387 env[1191]: time="2025-10-29T05:28:54.676303234Z" level=info msg="StartContainer for \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\"" Oct 29 05:28:54.701257 systemd[1]: Started cri-containerd-1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff.scope. Oct 29 05:28:54.726678 systemd[1]: cri-containerd-1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff.scope: Deactivated successfully. 
Oct 29 05:28:54.744683 env[1191]: time="2025-10-29T05:28:54.744564867Z" level=info msg="shim disconnected" id=1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff Oct 29 05:28:54.745026 env[1191]: time="2025-10-29T05:28:54.744994592Z" level=warning msg="cleaning up after shim disconnected" id=1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff namespace=k8s.io Oct 29 05:28:54.745163 env[1191]: time="2025-10-29T05:28:54.745138462Z" level=info msg="cleaning up dead shim" Oct 29 05:28:54.763212 env[1191]: time="2025-10-29T05:28:54.763141196Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\ntime=\"2025-10-29T05:28:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 29 05:28:54.764011 env[1191]: time="2025-10-29T05:28:54.763811789Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 29 05:28:54.764909 env[1191]: time="2025-10-29T05:28:54.764847781Z" level=error msg="Failed to pipe stderr of container \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\"" error="reading from a closed fifo" Oct 29 05:28:54.765105 env[1191]: time="2025-10-29T05:28:54.765062016Z" level=error msg="Failed to pipe stdout of container \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\"" error="reading from a closed fifo" Oct 29 05:28:54.766501 env[1191]: time="2025-10-29T05:28:54.766448789Z" level=error msg="StartContainer for \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 29 05:28:54.768120 kubelet[1933]: E1029 05:28:54.768043 1933 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff" Oct 29 05:28:54.770843 kubelet[1933]: E1029 05:28:54.770250 1933 kuberuntime_manager.go:1341] "Unhandled Error" err=< Oct 29 05:28:54.770843 kubelet[1933]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 29 05:28:54.770843 kubelet[1933]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 29 05:28:54.770843 kubelet[1933]: rm /hostbin/cilium-mount Oct 29 05:28:54.771491 kubelet[1933]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h54lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-gcqbw_kube-system(b17b41da-6822-4738-a320-f4e6d46744a1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 29 05:28:54.771491 kubelet[1933]: > logger="UnhandledError" Oct 29 05:28:54.772297 kubelet[1933]: E1029 05:28:54.772167 1933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-gcqbw" podUID="b17b41da-6822-4738-a320-f4e6d46744a1" Oct 29 05:28:54.903118 env[1191]: time="2025-10-29T05:28:54.903066356Z" level=info msg="CreateContainer within sandbox \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 29 05:28:54.919142 env[1191]: time="2025-10-29T05:28:54.919088404Z" level=info msg="CreateContainer within sandbox \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\"" Oct 29 05:28:54.922114 env[1191]: time="2025-10-29T05:28:54.922075833Z" level=info msg="StartContainer for \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\"" Oct 29 05:28:54.947333 systemd[1]: Started cri-containerd-492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51.scope. Oct 29 05:28:54.968240 systemd[1]: cri-containerd-492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51.scope: Deactivated successfully. 
Oct 29 05:28:54.979121 env[1191]: time="2025-10-29T05:28:54.979047967Z" level=info msg="shim disconnected" id=492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51 Oct 29 05:28:54.979293 env[1191]: time="2025-10-29T05:28:54.979125821Z" level=warning msg="cleaning up after shim disconnected" id=492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51 namespace=k8s.io Oct 29 05:28:54.979293 env[1191]: time="2025-10-29T05:28:54.979142603Z" level=info msg="cleaning up dead shim" Oct 29 05:28:54.990513 env[1191]: time="2025-10-29T05:28:54.990465352Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3780 runtime=io.containerd.runc.v2\ntime=\"2025-10-29T05:28:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 29 05:28:54.991023 env[1191]: time="2025-10-29T05:28:54.990962586Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 29 05:28:54.991864 env[1191]: time="2025-10-29T05:28:54.991198445Z" level=error msg="Failed to pipe stdout of container \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\"" error="reading from a closed fifo" Oct 29 05:28:54.992086 env[1191]: time="2025-10-29T05:28:54.992029143Z" level=error msg="Failed to pipe stderr of container \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\"" error="reading from a closed fifo" Oct 29 05:28:54.993483 env[1191]: time="2025-10-29T05:28:54.993441089Z" level=error msg="StartContainer for \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 29 05:28:54.994576 kubelet[1933]: E1029 05:28:54.993860 1933 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51" Oct 29 05:28:54.994576 kubelet[1933]: E1029 05:28:54.994077 1933 kuberuntime_manager.go:1341] "Unhandled Error" err=< Oct 29 05:28:54.994576 kubelet[1933]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 29 05:28:54.994576 kubelet[1933]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 29 05:28:54.994576 kubelet[1933]: rm /hostbin/cilium-mount Oct 29 05:28:54.994576 kubelet[1933]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h54lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-gcqbw_kube-system(b17b41da-6822-4738-a320-f4e6d46744a1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 29 05:28:54.994576 kubelet[1933]: > logger="UnhandledError" Oct 29 05:28:54.995750 kubelet[1933]: E1029 05:28:54.995683 1933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-gcqbw" podUID="b17b41da-6822-4738-a320-f4e6d46744a1" Oct 29 05:28:55.448184 sshd[3672]: Accepted publickey for core from 147.75.109.163 port 34540 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:55.455897 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:55.464456 systemd[1]: Started session-24.scope. Oct 29 05:28:55.465340 systemd-logind[1183]: New session 24 of user core. 
Oct 29 05:28:55.898594 kubelet[1933]: I1029 05:28:55.898488 1933 scope.go:117] "RemoveContainer" containerID="1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff" Oct 29 05:28:55.899496 kubelet[1933]: I1029 05:28:55.899462 1933 scope.go:117] "RemoveContainer" containerID="1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff" Oct 29 05:28:55.902667 env[1191]: time="2025-10-29T05:28:55.902571795Z" level=info msg="RemoveContainer for \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\"" Oct 29 05:28:55.903366 env[1191]: time="2025-10-29T05:28:55.902571848Z" level=info msg="RemoveContainer for \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\"" Oct 29 05:28:55.903757 env[1191]: time="2025-10-29T05:28:55.903658349Z" level=error msg="RemoveContainer for \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\" failed" error="failed to set removing state for container \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\": container is already in removing state" Oct 29 05:28:55.904203 kubelet[1933]: E1029 05:28:55.904061 1933 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\": container is already in removing state" containerID="1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff" Oct 29 05:28:55.905513 kubelet[1933]: E1029 05:28:55.905447 1933 kuberuntime_container.go:897] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\": container is already in removing state; Skipping pod \"cilium-gcqbw_kube-system(b17b41da-6822-4738-a320-f4e6d46744a1)\"" logger="UnhandledError" Oct 29 05:28:55.907110 kubelet[1933]: E1029 05:28:55.906884 1933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-gcqbw_kube-system(b17b41da-6822-4738-a320-f4e6d46744a1)\"" pod="kube-system/cilium-gcqbw" podUID="b17b41da-6822-4738-a320-f4e6d46744a1" Oct 29 05:28:55.907209 env[1191]: time="2025-10-29T05:28:55.906931715Z" level=info msg="RemoveContainer for \"1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff\" returns successfully" Oct 29 05:28:56.257688 sshd[3672]: pam_unix(sshd:session): session closed for user core Oct 29 05:28:56.262214 systemd[1]: sshd@26-10.230.52.194:22-147.75.109.163:34540.service: Deactivated successfully. Oct 29 05:28:56.263416 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 05:28:56.264911 systemd-logind[1183]: Session 24 logged out. Waiting for processes to exit. Oct 29 05:28:56.266456 systemd-logind[1183]: Removed session 24. Oct 29 05:28:56.407218 systemd[1]: Started sshd@27-10.230.52.194:22-147.75.109.163:34554.service. 
Oct 29 05:28:56.907697 env[1191]: time="2025-10-29T05:28:56.903472398Z" level=info msg="StopPodSandbox for \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\"" Oct 29 05:28:56.907697 env[1191]: time="2025-10-29T05:28:56.903593904Z" level=info msg="Container to stop \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 05:28:56.906671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91-shm.mount: Deactivated successfully. Oct 29 05:28:56.931932 systemd[1]: cri-containerd-4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91.scope: Deactivated successfully. Oct 29 05:28:56.969016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91-rootfs.mount: Deactivated successfully. Oct 29 05:28:56.977402 env[1191]: time="2025-10-29T05:28:56.977323312Z" level=info msg="shim disconnected" id=4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91 Oct 29 05:28:56.977627 env[1191]: time="2025-10-29T05:28:56.977402258Z" level=warning msg="cleaning up after shim disconnected" id=4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91 namespace=k8s.io Oct 29 05:28:56.977627 env[1191]: time="2025-10-29T05:28:56.977418583Z" level=info msg="cleaning up dead shim" Oct 29 05:28:57.000345 env[1191]: time="2025-10-29T05:28:57.000271957Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3823 runtime=io.containerd.runc.v2\n" Oct 29 05:28:57.000856 env[1191]: time="2025-10-29T05:28:57.000814214Z" level=info msg="TearDown network for sandbox \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\" successfully" Oct 29 05:28:57.000856 env[1191]: time="2025-10-29T05:28:57.000851554Z" level=info msg="StopPodSandbox for \"4656ff2cfe999a5897d168f138ea0ce4c6c556be1201e5ce0a02e335e2dccc91\" returns successfully" Oct 29 05:28:57.165793 kubelet[1933]: I1029 05:28:57.165593 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-xtables-lock\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.165793 kubelet[1933]: I1029 05:28:57.165670 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-hubble-tls\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.165793 kubelet[1933]: I1029 05:28:57.165705 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h54lz\" (UniqueName: \"kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-kube-api-access-h54lz\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.165793 kubelet[1933]: I1029 05:28:57.165768 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-ipsec-secrets\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 
05:28:57.165819 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-bpf-maps\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.165855 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-config-path\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.165878 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-kernel\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.165901 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-cgroup\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.165929 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-lib-modules\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.165953 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-hostproc\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.165986 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-net\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.166028 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-run\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.166065 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-etc-cni-netd\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.166106 1933 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-clustermesh-secrets\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.166129 1933 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cni-path\") pod \"b17b41da-6822-4738-a320-f4e6d46744a1\" (UID: \"b17b41da-6822-4738-a320-f4e6d46744a1\") " Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.166243 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cni-path" (OuterVolumeSpecName: "cni-path") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.166577 kubelet[1933]: I1029 05:28:57.166306 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.167406 kubelet[1933]: I1029 05:28:57.166846 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.169221 kubelet[1933]: I1029 05:28:57.169190 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.169572 kubelet[1933]: I1029 05:28:57.169490 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-hostproc" (OuterVolumeSpecName: "hostproc") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.169828 kubelet[1933]: I1029 05:28:57.169801 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.170005 kubelet[1933]: I1029 05:28:57.169979 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.170176 kubelet[1933]: I1029 05:28:57.170151 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.173277 systemd[1]: var-lib-kubelet-pods-b17b41da\x2d6822\x2d4738\x2da320\x2df4e6d46744a1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 29 05:28:57.177881 kubelet[1933]: I1029 05:28:57.177848 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.177978 kubelet[1933]: I1029 05:28:57.177899 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 05:28:57.181086 systemd[1]: var-lib-kubelet-pods-b17b41da\x2d6822\x2d4738\x2da320\x2df4e6d46744a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh54lz.mount: Deactivated successfully. Oct 29 05:28:57.182405 kubelet[1933]: I1029 05:28:57.182358 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 05:28:57.182792 kubelet[1933]: I1029 05:28:57.182748 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 05:28:57.183034 kubelet[1933]: I1029 05:28:57.182988 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-kube-api-access-h54lz" (OuterVolumeSpecName: "kube-api-access-h54lz") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "kube-api-access-h54lz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 05:28:57.187284 systemd[1]: var-lib-kubelet-pods-b17b41da\x2d6822\x2d4738\x2da320\x2df4e6d46744a1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Oct 29 05:28:57.188791 kubelet[1933]: I1029 05:28:57.188733 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 05:28:57.188934 kubelet[1933]: I1029 05:28:57.188862 1933 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b17b41da-6822-4738-a320-f4e6d46744a1" (UID: "b17b41da-6822-4738-a320-f4e6d46744a1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 05:28:57.266923 kubelet[1933]: I1029 05:28:57.266861 1933 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-bpf-maps\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.266923 kubelet[1933]: I1029 05:28:57.266907 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-config-path\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.266923 kubelet[1933]: I1029 05:28:57.266930 1933 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-kernel\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.266946 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-cgroup\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.266963 1933 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-lib-modules\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.266976 1933 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-host-proc-sys-net\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.266993 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-run\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267021 1933 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-hostproc\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267046 1933 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-etc-cni-netd\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267060 1933 
reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-cni-path\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267083 1933 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-clustermesh-secrets\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267096 1933 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b17b41da-6822-4738-a320-f4e6d46744a1-xtables-lock\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267118 1933 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-hubble-tls\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267144 1933 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h54lz\" (UniqueName: \"kubernetes.io/projected/b17b41da-6822-4738-a320-f4e6d46744a1-kube-api-access-h54lz\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.267316 kubelet[1933]: I1029 05:28:57.267158 1933 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b17b41da-6822-4738-a320-f4e6d46744a1-cilium-ipsec-secrets\") on node \"srv-clpdb.gb1.brightbox.com\" DevicePath \"\"" Oct 29 05:28:57.310511 sshd[3803]: Accepted publickey for core from 147.75.109.163 port 34554 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 05:28:57.312563 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 05:28:57.319876 systemd-logind[1183]: New session 25 of user core. Oct 29 05:28:57.320848 systemd[1]: Started session-25.scope. Oct 29 05:28:57.697508 kubelet[1933]: I1029 05:28:57.697356 1933 setters.go:602] "Node became not ready" node="srv-clpdb.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-29T05:28:57Z","lastTransitionTime":"2025-10-29T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 29 05:28:57.860033 kubelet[1933]: W1029 05:28:57.859944 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17b41da_6822_4738_a320_f4e6d46744a1.slice/cri-containerd-1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff.scope WatchSource:0}: container "1953c2c960f95df1712aba539ef752a5f78494be18a8005baf8c09345ba894ff" in namespace "k8s.io": not found Oct 29 05:28:57.906710 systemd[1]: var-lib-kubelet-pods-b17b41da\x2d6822\x2d4738\x2da320\x2df4e6d46744a1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 29 05:28:57.911644 kubelet[1933]: I1029 05:28:57.911601 1933 scope.go:117] "RemoveContainer" containerID="492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51" Oct 29 05:28:57.928129 systemd[1]: Removed slice kubepods-burstable-podb17b41da_6822_4738_a320_f4e6d46744a1.slice. 
Oct 29 05:28:57.930211 env[1191]: time="2025-10-29T05:28:57.930131056Z" level=info msg="RemoveContainer for \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\"" Oct 29 05:28:57.936665 env[1191]: time="2025-10-29T05:28:57.936610816Z" level=info msg="RemoveContainer for \"492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51\" returns successfully" Oct 29 05:28:58.029042 kubelet[1933]: I1029 05:28:58.028902 1933 memory_manager.go:355] "RemoveStaleState removing state" podUID="b17b41da-6822-4738-a320-f4e6d46744a1" containerName="mount-cgroup" Oct 29 05:28:58.029042 kubelet[1933]: I1029 05:28:58.028971 1933 memory_manager.go:355] "RemoveStaleState removing state" podUID="b17b41da-6822-4738-a320-f4e6d46744a1" containerName="mount-cgroup" Oct 29 05:28:58.042146 systemd[1]: Created slice kubepods-burstable-pod4e6ad7b7_2ede_4379_a639_e85dad61cd5c.slice. Oct 29 05:28:58.181339 kubelet[1933]: I1029 05:28:58.181281 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-etc-cni-netd\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182148 kubelet[1933]: I1029 05:28:58.182108 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-hubble-tls\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182362 kubelet[1933]: I1029 05:28:58.182333 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-hostproc\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182585 kubelet[1933]: I1029 05:28:58.182549 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-clustermesh-secrets\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182857 kubelet[1933]: I1029 05:28:58.182765 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-host-proc-sys-net\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182976 kubelet[1933]: I1029 05:28:58.182878 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-cilium-run\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182976 kubelet[1933]: I1029 05:28:58.182918 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-cni-path\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.182976 kubelet[1933]: I1029 05:28:58.182949 1933 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-cilium-config-path\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.182979 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-cilium-ipsec-secrets\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.183007 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-cilium-cgroup\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.183031 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-host-proc-sys-kernel\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.183080 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-bpf-maps\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.183106 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-lib-modules\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.183129 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-xtables-lock\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.183259 kubelet[1933]: I1029 05:28:58.183184 1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2h9\" (UniqueName: \"kubernetes.io/projected/4e6ad7b7-2ede-4379-a639-e85dad61cd5c-kube-api-access-rn2h9\") pod \"cilium-mwzjj\" (UID: \"4e6ad7b7-2ede-4379-a639-e85dad61cd5c\") " pod="kube-system/cilium-mwzjj" Oct 29 05:28:58.346032 env[1191]: time="2025-10-29T05:28:58.345938876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwzjj,Uid:4e6ad7b7-2ede-4379-a639-e85dad61cd5c,Namespace:kube-system,Attempt:0,}" Oct 29 05:28:58.363050 env[1191]: time="2025-10-29T05:28:58.362744886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 05:28:58.363050 env[1191]: time="2025-10-29T05:28:58.362857223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 05:28:58.363050 env[1191]: time="2025-10-29T05:28:58.362874646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 05:28:58.363419 env[1191]: time="2025-10-29T05:28:58.363230564Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493 pid=3860 runtime=io.containerd.runc.v2 Oct 29 05:28:58.371373 kubelet[1933]: I1029 05:28:58.371312 1933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17b41da-6822-4738-a320-f4e6d46744a1" path="/var/lib/kubelet/pods/b17b41da-6822-4738-a320-f4e6d46744a1/volumes" Oct 29 05:28:58.383129 systemd[1]: Started cri-containerd-5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493.scope. Oct 29 05:28:58.433943 env[1191]: time="2025-10-29T05:28:58.433873010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwzjj,Uid:4e6ad7b7-2ede-4379-a639-e85dad61cd5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\"" Oct 29 05:28:58.439428 env[1191]: time="2025-10-29T05:28:58.439344933Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 05:28:58.451961 env[1191]: time="2025-10-29T05:28:58.451889946Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787\"" Oct 29 05:28:58.454108 env[1191]: time="2025-10-29T05:28:58.454072306Z" level=info msg="StartContainer for \"f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787\"" Oct 29 05:28:58.475875 systemd[1]: Started cri-containerd-f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787.scope. Oct 29 05:28:58.529722 env[1191]: time="2025-10-29T05:28:58.529669370Z" level=info msg="StartContainer for \"f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787\" returns successfully" Oct 29 05:28:58.547652 systemd[1]: cri-containerd-f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787.scope: Deactivated successfully. 
Oct 29 05:28:58.580582 env[1191]: time="2025-10-29T05:28:58.580515061Z" level=info msg="shim disconnected" id=f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787 Oct 29 05:28:58.580951 env[1191]: time="2025-10-29T05:28:58.580918371Z" level=warning msg="cleaning up after shim disconnected" id=f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787 namespace=k8s.io Oct 29 05:28:58.581082 env[1191]: time="2025-10-29T05:28:58.581053875Z" level=info msg="cleaning up dead shim" Oct 29 05:28:58.593497 env[1191]: time="2025-10-29T05:28:58.593384460Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3942 runtime=io.containerd.runc.v2\n" Oct 29 05:28:58.923128 env[1191]: time="2025-10-29T05:28:58.923064439Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 29 05:28:58.939342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725227392.mount: Deactivated successfully. Oct 29 05:28:58.945242 env[1191]: time="2025-10-29T05:28:58.945197120Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471\"" Oct 29 05:28:58.952544 env[1191]: time="2025-10-29T05:28:58.952503857Z" level=info msg="StartContainer for \"6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471\"" Oct 29 05:28:58.981612 systemd[1]: Started cri-containerd-6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471.scope. Oct 29 05:28:59.026148 env[1191]: time="2025-10-29T05:28:59.026096320Z" level=info msg="StartContainer for \"6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471\" returns successfully" Oct 29 05:28:59.038024 systemd[1]: cri-containerd-6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471.scope: Deactivated successfully. Oct 29 05:28:59.070527 env[1191]: time="2025-10-29T05:28:59.070464182Z" level=info msg="shim disconnected" id=6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471 Oct 29 05:28:59.070527 env[1191]: time="2025-10-29T05:28:59.070526017Z" level=warning msg="cleaning up after shim disconnected" id=6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471 namespace=k8s.io Oct 29 05:28:59.070945 env[1191]: time="2025-10-29T05:28:59.070543855Z" level=info msg="cleaning up dead shim" Oct 29 05:28:59.080969 env[1191]: time="2025-10-29T05:28:59.080925062Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:28:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n" Oct 29 05:28:59.580904 kubelet[1933]: E1029 05:28:59.580824 1933 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 29 05:28:59.907075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471-rootfs.mount: Deactivated successfully. 
Oct 29 05:28:59.925272 env[1191]: time="2025-10-29T05:28:59.925224789Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 29 05:28:59.946808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702792847.mount: Deactivated successfully. Oct 29 05:28:59.973342 env[1191]: time="2025-10-29T05:28:59.973254297Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8\"" Oct 29 05:28:59.974992 env[1191]: time="2025-10-29T05:28:59.974956186Z" level=info msg="StartContainer for \"b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8\"" Oct 29 05:29:00.018967 systemd[1]: Started cri-containerd-b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8.scope. Oct 29 05:29:00.073260 env[1191]: time="2025-10-29T05:29:00.073184187Z" level=info msg="StartContainer for \"b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8\" returns successfully" Oct 29 05:29:00.086229 systemd[1]: cri-containerd-b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8.scope: Deactivated successfully. Oct 29 05:29:00.120557 env[1191]: time="2025-10-29T05:29:00.120482818Z" level=info msg="shim disconnected" id=b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8 Oct 29 05:29:00.120557 env[1191]: time="2025-10-29T05:29:00.120549784Z" level=warning msg="cleaning up after shim disconnected" id=b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8 namespace=k8s.io Oct 29 05:29:00.120939 env[1191]: time="2025-10-29T05:29:00.120566916Z" level=info msg="cleaning up dead shim" Oct 29 05:29:00.131838 env[1191]: time="2025-10-29T05:29:00.131787541Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:29:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4067 runtime=io.containerd.runc.v2\n" Oct 29 05:29:00.907641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8-rootfs.mount: Deactivated successfully. Oct 29 05:29:00.933644 env[1191]: time="2025-10-29T05:29:00.931065990Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 29 05:29:00.949385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148495961.mount: Deactivated successfully. Oct 29 05:29:00.962428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853082248.mount: Deactivated successfully. 
Oct 29 05:29:00.977476 env[1191]: time="2025-10-29T05:29:00.977411863Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085\""
Oct 29 05:29:00.981128 env[1191]: time="2025-10-29T05:29:00.978841424Z" level=info msg="StartContainer for \"837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085\""
Oct 29 05:29:00.988194 kubelet[1933]: W1029 05:29:00.988112 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17b41da_6822_4738_a320_f4e6d46744a1.slice/cri-containerd-492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51.scope WatchSource:0}: container "492fc7cbe90a5eba2cb9462f0f819e66f1d3711ff90d863f6ddab2588ec8df51" in namespace "k8s.io": not found
Oct 29 05:29:01.012472 systemd[1]: Started cri-containerd-837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085.scope.
Oct 29 05:29:01.064381 systemd[1]: cri-containerd-837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085.scope: Deactivated successfully.
Oct 29 05:29:01.071871 env[1191]: time="2025-10-29T05:29:01.071825183Z" level=info msg="StartContainer for \"837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085\" returns successfully"
Oct 29 05:29:01.117281 env[1191]: time="2025-10-29T05:29:01.117216107Z" level=info msg="shim disconnected" id=837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085
Oct 29 05:29:01.117675 env[1191]: time="2025-10-29T05:29:01.117644138Z" level=warning msg="cleaning up after shim disconnected" id=837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085 namespace=k8s.io
Oct 29 05:29:01.117833 env[1191]: time="2025-10-29T05:29:01.117805455Z" level=info msg="cleaning up dead shim"
Oct 29 05:29:01.134481 env[1191]: time="2025-10-29T05:29:01.134414069Z" level=warning msg="cleanup warnings time=\"2025-10-29T05:29:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n"
Oct 29 05:29:01.937879 env[1191]: time="2025-10-29T05:29:01.936062416Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 29 05:29:01.954480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210918165.mount: Deactivated successfully.
Oct 29 05:29:01.967816 env[1191]: time="2025-10-29T05:29:01.967732151Z" level=info msg="CreateContainer within sandbox \"5c7e595af647f5e1a156d6d3a3974b92e4d8c068fc141fd35744bd5eef3ed493\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9\""
Oct 29 05:29:01.968680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700143256.mount: Deactivated successfully.
Oct 29 05:29:01.971140 env[1191]: time="2025-10-29T05:29:01.971105341Z" level=info msg="StartContainer for \"b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9\""
Oct 29 05:29:02.001226 systemd[1]: Started cri-containerd-b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9.scope.
Oct 29 05:29:02.065821 env[1191]: time="2025-10-29T05:29:02.065744103Z" level=info msg="StartContainer for \"b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9\" returns successfully"
Oct 29 05:29:02.765819 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 29 05:29:02.970710 kubelet[1933]: I1029 05:29:02.970571 1933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mwzjj" podStartSLOduration=4.970524586 podStartE2EDuration="4.970524586s" podCreationTimestamp="2025-10-29 05:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 05:29:02.965266051 +0000 UTC m=+158.873406324" watchObservedRunningTime="2025-10-29 05:29:02.970524586 +0000 UTC m=+158.878664852"
Oct 29 05:29:03.557282 update_engine[1185]: I1029 05:29:03.556037 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 29 05:29:03.557282 update_engine[1185]: I1029 05:29:03.556740 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 29 05:29:03.557282 update_engine[1185]: I1029 05:29:03.557224 1185 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 29 05:29:03.558187 update_engine[1185]: E1029 05:29:03.558137 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 29 05:29:03.558293 update_engine[1185]: I1029 05:29:03.558259 1185 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Oct 29 05:29:04.156974 kubelet[1933]: W1029 05:29:04.156907 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6ad7b7_2ede_4379_a639_e85dad61cd5c.slice/cri-containerd-f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787.scope WatchSource:0}: task f2a21975f8ab46b2fdd5b5bdbee2d89f556f34abd8aaef038ce34068fe981787 not found: not found
Oct 29 05:29:04.227066 systemd[1]: run-containerd-runc-k8s.io-b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9-runc.O7Z7hL.mount: Deactivated successfully.
Oct 29 05:29:06.299577 systemd-networkd[1029]: lxc_health: Link UP
Oct 29 05:29:06.316457 systemd-networkd[1029]: lxc_health: Gained carrier
Oct 29 05:29:06.316994 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Oct 29 05:29:06.536079 systemd[1]: run-containerd-runc-k8s.io-b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9-runc.UnnAXI.mount: Deactivated successfully.
Oct 29 05:29:07.287206 kubelet[1933]: W1029 05:29:07.287111 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6ad7b7_2ede_4379_a639_e85dad61cd5c.slice/cri-containerd-6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471.scope WatchSource:0}: task 6f9e918dae2d4c8aef298d5162f49bff74ce5039ee69bf66f07ab4195f984471 not found: not found
Oct 29 05:29:08.394211 systemd-networkd[1029]: lxc_health: Gained IPv6LL
Oct 29 05:29:08.794366 systemd[1]: run-containerd-runc-k8s.io-b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9-runc.PPf8yU.mount: Deactivated successfully.
Oct 29 05:29:10.397972 kubelet[1933]: W1029 05:29:10.397908 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6ad7b7_2ede_4379_a639_e85dad61cd5c.slice/cri-containerd-b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8.scope WatchSource:0}: task b310f9d3591120393114eb3dacbdf73aaa82beb25867408cef8d6052fc48e9f8 not found: not found
Oct 29 05:29:10.997950 systemd[1]: run-containerd-runc-k8s.io-b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9-runc.70Nptz.mount: Deactivated successfully.
Oct 29 05:29:13.278280 systemd[1]: run-containerd-runc-k8s.io-b1d0a5571a71d80d76a47d219f5d992fc7d09fdbaf16e013c6caa4ac11af6dc9-runc.MJIKcI.mount: Deactivated successfully.
Oct 29 05:29:13.506052 kubelet[1933]: W1029 05:29:13.505940 1933 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6ad7b7_2ede_4379_a639_e85dad61cd5c.slice/cri-containerd-837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085.scope WatchSource:0}: task 837b26415acc9946051fca9ba477f6db6bcf91db5b741001b759c60095dd4085 not found: not found
Oct 29 05:29:13.509668 sshd[3803]: pam_unix(sshd:session): session closed for user core
Oct 29 05:29:13.518952 systemd[1]: sshd@27-10.230.52.194:22-147.75.109.163:34554.service: Deactivated successfully.
Oct 29 05:29:13.520177 systemd[1]: session-25.scope: Deactivated successfully.
Oct 29 05:29:13.521250 systemd-logind[1183]: Session 25 logged out. Waiting for processes to exit.
Oct 29 05:29:13.524645 systemd-logind[1183]: Removed session 25.
Oct 29 05:29:13.555386 update_engine[1185]: I1029 05:29:13.554708 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 29 05:29:13.555948 update_engine[1185]: I1029 05:29:13.555728 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 29 05:29:13.556215 update_engine[1185]: I1029 05:29:13.556185 1185 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 29 05:29:13.556565 update_engine[1185]: E1029 05:29:13.556532 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 29 05:29:13.556667 update_engine[1185]: I1029 05:29:13.556643 1185 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 29 05:29:13.556736 update_engine[1185]: I1029 05:29:13.556668 1185 omaha_request_action.cc:621] Omaha request response:
Oct 29 05:29:13.557229 update_engine[1185]: E1029 05:29:13.557178 1185 omaha_request_action.cc:640] Omaha request network transfer failed.
Oct 29 05:29:13.557306 update_engine[1185]: I1029 05:29:13.557242 1185 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Oct 29 05:29:13.557306 update_engine[1185]: I1029 05:29:13.557264 1185 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 29 05:29:13.557306 update_engine[1185]: I1029 05:29:13.557271 1185 update_attempter.cc:306] Processing Done.
Oct 29 05:29:13.557519 update_engine[1185]: E1029 05:29:13.557307 1185 update_attempter.cc:619] Update failed.
Oct 29 05:29:13.557519 update_engine[1185]: I1029 05:29:13.557322 1185 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Oct 29 05:29:13.557519 update_engine[1185]: I1029 05:29:13.557327 1185 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Oct 29 05:29:13.557519 update_engine[1185]: I1029 05:29:13.557338 1185 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Oct 29 05:29:13.557519 update_engine[1185]: I1029 05:29:13.557461 1185 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 29 05:29:13.557519 update_engine[1185]: I1029 05:29:13.557511 1185 omaha_request_action.cc:270] Posting an Omaha request to disabled
Oct 29 05:29:13.557519 update_engine[1185]: I1029 05:29:13.557520 1185 omaha_request_action.cc:271] Request:
Oct 29 05:29:13.557519 update_engine[1185]:
Oct 29 05:29:13.557519 update_engine[1185]:
Oct 29 05:29:13.557519 update_engine[1185]:
Oct 29 05:29:13.557519 update_engine[1185]:
Oct 29 05:29:13.557519 update_engine[1185]:
Oct 29 05:29:13.557519 update_engine[1185]:
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.557528 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.557710 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.557924 1185 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 29 05:29:13.558539 update_engine[1185]: E1029 05:29:13.558382 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.558483 1185 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.558495 1185 omaha_request_action.cc:621] Omaha request response:
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.558501 1185 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.558507 1185 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.558513 1185 update_attempter.cc:306] Processing Done.
Oct 29 05:29:13.558539 update_engine[1185]: I1029 05:29:13.558527 1185 update_attempter.cc:310] Error event sent.
Oct 29 05:29:13.559713 update_engine[1185]: I1029 05:29:13.558538 1185 update_check_scheduler.cc:74] Next update check in 48m13s
Oct 29 05:29:13.559795 locksmithd[1226]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Oct 29 05:29:13.559795 locksmithd[1226]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Oct 29 05:29:13.866596 systemd[1]: Started sshd@28-10.230.52.194:22-178.128.241.223:46050.service.
Oct 29 05:29:13.967843 sshd[4832]: Invalid user debian from 178.128.241.223 port 46050
Oct 29 05:29:13.986632 sshd[4832]: pam_faillock(sshd:auth): User unknown
Oct 29 05:29:13.987595 sshd[4832]: pam_unix(sshd:auth): check pass; user unknown
Oct 29 05:29:13.987667 sshd[4832]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=178.128.241.223
Oct 29 05:29:13.989618 sshd[4832]: pam_faillock(sshd:auth): User unknown
Oct 29 05:29:15.652501 sshd[4832]: Failed password for invalid user debian from 178.128.241.223 port 46050 ssh2
Oct 29 05:29:15.998185 sshd[4832]: Connection closed by invalid user debian 178.128.241.223 port 46050 [preauth]
Oct 29 05:29:15.999822 systemd[1]: sshd@28-10.230.52.194:22-178.128.241.223:46050.service: Deactivated successfully.