May 10 00:49:27.932683 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025
May 10 00:49:27.932726 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:49:27.932746 kernel: BIOS-provided physical RAM map:
May 10 00:49:27.932756 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 10 00:49:27.932765 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 10 00:49:27.932775 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 10 00:49:27.932786 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
May 10 00:49:27.932796 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
May 10 00:49:27.932805 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 10 00:49:27.932815 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 10 00:49:27.932829 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 10 00:49:27.932839 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 10 00:49:27.932849 kernel: NX (Execute Disable) protection: active
May 10 00:49:27.932859 kernel: SMBIOS 2.8 present.
May 10 00:49:27.937560 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
May 10 00:49:27.937574 kernel: Hypervisor detected: KVM
May 10 00:49:27.937593 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 10 00:49:27.937604 kernel: kvm-clock: cpu 0, msr 74196001, primary cpu clock
May 10 00:49:27.937614 kernel: kvm-clock: using sched offset of 5046820478 cycles
May 10 00:49:27.937626 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 10 00:49:27.937637 kernel: tsc: Detected 2499.998 MHz processor
May 10 00:49:27.937648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 00:49:27.937659 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 00:49:27.937670 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
May 10 00:49:27.937681 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 00:49:27.937696 kernel: Using GB pages for direct mapping
May 10 00:49:27.937707 kernel: ACPI: Early table checksum verification disabled
May 10 00:49:27.937718 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
May 10 00:49:27.937729 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937740 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937751 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937761 kernel: ACPI: FACS 0x000000007FFDFD40 000040
May 10 00:49:27.937772 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937783 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937798 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937809 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:49:27.937819 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
May 10 00:49:27.937830 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
May 10 00:49:27.937841 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
May 10 00:49:27.937852 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
May 10 00:49:27.937891 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
May 10 00:49:27.937909 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
May 10 00:49:27.937920 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
May 10 00:49:27.937932 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 10 00:49:27.937943 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 10 00:49:27.937968 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 10 00:49:27.937980 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
May 10 00:49:27.937991 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 10 00:49:27.938007 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
May 10 00:49:27.938019 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 10 00:49:27.938030 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
May 10 00:49:27.938041 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 10 00:49:27.938053 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
May 10 00:49:27.938064 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 10 00:49:27.938075 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
May 10 00:49:27.938086 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 10 00:49:27.938097 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
May 10 00:49:27.938109 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 10 00:49:27.938124 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
May 10 00:49:27.938136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 10 00:49:27.938147 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 10 00:49:27.938159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
May 10 00:49:27.938170 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
May 10 00:49:27.938182 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
May 10 00:49:27.938193 kernel: Zone ranges:
May 10 00:49:27.938205 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 00:49:27.938216 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
May 10 00:49:27.938232 kernel: Normal empty
May 10 00:49:27.938243 kernel: Movable zone start for each node
May 10 00:49:27.938254 kernel: Early memory node ranges
May 10 00:49:27.938266 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 10 00:49:27.938277 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
May 10 00:49:27.938289 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
May 10 00:49:27.938300 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 00:49:27.938311 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 10 00:49:27.938323 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
May 10 00:49:27.938338 kernel: ACPI: PM-Timer IO Port: 0x608
May 10 00:49:27.938349 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 10 00:49:27.938361 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 10 00:49:27.938372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 10 00:49:27.938384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 10 00:49:27.938395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 00:49:27.938406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 10 00:49:27.938417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 10 00:49:27.938429 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 00:49:27.938444 kernel: TSC deadline timer available
May 10 00:49:27.938455 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
May 10 00:49:27.938466 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 10 00:49:27.938478 kernel: Booting paravirtualized kernel on KVM
May 10 00:49:27.938489 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 00:49:27.938501 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
May 10 00:49:27.938512 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
May 10 00:49:27.938524 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
May 10 00:49:27.938535 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 10 00:49:27.938550 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
May 10 00:49:27.938562 kernel: kvm-guest: PV spinlocks enabled
May 10 00:49:27.938573 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 00:49:27.938585 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
May 10 00:49:27.938596 kernel: Policy zone: DMA32
May 10 00:49:27.938609 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:49:27.938621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:49:27.938633 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 00:49:27.938648 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 10 00:49:27.938659 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:49:27.938671 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 192524K reserved, 0K cma-reserved)
May 10 00:49:27.938683 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 10 00:49:27.938694 kernel: Kernel/User page tables isolation: enabled
May 10 00:49:27.938705 kernel: ftrace: allocating 34584 entries in 136 pages
May 10 00:49:27.938717 kernel: ftrace: allocated 136 pages with 2 groups
May 10 00:49:27.938728 kernel: rcu: Hierarchical RCU implementation.
May 10 00:49:27.938740 kernel: rcu: RCU event tracing is enabled.
May 10 00:49:27.938756 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 10 00:49:27.938768 kernel: Rude variant of Tasks RCU enabled.
May 10 00:49:27.938779 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:49:27.938791 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:49:27.938803 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 10 00:49:27.938814 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
May 10 00:49:27.938825 kernel: random: crng init done
May 10 00:49:27.938850 kernel: Console: colour VGA+ 80x25
May 10 00:49:27.938872 kernel: printk: console [tty0] enabled
May 10 00:49:27.938886 kernel: printk: console [ttyS0] enabled
May 10 00:49:27.938898 kernel: ACPI: Core revision 20210730
May 10 00:49:27.938910 kernel: APIC: Switch to symmetric I/O mode setup
May 10 00:49:27.938927 kernel: x2apic enabled
May 10 00:49:27.938939 kernel: Switched APIC routing to physical x2apic.
May 10 00:49:27.938961 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
May 10 00:49:27.938975 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
May 10 00:49:27.938987 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 10 00:49:27.939003 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 10 00:49:27.939016 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 10 00:49:27.939027 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 00:49:27.939039 kernel: Spectre V2 : Mitigation: Retpolines
May 10 00:49:27.939051 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 00:49:27.939063 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 10 00:49:27.939075 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 10 00:49:27.939087 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 10 00:49:27.939099 kernel: MDS: Mitigation: Clear CPU buffers
May 10 00:49:27.939110 kernel: MMIO Stale Data: Unknown: No mitigations
May 10 00:49:27.939122 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 10 00:49:27.939138 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 00:49:27.939150 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 00:49:27.939162 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 00:49:27.939174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 00:49:27.939186 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 10 00:49:27.939198 kernel: Freeing SMP alternatives memory: 32K
May 10 00:49:27.939209 kernel: pid_max: default: 32768 minimum: 301
May 10 00:49:27.939221 kernel: LSM: Security Framework initializing
May 10 00:49:27.939233 kernel: SELinux: Initializing.
May 10 00:49:27.939245 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 10 00:49:27.939257 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 10 00:49:27.939273 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
May 10 00:49:27.939285 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
May 10 00:49:27.939297 kernel: signal: max sigframe size: 1776
May 10 00:49:27.939309 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:49:27.939321 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 10 00:49:27.939334 kernel: smp: Bringing up secondary CPUs ...
May 10 00:49:27.939346 kernel: x86: Booting SMP configuration:
May 10 00:49:27.939358 kernel: .... node #0, CPUs: #1
May 10 00:49:27.939370 kernel: kvm-clock: cpu 1, msr 74196041, secondary cpu clock
May 10 00:49:27.939386 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
May 10 00:49:27.939398 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
May 10 00:49:27.939410 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:49:27.939422 kernel: smpboot: Max logical packages: 16
May 10 00:49:27.939434 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
May 10 00:49:27.939445 kernel: devtmpfs: initialized
May 10 00:49:27.939457 kernel: x86/mm: Memory block size: 128MB
May 10 00:49:27.939470 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:49:27.939482 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 10 00:49:27.939498 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:49:27.939510 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:49:27.939522 kernel: audit: initializing netlink subsys (disabled)
May 10 00:49:27.939534 kernel: audit: type=2000 audit(1746838166.942:1): state=initialized audit_enabled=0 res=1
May 10 00:49:27.939546 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:49:27.939558 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 00:49:27.939570 kernel: cpuidle: using governor menu
May 10 00:49:27.939582 kernel: ACPI: bus type PCI registered
May 10 00:49:27.939595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:49:27.939610 kernel: dca service started, version 1.12.1
May 10 00:49:27.939623 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 10 00:49:27.939635 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 10 00:49:27.939647 kernel: PCI: Using configuration type 1 for base access
May 10 00:49:27.939659 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 10 00:49:27.939671 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:49:27.939683 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:49:27.939695 kernel: ACPI: Added _OSI(Module Device)
May 10 00:49:27.939707 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:49:27.939723 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:49:27.939735 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:49:27.939747 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 10 00:49:27.939759 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 10 00:49:27.939771 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 10 00:49:27.939783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 00:49:27.939796 kernel: ACPI: Interpreter enabled
May 10 00:49:27.939808 kernel: ACPI: PM: (supports S0 S5)
May 10 00:49:27.939820 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 00:49:27.939836 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 00:49:27.939848 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 10 00:49:27.939860 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 00:49:27.940158 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 00:49:27.940325 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 10 00:49:27.940483 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 10 00:49:27.940501 kernel: PCI host bridge to bus 0000:00
May 10 00:49:27.940653 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 10 00:49:27.940804 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 10 00:49:27.940977 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 10 00:49:27.941121 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 10 00:49:27.941260 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 10 00:49:27.941401 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
May 10 00:49:27.941544 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 00:49:27.941721 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 10 00:49:27.946994 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
May 10 00:49:27.947180 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
May 10 00:49:27.947345 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
May 10 00:49:27.947505 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
May 10 00:49:27.947665 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 10 00:49:27.947835 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.950981 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
May 10 00:49:27.951192 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.951368 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
May 10 00:49:27.951576 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.951750 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
May 10 00:49:27.951958 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.952142 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
May 10 00:49:27.952321 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.952492 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
May 10 00:49:27.952669 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.952850 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
May 10 00:49:27.953068 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.953248 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
May 10 00:49:27.953427 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 10 00:49:27.953611 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
May 10 00:49:27.953791 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 10 00:49:27.954005 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
May 10 00:49:27.954174 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
May 10 00:49:27.954348 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 10 00:49:27.954513 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
May 10 00:49:27.954687 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 10 00:49:27.954853 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 10 00:49:27.955068 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
May 10 00:49:27.955265 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
May 10 00:49:27.955508 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 10 00:49:27.955708 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 10 00:49:27.964024 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 10 00:49:27.964263 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
May 10 00:49:27.964475 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
May 10 00:49:27.964686 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 10 00:49:27.964858 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 10 00:49:27.965087 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
May 10 00:49:27.965276 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
May 10 00:49:27.965447 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
May 10 00:49:27.965613 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
May 10 00:49:27.965780 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 10 00:49:27.965997 kernel: pci_bus 0000:02: extended config space not accessible
May 10 00:49:27.966208 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
May 10 00:49:27.966405 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
May 10 00:49:27.966583 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
May 10 00:49:27.966760 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 10 00:49:27.966975 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
May 10 00:49:27.967154 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
May 10 00:49:27.967325 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
May 10 00:49:27.967495 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
May 10 00:49:27.967668 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 10 00:49:27.967878 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
May 10 00:49:27.968077 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 10 00:49:27.968250 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
May 10 00:49:27.968419 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
May 10 00:49:27.968586 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 10 00:49:27.968756 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
May 10 00:49:27.968964 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
May 10 00:49:27.969132 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 10 00:49:27.969316 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
May 10 00:49:27.969481 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
May 10 00:49:27.969648 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 10 00:49:27.969828 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
May 10 00:49:27.970068 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
May 10 00:49:27.970252 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 10 00:49:27.970434 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
May 10 00:49:27.970604 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
May 10 00:49:27.970772 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 10 00:49:27.978041 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
May 10 00:49:27.978236 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
May 10 00:49:27.978410 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 10 00:49:27.978431 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 10 00:49:27.978446 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 10 00:49:27.978459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 10 00:49:27.978481 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 10 00:49:27.978494 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 10 00:49:27.978507 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 10 00:49:27.978519 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 10 00:49:27.978532 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 10 00:49:27.978545 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 10 00:49:27.978557 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 10 00:49:27.978570 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 10 00:49:27.978582 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 10 00:49:27.978600 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 10 00:49:27.978612 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 10 00:49:27.978625 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 10 00:49:27.978637 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 10 00:49:27.978650 kernel: iommu: Default domain type: Translated
May 10 00:49:27.978662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 00:49:27.978831 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 10 00:49:27.979033 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 10 00:49:27.979208 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 10 00:49:27.979228 kernel: vgaarb: loaded
May 10 00:49:27.979241 kernel: pps_core: LinuxPPS API ver. 1 registered
May 10 00:49:27.979254 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 10 00:49:27.979267 kernel: PTP clock support registered
May 10 00:49:27.979279 kernel: PCI: Using ACPI for IRQ routing
May 10 00:49:27.979292 kernel: PCI: pci_cache_line_size set to 64 bytes
May 10 00:49:27.979304 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 10 00:49:27.979317 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
May 10 00:49:27.979335 kernel: clocksource: Switched to clocksource kvm-clock
May 10 00:49:27.979348 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:49:27.979360 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:49:27.979373 kernel: pnp: PnP ACPI init
May 10 00:49:27.979584 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 10 00:49:27.979607 kernel: pnp: PnP ACPI: found 5 devices
May 10 00:49:27.979620 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 00:49:27.979632 kernel: NET: Registered PF_INET protocol family
May 10 00:49:27.979651 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 00:49:27.979664 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 10 00:49:27.979677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:49:27.979690 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 00:49:27.979702 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 10 00:49:27.979715 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 10 00:49:27.979727 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 10 00:49:27.979740 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 10 00:49:27.979752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:49:27.979769 kernel: NET: Registered PF_XDP protocol family
May 10 00:49:27.979973 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
May 10 00:49:27.980147 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 10 00:49:27.980318 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 10 00:49:27.980489 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 10 00:49:27.980658 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 10 00:49:27.980833 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 10 00:49:27.981034 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 10 00:49:27.981203 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 10 00:49:27.981370 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 10 00:49:27.981536 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 10 00:49:27.981702 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 10 00:49:27.981880 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 10 00:49:27.982072 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 10 00:49:27.982239 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 10 00:49:27.982408 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 10 00:49:27.982576 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 10 00:49:27.982754 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
May 10 00:49:27.982955 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 10 00:49:27.983124 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
May 10 00:49:27.983308 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 10 00:49:27.983493 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
May 10 00:49:27.983662 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 10 00:49:27.983836 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
May 10 00:49:27.984061 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 10 00:49:27.984232 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
May 10 00:49:27.984400 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 10 00:49:27.984565 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
May 10 00:49:27.984734 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 10 00:49:27.993601 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
May 10 00:49:27.993809 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 10 00:49:27.994013 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
May 10 00:49:27.994184 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 10 00:49:27.994351 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
May 10 00:49:27.994516 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 10 00:49:27.994683 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
May 10 00:49:27.994857 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 10 00:49:27.995057 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
May 10 00:49:27.995232 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 10 00:49:27.995400 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
May 10 00:49:27.995565 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 10 00:49:27.995732 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
May 10 00:49:27.995920 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 10 00:49:27.996106 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
May 10 00:49:27.996282 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 10 00:49:27.996448 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
May 10 00:49:27.996615 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 10 00:49:27.996783 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
May 10 00:49:27.996979 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 10 00:49:27.997147 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
May 10 00:49:27.997323 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 10 00:49:27.997483 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 10 00:49:27.997637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 10 00:49:27.997791 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 10 00:49:27.997986 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 10 00:49:27.998140 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 10 00:49:27.998295 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
May 10 00:49:27.998471 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 10 00:49:27.998643 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
May 10 00:49:27.998804 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 10 00:49:27.999008 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
May 10 00:49:27.999198 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
May 10 00:49:27.999366 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
May 10 00:49:27.999526 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 10 00:49:27.999707 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
May 10 00:49:28.003786 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
May 10 00:49:28.004004 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 10 00:49:28.004181 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
May 10 00:49:28.004342 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
May 10 00:49:28.004500 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 10 00:49:28.004697 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
May 10 00:49:28.004880 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
May 10 00:49:28.005059 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 10 00:49:28.005229 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
May 10 00:49:28.005388 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
May 10 00:49:28.005546 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 10 00:49:28.005715 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
May 10 00:49:28.005895 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
May 10 00:49:28.006080 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 10 00:49:28.006270 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
May 10 00:49:28.006432 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
May 10 00:49:28.006591 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 10 00:49:28.006611 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 10 00:49:28.006625 kernel: PCI: CLS 0 bytes,
default 64 May 10 00:49:28.006638 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 00:49:28.006652 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) May 10 00:49:28.006672 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 10 00:49:28.006686 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 10 00:49:28.006699 kernel: Initialise system trusted keyrings May 10 00:49:28.006712 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 10 00:49:28.006725 kernel: Key type asymmetric registered May 10 00:49:28.006738 kernel: Asymmetric key parser 'x509' registered May 10 00:49:28.006751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 10 00:49:28.006764 kernel: io scheduler mq-deadline registered May 10 00:49:28.006782 kernel: io scheduler kyber registered May 10 00:49:28.006795 kernel: io scheduler bfq registered May 10 00:49:28.006999 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 10 00:49:28.007170 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 10 00:49:28.007336 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.007506 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 10 00:49:28.007674 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 10 00:49:28.007842 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.008064 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 10 00:49:28.008242 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 10 00:49:28.008411 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ May 10 00:49:28.008578 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 10 00:49:28.008748 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 10 00:49:28.015136 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.015344 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 10 00:49:28.015520 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 10 00:49:28.015693 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.015886 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 10 00:49:28.016079 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 10 00:49:28.016250 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.016430 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 10 00:49:28.016600 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 10 00:49:28.016767 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.016967 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 10 00:49:28.017139 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 10 00:49:28.017306 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 00:49:28.017334 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 00:49:28.017350 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 10 00:49:28.017363 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 10 00:49:28.017377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:49:28.017391 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 00:49:28.017404 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 00:49:28.017417 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 00:49:28.017435 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 00:49:28.017618 kernel: rtc_cmos 00:03: RTC can wake from S4 May 10 00:49:28.017640 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 00:49:28.017794 kernel: rtc_cmos 00:03: registered as rtc0 May 10 00:49:28.017993 kernel: rtc_cmos 00:03: setting system clock to 2025-05-10T00:49:27 UTC (1746838167) May 10 00:49:28.018157 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 10 00:49:28.018177 kernel: intel_pstate: CPU model not supported May 10 00:49:28.018190 kernel: NET: Registered PF_INET6 protocol family May 10 00:49:28.018210 kernel: Segment Routing with IPv6 May 10 00:49:28.018224 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:49:28.018237 kernel: NET: Registered PF_PACKET protocol family May 10 00:49:28.018255 kernel: Key type dns_resolver registered May 10 00:49:28.018268 kernel: IPI shorthand broadcast: enabled May 10 00:49:28.018282 kernel: sched_clock: Marking stable (983444355, 223269380)->(1487041891, -280328156) May 10 00:49:28.018295 kernel: registered taskstats version 1 May 10 00:49:28.018308 kernel: Loading compiled-in X.509 certificates May 10 00:49:28.018321 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 00:49:28.018338 kernel: Key type .fscrypt registered May 10 00:49:28.018351 kernel: Key type fscrypt-provisioning registered May 10 00:49:28.018364 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 10 00:49:28.018378 kernel: ima: Allocated hash algorithm: sha1 May 10 00:49:28.018391 kernel: ima: No architecture policies found May 10 00:49:28.018404 kernel: clk: Disabling unused clocks May 10 00:49:28.018417 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 00:49:28.018430 kernel: Write protecting the kernel read-only data: 28672k May 10 00:49:28.018443 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 00:49:28.018461 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 00:49:28.018475 kernel: Run /init as init process May 10 00:49:28.018488 kernel: with arguments: May 10 00:49:28.018500 kernel: /init May 10 00:49:28.018513 kernel: with environment: May 10 00:49:28.018526 kernel: HOME=/ May 10 00:49:28.018538 kernel: TERM=linux May 10 00:49:28.018551 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:49:28.018575 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:49:28.018600 systemd[1]: Detected virtualization kvm. May 10 00:49:28.018614 systemd[1]: Detected architecture x86-64. May 10 00:49:28.018628 systemd[1]: Running in initrd. May 10 00:49:28.018641 systemd[1]: No hostname configured, using default hostname. May 10 00:49:28.018655 systemd[1]: Hostname set to <localhost>. May 10 00:49:28.018669 systemd[1]: Initializing machine ID from VM UUID. May 10 00:49:28.018683 systemd[1]: Queued start job for default target initrd.target. May 10 00:49:28.018701 systemd[1]: Started systemd-ask-password-console.path. May 10 00:49:28.018720 systemd[1]: Reached target cryptsetup.target. May 10 00:49:28.018734 systemd[1]: Reached target paths.target. May 10 00:49:28.018748 systemd[1]: Reached target slices.target. 
May 10 00:49:28.018761 systemd[1]: Reached target swap.target. May 10 00:49:28.018775 systemd[1]: Reached target timers.target. May 10 00:49:28.018790 systemd[1]: Listening on iscsid.socket. May 10 00:49:28.018804 systemd[1]: Listening on iscsiuio.socket. May 10 00:49:28.018823 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:49:28.018837 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:49:28.018851 systemd[1]: Listening on systemd-journald.socket. May 10 00:49:28.018885 systemd[1]: Listening on systemd-networkd.socket. May 10 00:49:28.018901 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:49:28.018915 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:49:28.018929 systemd[1]: Reached target sockets.target. May 10 00:49:28.018962 systemd[1]: Starting kmod-static-nodes.service... May 10 00:49:28.018980 systemd[1]: Finished network-cleanup.service. May 10 00:49:28.019001 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:49:28.019015 systemd[1]: Starting systemd-journald.service... May 10 00:49:28.019029 systemd[1]: Starting systemd-modules-load.service... May 10 00:49:28.019043 systemd[1]: Starting systemd-resolved.service... May 10 00:49:28.019057 systemd[1]: Starting systemd-vconsole-setup.service... May 10 00:49:28.019071 systemd[1]: Finished kmod-static-nodes.service. May 10 00:49:28.019085 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 00:49:28.019111 systemd-journald[202]: Journal started May 10 00:49:28.019193 systemd-journald[202]: Runtime Journal (/run/log/journal/5f3908e3cfef452baaf9d22112409d5a) is 4.7M, max 38.1M, 33.3M free. May 10 00:49:27.934223 systemd-modules-load[203]: Inserted module 'overlay' May 10 00:49:28.037972 kernel: Bridge firewalling registered May 10 00:49:28.038010 systemd[1]: Started systemd-resolved.service. 
May 10 00:49:28.038034 kernel: audit: type=1130 audit(1746838168.029:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:27.981025 systemd-resolved[204]: Positive Trust Anchors: May 10 00:49:27.981052 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:49:27.981097 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:49:27.993106 systemd-resolved[204]: Defaulting to hostname 'linux'. May 10 00:49:28.051164 kernel: audit: type=1130 audit(1746838168.043:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.051197 systemd[1]: Started systemd-journald.service. May 10 00:49:28.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:28.022335 systemd-modules-load[203]: Inserted module 'br_netfilter' May 10 00:49:28.053738 kernel: SCSI subsystem initialized May 10 00:49:28.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.054507 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:49:28.060177 kernel: audit: type=1130 audit(1746838168.053:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.060510 systemd[1]: Finished systemd-vconsole-setup.service. May 10 00:49:28.066460 kernel: audit: type=1130 audit(1746838168.059:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.066762 systemd[1]: Reached target nss-lookup.target. May 10 00:49:28.069248 systemd[1]: Starting dracut-cmdline-ask.service... May 10 00:49:28.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.071584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:49:28.089707 kernel: audit: type=1130 audit(1746838168.065:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:28.089742 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:49:28.089761 kernel: device-mapper: uevent: version 1.0.3 May 10 00:49:28.089779 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 00:49:28.086896 systemd-modules-load[203]: Inserted module 'dm_multipath' May 10 00:49:28.095747 kernel: audit: type=1130 audit(1746838168.089:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.089339 systemd[1]: Finished systemd-modules-load.service. May 10 00:49:28.090584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:49:28.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.100518 systemd[1]: Starting systemd-sysctl.service... May 10 00:49:28.112719 kernel: audit: type=1130 audit(1746838168.098:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.114084 systemd[1]: Finished dracut-cmdline-ask.service. May 10 00:49:28.120173 kernel: audit: type=1130 audit(1746838168.113:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:28.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.115136 systemd[1]: Finished systemd-sysctl.service. May 10 00:49:28.139332 kernel: audit: type=1130 audit(1746838168.119:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.121999 systemd[1]: Starting dracut-cmdline.service... May 10 00:49:28.140544 dracut-cmdline[226]: dracut-dracut-053 May 10 00:49:28.140544 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 10 00:49:28.140544 dracut-cmdline[226]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:49:28.221904 kernel: Loading iSCSI transport class v2.0-870. May 10 00:49:28.243898 kernel: iscsi: registered transport (tcp) May 10 00:49:28.272343 kernel: iscsi: registered transport (qla4xxx) May 10 00:49:28.272429 kernel: QLogic iSCSI HBA Driver May 10 00:49:28.322159 systemd[1]: Finished dracut-cmdline.service. May 10 00:49:28.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:28.324128 systemd[1]: Starting dracut-pre-udev.service... May 10 00:49:28.382931 kernel: raid6: sse2x4 gen() 13870 MB/s May 10 00:49:28.400988 kernel: raid6: sse2x4 xor() 7867 MB/s May 10 00:49:28.418953 kernel: raid6: sse2x2 gen() 9524 MB/s May 10 00:49:28.436921 kernel: raid6: sse2x2 xor() 7915 MB/s May 10 00:49:28.454928 kernel: raid6: sse2x1 gen() 9779 MB/s May 10 00:49:28.473645 kernel: raid6: sse2x1 xor() 7184 MB/s May 10 00:49:28.473743 kernel: raid6: using algorithm sse2x4 gen() 13870 MB/s May 10 00:49:28.473764 kernel: raid6: .... xor() 7867 MB/s, rmw enabled May 10 00:49:28.474977 kernel: raid6: using ssse3x2 recovery algorithm May 10 00:49:28.491908 kernel: xor: automatically using best checksumming function avx May 10 00:49:28.607914 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 00:49:28.621220 systemd[1]: Finished dracut-pre-udev.service. May 10 00:49:28.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.621000 audit: BPF prog-id=7 op=LOAD May 10 00:49:28.621000 audit: BPF prog-id=8 op=LOAD May 10 00:49:28.623168 systemd[1]: Starting systemd-udevd.service... May 10 00:49:28.641287 systemd-udevd[403]: Using default interface naming scheme 'v252'. May 10 00:49:28.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.651369 systemd[1]: Started systemd-udevd.service. May 10 00:49:28.659218 systemd[1]: Starting dracut-pre-trigger.service... 
May 10 00:49:28.677511 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation May 10 00:49:28.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.721249 systemd[1]: Finished dracut-pre-trigger.service. May 10 00:49:28.723007 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:49:28.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:28.817186 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:49:28.909891 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 10 00:49:28.978628 kernel: ACPI: bus type USB registered May 10 00:49:28.978660 kernel: usbcore: registered new interface driver usbfs May 10 00:49:28.978688 kernel: usbcore: registered new interface driver hub May 10 00:49:28.978706 kernel: usbcore: registered new device driver usb May 10 00:49:28.978723 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:49:28.978740 kernel: GPT:17805311 != 125829119 May 10 00:49:28.978756 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:49:28.978772 kernel: GPT:17805311 != 125829119 May 10 00:49:28.978788 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 00:49:28.978804 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:49:28.978820 kernel: cryptd: max_cpu_qlen set to 1000 May 10 00:49:28.978841 kernel: AVX version of gcm_enc/dec engaged. 
May 10 00:49:28.978858 kernel: AES CTR mode by8 optimization enabled May 10 00:49:28.991035 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 00:49:29.025543 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 May 10 00:49:29.025755 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 10 00:49:29.026000 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 00:49:29.026189 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 May 10 00:49:29.026375 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed May 10 00:49:29.026567 kernel: hub 1-0:1.0: USB hub found May 10 00:49:29.026791 kernel: hub 1-0:1.0: 4 ports detected May 10 00:49:29.027028 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 10 00:49:29.027244 kernel: hub 2-0:1.0: USB hub found May 10 00:49:29.027457 kernel: hub 2-0:1.0: 4 ports detected May 10 00:49:29.024383 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 00:49:29.124607 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) May 10 00:49:29.124641 kernel: libata version 3.00 loaded. 
May 10 00:49:29.124660 kernel: ahci 0000:00:1f.2: version 3.0 May 10 00:49:29.124909 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 10 00:49:29.124942 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 10 00:49:29.125118 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 10 00:49:29.125288 kernel: scsi host0: ahci May 10 00:49:29.125512 kernel: scsi host1: ahci May 10 00:49:29.125714 kernel: scsi host2: ahci May 10 00:49:29.125944 kernel: scsi host3: ahci May 10 00:49:29.126153 kernel: scsi host4: ahci May 10 00:49:29.126338 kernel: scsi host5: ahci May 10 00:49:29.126529 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 May 10 00:49:29.126555 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 May 10 00:49:29.126573 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 May 10 00:49:29.126589 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 May 10 00:49:29.126606 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 May 10 00:49:29.126623 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 May 10 00:49:29.123660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 00:49:29.130257 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 00:49:29.139319 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 00:49:29.152426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:49:29.154358 systemd[1]: Starting disk-uuid.service... May 10 00:49:29.161800 disk-uuid[529]: Primary Header is updated. May 10 00:49:29.161800 disk-uuid[529]: Secondary Entries is updated. May 10 00:49:29.161800 disk-uuid[529]: Secondary Header is updated. 
May 10 00:49:29.166891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:49:29.174027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:49:29.264053 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 10 00:49:29.389423 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 10 00:49:29.389512 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 10 00:49:29.389942 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 10 00:49:29.393359 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 10 00:49:29.395077 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 10 00:49:29.396742 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 10 00:49:29.420896 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 00:49:29.428880 kernel: usbcore: registered new interface driver usbhid May 10 00:49:29.428947 kernel: usbhid: USB HID core driver May 10 00:49:29.438387 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 May 10 00:49:29.438443 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 May 10 00:49:30.179758 disk-uuid[530]: The operation has completed successfully. May 10 00:49:30.180803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:49:30.223728 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:49:30.223899 systemd[1]: Finished disk-uuid.service. May 10 00:49:30.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.230398 systemd[1]: Starting verity-setup.service... 
May 10 00:49:30.250884 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" May 10 00:49:30.305381 systemd[1]: Found device dev-mapper-usr.device. May 10 00:49:30.307502 systemd[1]: Mounting sysusr-usr.mount... May 10 00:49:30.308554 systemd[1]: Finished verity-setup.service. May 10 00:49:30.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.403926 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:49:30.404810 systemd[1]: Mounted sysusr-usr.mount. May 10 00:49:30.405648 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 00:49:30.406611 systemd[1]: Starting ignition-setup.service... May 10 00:49:30.409370 systemd[1]: Starting parse-ip-for-networkd.service... May 10 00:49:30.426117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:49:30.426181 kernel: BTRFS info (device vda6): using free space tree May 10 00:49:30.426201 kernel: BTRFS info (device vda6): has skinny extents May 10 00:49:30.441468 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:49:30.451611 systemd[1]: Finished ignition-setup.service. May 10 00:49:30.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.453539 systemd[1]: Starting ignition-fetch-offline.service... May 10 00:49:30.562851 systemd[1]: Finished parse-ip-for-networkd.service. May 10 00:49:30.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:30.564000 audit: BPF prog-id=9 op=LOAD May 10 00:49:30.565765 systemd[1]: Starting systemd-networkd.service... May 10 00:49:30.602680 systemd-networkd[710]: lo: Link UP May 10 00:49:30.603742 systemd-networkd[710]: lo: Gained carrier May 10 00:49:30.605887 systemd-networkd[710]: Enumeration completed May 10 00:49:30.606768 systemd[1]: Started systemd-networkd.service. May 10 00:49:30.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.607575 systemd[1]: Reached target network.target. May 10 00:49:30.607776 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:49:30.610481 systemd[1]: Starting iscsiuio.service... May 10 00:49:30.628173 systemd-networkd[710]: eth0: Link UP May 10 00:49:30.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.628181 systemd-networkd[710]: eth0: Gained carrier May 10 00:49:30.629844 systemd[1]: Started iscsiuio.service. May 10 00:49:30.632952 systemd[1]: Starting iscsid.service... May 10 00:49:30.640519 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 10 00:49:30.640519 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 10 00:49:30.640519 iscsid[716]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. 
May 10 00:49:30.640519 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 10 00:49:30.640519 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. May 10 00:49:30.640519 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 10 00:49:30.640519 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 10 00:49:30.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.643047 systemd[1]: Started iscsid.service. May 10 00:49:30.645702 systemd[1]: Starting dracut-initqueue.service... May 10 00:49:30.664333 ignition[624]: Ignition 2.14.0 May 10 00:49:30.664362 ignition[624]: Stage: fetch-offline May 10 00:49:30.664475 ignition[624]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:49:30.664514 ignition[624]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:49:30.667329 systemd[1]: Finished ignition-fetch-offline.service. May 10 00:49:30.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.665791 ignition[624]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:49:30.669202 systemd-networkd[710]: eth0: DHCPv4 address 10.244.24.230/30, gateway 10.244.24.229 acquired from 10.244.24.229 May 10 00:49:30.665956 ignition[624]: parsed url from cmdline: "" May 10 00:49:30.669683 systemd[1]: Starting ignition-fetch.service... 
May 10 00:49:30.665963 ignition[624]: no config URL provided May 10 00:49:30.665973 ignition[624]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:49:30.665988 ignition[624]: no config at "/usr/lib/ignition/user.ign" May 10 00:49:30.665999 ignition[624]: failed to fetch config: resource requires networking May 10 00:49:30.666173 ignition[624]: Ignition finished successfully May 10 00:49:30.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.686628 systemd[1]: Finished dracut-initqueue.service. May 10 00:49:30.690430 ignition[722]: Ignition 2.14.0 May 10 00:49:30.687441 systemd[1]: Reached target remote-fs-pre.target. May 10 00:49:30.690442 ignition[722]: Stage: fetch May 10 00:49:30.688091 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:49:30.690630 ignition[722]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:49:30.688708 systemd[1]: Reached target remote-fs.target. May 10 00:49:30.690666 ignition[722]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:49:30.692480 systemd[1]: Starting dracut-pre-mount.service... May 10 00:49:30.692097 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:49:30.692267 ignition[722]: parsed url from cmdline: "" May 10 00:49:30.692275 ignition[722]: no config URL provided May 10 00:49:30.692285 ignition[722]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:49:30.692301 ignition[722]: no config at "/usr/lib/ignition/user.ign" May 10 00:49:30.699691 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 10 00:49:30.699741 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
May 10 00:49:30.702485 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 10 00:49:30.708316 systemd[1]: Finished dracut-pre-mount.service. May 10 00:49:30.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.725159 ignition[722]: GET result: OK May 10 00:49:30.726042 ignition[722]: parsing config with SHA512: ec746e7e99ed9af46edc90c8bd106549f05f94a1f811293bb7d8bddf2b0cea4a36033b7fb0726fb201de9d64a318a039f363734313ad6df5d45e6db37f52d08e May 10 00:49:30.735227 unknown[722]: fetched base config from "system" May 10 00:49:30.735248 unknown[722]: fetched base config from "system" May 10 00:49:30.735792 ignition[722]: fetch: fetch complete May 10 00:49:30.735258 unknown[722]: fetched user config from "openstack" May 10 00:49:30.735805 ignition[722]: fetch: fetch passed May 10 00:49:30.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.737393 systemd[1]: Finished ignition-fetch.service. May 10 00:49:30.735876 ignition[722]: Ignition finished successfully May 10 00:49:30.739308 systemd[1]: Starting ignition-kargs.service... 
May 10 00:49:30.753418 ignition[736]: Ignition 2.14.0 May 10 00:49:30.753438 ignition[736]: Stage: kargs May 10 00:49:30.753608 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:49:30.753643 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:49:30.754936 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:49:30.756507 ignition[736]: kargs: kargs passed May 10 00:49:30.757664 systemd[1]: Finished ignition-kargs.service. May 10 00:49:30.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.756573 ignition[736]: Ignition finished successfully May 10 00:49:30.759604 systemd[1]: Starting ignition-disks.service... May 10 00:49:30.770653 ignition[741]: Ignition 2.14.0 May 10 00:49:30.770676 ignition[741]: Stage: disks May 10 00:49:30.770838 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:49:30.770892 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:49:30.772182 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:49:30.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.774966 systemd[1]: Finished ignition-disks.service. May 10 00:49:30.773727 ignition[741]: disks: disks passed May 10 00:49:30.775750 systemd[1]: Reached target initrd-root-device.target. 
May 10 00:49:30.773792 ignition[741]: Ignition finished successfully May 10 00:49:30.776446 systemd[1]: Reached target local-fs-pre.target. May 10 00:49:30.777727 systemd[1]: Reached target local-fs.target. May 10 00:49:30.778982 systemd[1]: Reached target sysinit.target. May 10 00:49:30.780226 systemd[1]: Reached target basic.target. May 10 00:49:30.782655 systemd[1]: Starting systemd-fsck-root.service... May 10 00:49:30.802671 systemd-fsck[748]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks May 10 00:49:30.808188 systemd[1]: Finished systemd-fsck-root.service. May 10 00:49:30.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.809970 systemd[1]: Mounting sysroot.mount... May 10 00:49:30.821938 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:49:30.822099 systemd[1]: Mounted sysroot.mount. May 10 00:49:30.822836 systemd[1]: Reached target initrd-root-fs.target. May 10 00:49:30.825570 systemd[1]: Mounting sysroot-usr.mount... May 10 00:49:30.826771 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 10 00:49:30.827674 systemd[1]: Starting flatcar-openstack-hostname.service... May 10 00:49:30.831435 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:49:30.831517 systemd[1]: Reached target ignition-diskful.target. May 10 00:49:30.836947 systemd[1]: Mounted sysroot-usr.mount. May 10 00:49:30.840336 systemd[1]: Starting initrd-setup-root.service... 
May 10 00:49:30.853932 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:49:30.863978 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory May 10 00:49:30.873639 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:49:30.882378 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:49:30.946587 systemd[1]: Finished initrd-setup-root.service. May 10 00:49:30.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.948519 systemd[1]: Starting ignition-mount.service... May 10 00:49:30.954634 systemd[1]: Starting sysroot-boot.service... May 10 00:49:30.961717 bash[802]: umount: /sysroot/usr/share/oem: not mounted. May 10 00:49:30.974396 ignition[803]: INFO : Ignition 2.14.0 May 10 00:49:30.974396 ignition[803]: INFO : Stage: mount May 10 00:49:30.976076 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:49:30.976076 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:49:30.978352 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:49:30.985320 ignition[803]: INFO : mount: mount passed May 10 00:49:30.986071 ignition[803]: INFO : Ignition finished successfully May 10 00:49:30.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.987157 systemd[1]: Finished ignition-mount.service. 
May 10 00:49:30.988966 coreos-metadata[754]: May 10 00:49:30.988 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 00:49:30.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:30.999495 systemd[1]: Finished sysroot-boot.service. May 10 00:49:31.006187 coreos-metadata[754]: May 10 00:49:31.006 INFO Fetch successful May 10 00:49:31.007914 coreos-metadata[754]: May 10 00:49:31.007 INFO wrote hostname srv-3yk6k.gb1.brightbox.com to /sysroot/etc/hostname May 10 00:49:31.010225 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 10 00:49:31.010363 systemd[1]: Finished flatcar-openstack-hostname.service. May 10 00:49:31.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:31.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:31.328985 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:49:31.339933 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (811) May 10 00:49:31.344525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:49:31.344566 kernel: BTRFS info (device vda6): using free space tree May 10 00:49:31.344597 kernel: BTRFS info (device vda6): has skinny extents May 10 00:49:31.351825 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 10 00:49:31.353639 systemd[1]: Starting ignition-files.service... 
May 10 00:49:31.374824 ignition[831]: INFO : Ignition 2.14.0 May 10 00:49:31.374824 ignition[831]: INFO : Stage: files May 10 00:49:31.376552 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 00:49:31.376552 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 10 00:49:31.376552 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 10 00:49:31.380450 ignition[831]: DEBUG : files: compiled without relabeling support, skipping May 10 00:49:31.380450 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:49:31.380450 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:49:31.384759 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:49:31.385971 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:49:31.387196 unknown[831]: wrote ssh authorized keys file for user: core May 10 00:49:31.388350 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:49:31.388350 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:49:31.388350 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 00:49:31.597312 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 00:49:31.812703 systemd-networkd[710]: eth0: Gained IPv6LL May 10 00:49:31.961559 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:49:31.963631 
ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:49:31.964995 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 00:49:32.666818 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:49:32.777059 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:639:24:19ff:fef4:18e6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:639:24:19ff:fef4:18e6/64 assigned by NDisc. May 10 00:49:32.777072 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. May 10 00:49:33.016050 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:49:33.017330 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:49:33.026163 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:49:33.026163 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:49:33.026163 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:49:33.026163 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:49:33.026163 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:49:33.026163 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 10 00:49:33.578269 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 10 00:49:35.631144 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 10 00:49:35.631144 ignition[831]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" May 10 00:49:35.631144 ignition[831]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(d): [started] processing unit 
"prepare-helm.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:49:35.634956 ignition[831]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:49:35.634956 ignition[831]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 10 00:49:35.643511 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:49:35.643511 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:49:35.643511 ignition[831]: INFO : files: files passed May 10 00:49:35.643511 ignition[831]: INFO : Ignition finished successfully May 10 00:49:35.659843 kernel: kauditd_printk_skb: 28 callbacks suppressed May 10 00:49:35.659903 kernel: audit: type=1130 audit(1746838175.644:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:35.642758 systemd[1]: Finished ignition-files.service. May 10 00:49:35.646967 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 10 00:49:35.658254 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 10 00:49:35.666739 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:49:35.673621 kernel: audit: type=1130 audit(1746838175.666:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.660630 systemd[1]: Starting ignition-quench.service... May 10 00:49:35.684727 kernel: audit: type=1130 audit(1746838175.673:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.684787 kernel: audit: type=1131 audit(1746838175.673:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:35.662202 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 00:49:35.667893 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:49:35.668030 systemd[1]: Finished ignition-quench.service. May 10 00:49:35.674579 systemd[1]: Reached target ignition-complete.target. May 10 00:49:35.686795 systemd[1]: Starting initrd-parse-etc.service... May 10 00:49:35.713809 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:49:35.714007 systemd[1]: Finished initrd-parse-etc.service. May 10 00:49:35.725729 kernel: audit: type=1130 audit(1746838175.714:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.725773 kernel: audit: type=1131 audit(1746838175.714:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.715833 systemd[1]: Reached target initrd-fs.target. May 10 00:49:35.726369 systemd[1]: Reached target initrd.target. May 10 00:49:35.727704 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 00:49:35.729171 systemd[1]: Starting dracut-pre-pivot.service... May 10 00:49:35.747981 systemd[1]: Finished dracut-pre-pivot.service. 
May 10 00:49:35.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.768924 kernel: audit: type=1130 audit(1746838175.762:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.764521 systemd[1]: Starting initrd-cleanup.service... May 10 00:49:35.779847 systemd[1]: Stopped target nss-lookup.target. May 10 00:49:35.781621 systemd[1]: Stopped target remote-cryptsetup.target. May 10 00:49:35.783254 systemd[1]: Stopped target timers.target. May 10 00:49:35.785296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:49:35.786370 systemd[1]: Stopped dracut-pre-pivot.service. May 10 00:49:35.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.789125 systemd[1]: Stopped target initrd.target. May 10 00:49:35.794016 kernel: audit: type=1131 audit(1746838175.787:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.794674 systemd[1]: Stopped target basic.target. May 10 00:49:35.796189 systemd[1]: Stopped target ignition-complete.target. May 10 00:49:35.800986 systemd[1]: Stopped target ignition-diskful.target. May 10 00:49:35.801984 systemd[1]: Stopped target initrd-root-device.target. May 10 00:49:35.803281 systemd[1]: Stopped target remote-fs.target. May 10 00:49:35.804995 systemd[1]: Stopped target remote-fs-pre.target. May 10 00:49:35.806296 systemd[1]: Stopped target sysinit.target. May 10 00:49:35.807777 systemd[1]: Stopped target local-fs.target. 
May 10 00:49:35.809108 systemd[1]: Stopped target local-fs-pre.target. May 10 00:49:35.810317 systemd[1]: Stopped target swap.target. May 10 00:49:35.819951 kernel: audit: type=1131 audit(1746838175.811:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.812238 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:49:35.812530 systemd[1]: Stopped dracut-pre-mount.service. May 10 00:49:35.818896 systemd[1]: Stopped target cryptsetup.target. May 10 00:49:35.819737 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:49:35.823449 systemd[1]: Stopped dracut-initqueue.service. May 10 00:49:35.829782 kernel: audit: type=1131 audit(1746838175.823:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.824534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:49:35.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:35.824761 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
May 10 00:49:35.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.830906 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 00:49:35.831156 systemd[1]: Stopped ignition-files.service.
May 10 00:49:35.833657 systemd[1]: Stopping ignition-mount.service...
May 10 00:49:35.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.841745 systemd[1]: Stopping iscsiuio.service...
May 10 00:49:35.847041 systemd[1]: Stopping sysroot-boot.service...
May 10 00:49:35.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.848062 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 00:49:35.848330 systemd[1]: Stopped systemd-udev-trigger.service.
May 10 00:49:35.849211 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 00:49:35.849368 systemd[1]: Stopped dracut-pre-trigger.service.
May 10 00:49:35.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.852417 systemd[1]: iscsiuio.service: Deactivated successfully.
May 10 00:49:35.852586 systemd[1]: Stopped iscsiuio.service.
May 10 00:49:35.857486 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 00:49:35.857636 systemd[1]: Finished initrd-cleanup.service.
May 10 00:49:35.870905 ignition[869]: INFO : Ignition 2.14.0
May 10 00:49:35.870905 ignition[869]: INFO : Stage: umount
May 10 00:49:35.870905 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 00:49:35.870905 ignition[869]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 00:49:35.870905 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 00:49:35.876849 ignition[869]: INFO : umount: umount passed
May 10 00:49:35.876849 ignition[869]: INFO : Ignition finished successfully
May 10 00:49:35.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.874308 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 00:49:35.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.875833 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 00:49:35.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.876005 systemd[1]: Stopped ignition-mount.service.
May 10 00:49:35.878486 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 00:49:35.878579 systemd[1]: Stopped ignition-disks.service.
May 10 00:49:35.879241 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 00:49:35.879300 systemd[1]: Stopped ignition-kargs.service.
May 10 00:49:35.879964 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 10 00:49:35.880024 systemd[1]: Stopped ignition-fetch.service.
May 10 00:49:35.882317 systemd[1]: Stopped target network.target.
May 10 00:49:35.885298 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 00:49:35.885376 systemd[1]: Stopped ignition-fetch-offline.service.
May 10 00:49:35.886120 systemd[1]: Stopped target paths.target.
May 10 00:49:35.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.887342 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 00:49:35.890933 systemd[1]: Stopped systemd-ask-password-console.path.
May 10 00:49:35.891853 systemd[1]: Stopped target slices.target.
May 10 00:49:35.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.893260 systemd[1]: Stopped target sockets.target.
May 10 00:49:35.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.894537 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 00:49:35.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.894584 systemd[1]: Closed iscsid.socket.
May 10 00:49:35.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.895716 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 00:49:35.895773 systemd[1]: Closed iscsiuio.socket.
May 10 00:49:35.897125 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 00:49:35.912000 audit: BPF prog-id=6 op=UNLOAD
May 10 00:49:35.897193 systemd[1]: Stopped ignition-setup.service.
May 10 00:49:35.898568 systemd[1]: Stopping systemd-networkd.service...
May 10 00:49:35.900446 systemd[1]: Stopping systemd-resolved.service...
May 10 00:49:35.902322 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 00:49:35.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.902453 systemd[1]: Stopped sysroot-boot.service.
May 10 00:49:35.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.903839 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 00:49:35.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.903936 systemd[1]: Stopped initrd-setup-root.service.
May 10 00:49:35.903972 systemd-networkd[710]: eth0: DHCPv6 lease lost
May 10 00:49:35.920000 audit: BPF prog-id=9 op=UNLOAD
May 10 00:49:35.906154 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 00:49:35.906300 systemd[1]: Stopped systemd-networkd.service.
May 10 00:49:35.909238 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 00:49:35.909378 systemd[1]: Stopped systemd-resolved.service.
May 10 00:49:35.910982 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 00:49:35.911049 systemd[1]: Closed systemd-networkd.socket.
May 10 00:49:35.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.913090 systemd[1]: Stopping network-cleanup.service...
May 10 00:49:35.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.915760 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 00:49:35.915848 systemd[1]: Stopped parse-ip-for-networkd.service.
May 10 00:49:35.917323 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:49:35.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.917397 systemd[1]: Stopped systemd-sysctl.service.
May 10 00:49:35.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.918962 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 00:49:35.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.919027 systemd[1]: Stopped systemd-modules-load.service.
May 10 00:49:35.920097 systemd[1]: Stopping systemd-udevd.service...
May 10 00:49:35.922757 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 00:49:35.925433 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 00:49:35.925679 systemd[1]: Stopped systemd-udevd.service.
May 10 00:49:35.929179 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 00:49:35.929319 systemd[1]: Stopped network-cleanup.service.
May 10 00:49:35.930275 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 00:49:35.930337 systemd[1]: Closed systemd-udevd-control.socket.
May 10 00:49:35.931341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 00:49:35.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.931394 systemd[1]: Closed systemd-udevd-kernel.socket.
May 10 00:49:35.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.932723 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 00:49:35.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.932787 systemd[1]: Stopped dracut-pre-udev.service.
May 10 00:49:35.934149 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 00:49:35.934212 systemd[1]: Stopped dracut-cmdline.service.
May 10 00:49:35.935404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:49:35.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:35.935464 systemd[1]: Stopped dracut-cmdline-ask.service.
May 10 00:49:35.937583 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 10 00:49:35.957937 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 10 00:49:35.958032 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 10 00:49:35.960029 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 00:49:35.960111 systemd[1]: Stopped kmod-static-nodes.service.
May 10 00:49:35.961125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:49:35.961187 systemd[1]: Stopped systemd-vconsole-setup.service.
May 10 00:49:35.963762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 10 00:49:35.964544 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 00:49:35.964682 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 10 00:49:35.966170 systemd[1]: Reached target initrd-switch-root.target.
May 10 00:49:35.968294 systemd[1]: Starting initrd-switch-root.service...
May 10 00:49:35.985994 systemd[1]: Switching root.
May 10 00:49:36.010064 iscsid[716]: iscsid shutting down.
May 10 00:49:36.010894 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
May 10 00:49:36.010961 systemd-journald[202]: Journal stopped
May 10 00:49:40.016809 kernel: SELinux: Class mctp_socket not defined in policy.
May 10 00:49:40.016925 kernel: SELinux: Class anon_inode not defined in policy.
May 10 00:49:40.016951 kernel: SELinux: the above unknown classes and permissions will be allowed
May 10 00:49:40.016977 kernel: SELinux: policy capability network_peer_controls=1
May 10 00:49:40.017007 kernel: SELinux: policy capability open_perms=1
May 10 00:49:40.017039 kernel: SELinux: policy capability extended_socket_class=1
May 10 00:49:40.017060 kernel: SELinux: policy capability always_check_network=0
May 10 00:49:40.017079 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 00:49:40.017103 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 00:49:40.017130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 00:49:40.017154 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 00:49:40.017185 systemd[1]: Successfully loaded SELinux policy in 58.831ms.
May 10 00:49:40.017225 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.682ms.
May 10 00:49:40.017260 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 00:49:40.017285 systemd[1]: Detected virtualization kvm.
May 10 00:49:40.017306 systemd[1]: Detected architecture x86-64.
May 10 00:49:40.017326 systemd[1]: Detected first boot.
May 10 00:49:40.017364 systemd[1]: Hostname set to .
May 10 00:49:40.017386 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:49:40.017407 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 10 00:49:40.017439 systemd[1]: Populated /etc with preset unit settings.
May 10 00:49:40.017462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:49:40.017484 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:49:40.017507 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:49:40.017538 systemd[1]: iscsid.service: Deactivated successfully.
May 10 00:49:40.017560 systemd[1]: Stopped iscsid.service.
May 10 00:49:40.017581 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 10 00:49:40.017612 systemd[1]: Stopped initrd-switch-root.service.
May 10 00:49:40.017635 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 10 00:49:40.017656 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 10 00:49:40.017676 systemd[1]: Created slice system-addon\x2drun.slice.
May 10 00:49:40.017697 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 10 00:49:40.017718 systemd[1]: Created slice system-getty.slice.
May 10 00:49:40.017749 systemd[1]: Created slice system-modprobe.slice.
May 10 00:49:40.017774 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 10 00:49:40.017794 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 10 00:49:40.017829 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 10 00:49:40.017851 systemd[1]: Created slice user.slice.
May 10 00:49:40.017900 systemd[1]: Started systemd-ask-password-console.path.
May 10 00:49:40.017923 systemd[1]: Started systemd-ask-password-wall.path.
May 10 00:49:40.017944 systemd[1]: Set up automount boot.automount.
May 10 00:49:40.017966 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 10 00:49:40.017999 systemd[1]: Stopped target initrd-switch-root.target.
May 10 00:49:40.018022 systemd[1]: Stopped target initrd-fs.target.
May 10 00:49:40.018042 systemd[1]: Stopped target initrd-root-fs.target.
May 10 00:49:40.018069 systemd[1]: Reached target integritysetup.target.
May 10 00:49:40.018090 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:49:40.018110 systemd[1]: Reached target remote-fs.target.
May 10 00:49:40.018131 systemd[1]: Reached target slices.target.
May 10 00:49:40.018152 systemd[1]: Reached target swap.target.
May 10 00:49:40.018172 systemd[1]: Reached target torcx.target.
May 10 00:49:40.018192 systemd[1]: Reached target veritysetup.target.
May 10 00:49:40.018228 systemd[1]: Listening on systemd-coredump.socket.
May 10 00:49:40.018250 systemd[1]: Listening on systemd-initctl.socket.
May 10 00:49:40.018270 systemd[1]: Listening on systemd-networkd.socket.
May 10 00:49:40.018298 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 00:49:40.018319 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 00:49:40.018340 systemd[1]: Listening on systemd-userdbd.socket.
May 10 00:49:40.018360 systemd[1]: Mounting dev-hugepages.mount...
May 10 00:49:40.018395 systemd[1]: Mounting dev-mqueue.mount...
May 10 00:49:40.018417 systemd[1]: Mounting media.mount...
May 10 00:49:40.018449 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:49:40.018473 systemd[1]: Mounting sys-kernel-debug.mount...
May 10 00:49:40.018493 systemd[1]: Mounting sys-kernel-tracing.mount...
May 10 00:49:40.018513 systemd[1]: Mounting tmp.mount...
May 10 00:49:40.018535 systemd[1]: Starting flatcar-tmpfiles.service...
May 10 00:49:40.018555 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:49:40.018583 systemd[1]: Starting kmod-static-nodes.service...
May 10 00:49:40.018604 systemd[1]: Starting modprobe@configfs.service...
May 10 00:49:40.018631 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:49:40.018663 systemd[1]: Starting modprobe@drm.service...
May 10 00:49:40.018686 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:49:40.018706 systemd[1]: Starting modprobe@fuse.service...
May 10 00:49:40.018727 systemd[1]: Starting modprobe@loop.service...
May 10 00:49:40.018761 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 00:49:40.018784 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 10 00:49:40.018805 systemd[1]: Stopped systemd-fsck-root.service.
May 10 00:49:40.018826 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 10 00:49:40.018846 systemd[1]: Stopped systemd-fsck-usr.service.
May 10 00:49:40.023442 systemd[1]: Stopped systemd-journald.service.
May 10 00:49:40.023472 kernel: fuse: init (API version 7.34)
May 10 00:49:40.023494 systemd[1]: Starting systemd-journald.service...
May 10 00:49:40.023515 kernel: loop: module loaded
May 10 00:49:40.023536 systemd[1]: Starting systemd-modules-load.service...
May 10 00:49:40.023557 systemd[1]: Starting systemd-network-generator.service...
May 10 00:49:40.023577 systemd[1]: Starting systemd-remount-fs.service...
May 10 00:49:40.023598 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:49:40.023618 systemd[1]: verity-setup.service: Deactivated successfully.
May 10 00:49:40.023656 systemd[1]: Stopped verity-setup.service.
May 10 00:49:40.023679 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:49:40.023700 systemd[1]: Mounted dev-hugepages.mount.
May 10 00:49:40.023720 systemd[1]: Mounted dev-mqueue.mount.
May 10 00:49:40.023753 systemd[1]: Mounted media.mount.
May 10 00:49:40.023777 systemd[1]: Mounted sys-kernel-debug.mount.
May 10 00:49:40.023798 systemd[1]: Mounted sys-kernel-tracing.mount.
May 10 00:49:40.023819 systemd[1]: Mounted tmp.mount.
May 10 00:49:40.023843 systemd-journald[987]: Journal started
May 10 00:49:40.023940 systemd-journald[987]: Runtime Journal (/run/log/journal/5f3908e3cfef452baaf9d22112409d5a) is 4.7M, max 38.1M, 33.3M free.
May 10 00:49:36.173000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 00:49:36.249000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:49:36.249000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:49:36.250000 audit: BPF prog-id=10 op=LOAD
May 10 00:49:36.250000 audit: BPF prog-id=10 op=UNLOAD
May 10 00:49:36.250000 audit: BPF prog-id=11 op=LOAD
May 10 00:49:36.250000 audit: BPF prog-id=11 op=UNLOAD
May 10 00:49:36.381000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 10 00:49:36.381000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:49:36.381000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 00:49:36.385000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 10 00:49:36.385000 audit[902]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:49:36.385000 audit: CWD cwd="/"
May 10 00:49:36.385000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:49:36.385000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:49:36.385000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 00:49:39.759000 audit: BPF prog-id=12 op=LOAD
May 10 00:49:39.759000 audit: BPF prog-id=3 op=UNLOAD
May 10 00:49:39.759000 audit: BPF prog-id=13 op=LOAD
May 10 00:49:39.759000 audit: BPF prog-id=14 op=LOAD
May 10 00:49:39.759000 audit: BPF prog-id=4 op=UNLOAD
May 10 00:49:39.759000 audit: BPF prog-id=5 op=UNLOAD
May 10 00:49:39.760000 audit: BPF prog-id=15 op=LOAD
May 10 00:49:39.760000 audit: BPF prog-id=12 op=UNLOAD
May 10 00:49:39.760000 audit: BPF prog-id=16 op=LOAD
May 10 00:49:39.760000 audit: BPF prog-id=17 op=LOAD
May 10 00:49:39.760000 audit: BPF prog-id=13 op=UNLOAD
May 10 00:49:39.760000 audit: BPF prog-id=14 op=UNLOAD
May 10 00:49:40.029924 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:49:40.029972 systemd[1]: Started systemd-journald.service.
May 10 00:49:39.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.772000 audit: BPF prog-id=15 op=UNLOAD
May 10 00:49:39.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.960000 audit: BPF prog-id=18 op=LOAD
May 10 00:49:39.961000 audit: BPF prog-id=19 op=LOAD
May 10 00:49:39.961000 audit: BPF prog-id=20 op=LOAD
May 10 00:49:39.961000 audit: BPF prog-id=16 op=UNLOAD
May 10 00:49:39.961000 audit: BPF prog-id=17 op=UNLOAD
May 10 00:49:39.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.013000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 10 00:49:40.013000 audit[987]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff4c1437c0 a2=4000 a3=7fff4c14385c items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:49:40.013000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 10 00:49:40.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:39.755358 systemd[1]: Queued start job for default target multi-user.target.
May 10 00:49:36.377766 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:49:39.755392 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 10 00:49:36.378450 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 00:49:39.763042 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 00:49:36.378490 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 00:49:40.030477 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 00:49:40.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:40.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:49:36.378545 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 10 00:49:40.030676 systemd[1]: Finished modprobe@configfs.service.
May 10 00:49:36.378563 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 10 00:49:40.031778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:49:36.378622 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 10 00:49:40.032735 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:49:36.378644 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 10 00:49:40.033943 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:49:36.379054 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 10 00:49:40.034142 systemd[1]: Finished modprobe@drm.service.
May 10 00:49:36.379127 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 00:49:40.035595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:49:36.379155 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 00:49:40.035851 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:49:36.381725 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 10 00:49:40.037034 systemd[1]: Finished flatcar-tmpfiles.service.
May 10 00:49:36.381785 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 10 00:49:40.038098 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:49:36.381833 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 10 00:49:40.038278 systemd[1]: Finished modprobe@fuse.service. May 10 00:49:36.381861 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 10 00:49:40.039397 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:49:36.381928 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 10 00:49:40.039574 systemd[1]: Finished modprobe@loop.service. 
May 10 00:49:36.381955 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 10 00:49:39.133736 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:49:39.134584 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:49:39.134893 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:49:39.135319 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 10 00:49:39.135430 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 10 00:49:39.135568 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-05-10T00:49:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" 
TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 10 00:49:40.047053 systemd[1]: Finished systemd-modules-load.service. May 10 00:49:40.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.049073 systemd[1]: Finished systemd-network-generator.service. May 10 00:49:40.050134 systemd[1]: Finished systemd-remount-fs.service. May 10 00:49:40.051601 systemd[1]: Reached target network-pre.target. May 10 00:49:40.054595 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 10 00:49:40.064111 systemd[1]: Mounting sys-kernel-config.mount... May 10 00:49:40.064799 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:49:40.067383 systemd[1]: Starting systemd-hwdb-update.service... May 10 00:49:40.070270 systemd[1]: Starting systemd-journal-flush.service... May 10 00:49:40.071821 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:49:40.076093 systemd[1]: Starting systemd-random-seed.service... May 10 00:49:40.077005 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 10 00:49:40.079105 systemd[1]: Starting systemd-sysctl.service... May 10 00:49:40.084648 systemd[1]: Starting systemd-sysusers.service... May 10 00:49:40.088659 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 00:49:40.089496 systemd[1]: Mounted sys-kernel-config.mount. May 10 00:49:40.093835 systemd-journald[987]: Time spent on flushing to /var/log/journal/5f3908e3cfef452baaf9d22112409d5a is 40.663ms for 1298 entries. May 10 00:49:40.093835 systemd-journald[987]: System Journal (/var/log/journal/5f3908e3cfef452baaf9d22112409d5a) is 8.0M, max 584.8M, 576.8M free. May 10 00:49:40.157803 systemd-journald[987]: Received client request to flush runtime journal. May 10 00:49:40.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.110102 systemd[1]: Finished systemd-random-seed.service. May 10 00:49:40.111032 systemd[1]: Reached target first-boot-complete.target. May 10 00:49:40.128679 systemd[1]: Finished systemd-sysctl.service. May 10 00:49:40.146893 systemd[1]: Finished systemd-sysusers.service. 
May 10 00:49:40.150058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:49:40.158795 systemd[1]: Finished systemd-journal-flush.service. May 10 00:49:40.190245 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:49:40.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.231095 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:49:40.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.233638 systemd[1]: Starting systemd-udev-settle.service... May 10 00:49:40.245215 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:49:40.750174 systemd[1]: Finished systemd-hwdb-update.service. May 10 00:49:40.758231 kernel: kauditd_printk_skb: 102 callbacks suppressed May 10 00:49:40.758319 kernel: audit: type=1130 audit(1746838180.750:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.760248 kernel: audit: type=1334 audit(1746838180.757:143): prog-id=21 op=LOAD May 10 00:49:40.757000 audit: BPF prog-id=21 op=LOAD May 10 00:49:40.759422 systemd[1]: Starting systemd-udevd.service... 
May 10 00:49:40.757000 audit: BPF prog-id=22 op=LOAD May 10 00:49:40.757000 audit: BPF prog-id=7 op=UNLOAD May 10 00:49:40.757000 audit: BPF prog-id=8 op=UNLOAD May 10 00:49:40.764656 kernel: audit: type=1334 audit(1746838180.757:144): prog-id=22 op=LOAD May 10 00:49:40.764723 kernel: audit: type=1334 audit(1746838180.757:145): prog-id=7 op=UNLOAD May 10 00:49:40.764778 kernel: audit: type=1334 audit(1746838180.757:146): prog-id=8 op=UNLOAD May 10 00:49:40.790883 systemd-udevd[1014]: Using default interface naming scheme 'v252'. May 10 00:49:40.824393 systemd[1]: Started systemd-udevd.service. May 10 00:49:40.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.833913 kernel: audit: type=1130 audit(1746838180.824:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.836165 systemd[1]: Starting systemd-networkd.service... May 10 00:49:40.825000 audit: BPF prog-id=23 op=LOAD May 10 00:49:40.839895 kernel: audit: type=1334 audit(1746838180.825:148): prog-id=23 op=LOAD May 10 00:49:40.850000 audit: BPF prog-id=24 op=LOAD May 10 00:49:40.850000 audit: BPF prog-id=25 op=LOAD May 10 00:49:40.855642 kernel: audit: type=1334 audit(1746838180.850:149): prog-id=24 op=LOAD May 10 00:49:40.855711 kernel: audit: type=1334 audit(1746838180.850:150): prog-id=25 op=LOAD May 10 00:49:40.858504 kernel: audit: type=1334 audit(1746838180.850:151): prog-id=26 op=LOAD May 10 00:49:40.850000 audit: BPF prog-id=26 op=LOAD May 10 00:49:40.856002 systemd[1]: Starting systemd-userdbd.service... May 10 00:49:40.915369 systemd[1]: Started systemd-userdbd.service. 
May 10 00:49:40.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:40.964032 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 10 00:49:41.032153 systemd-networkd[1020]: lo: Link UP May 10 00:49:41.032168 systemd-networkd[1020]: lo: Gained carrier May 10 00:49:41.033573 systemd-networkd[1020]: Enumeration completed May 10 00:49:41.033732 systemd[1]: Started systemd-networkd.service. May 10 00:49:41.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.034748 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:49:41.038532 systemd-networkd[1020]: eth0: Link UP May 10 00:49:41.038546 systemd-networkd[1020]: eth0: Gained carrier May 10 00:49:41.053256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 10 00:49:41.064786 systemd-networkd[1020]: eth0: DHCPv4 address 10.244.24.230/30, gateway 10.244.24.229 acquired from 10.244.24.229 May 10 00:49:41.064953 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 10 00:49:41.071909 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:49:41.073915 kernel: ACPI: button: Power Button [PWRF] May 10 00:49:41.115000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 00:49:41.115000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556f4c07f8c0 a1=338ac a2=7fb7b0cdcbc5 a3=5 items=110 ppid=1014 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:49:41.115000 audit: CWD cwd="/" May 10 00:49:41.115000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=1 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=2 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=3 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=4 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=5 name=(null) inode=13925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=6 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=7 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=8 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=9 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=10 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=11 name=(null) inode=13928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=12 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=13 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=14 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=15 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=16 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=17 name=(null) inode=13931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=18 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=19 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=20 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=21 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=22 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=23 name=(null) inode=13934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=24 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=25 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=26 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=27 name=(null) inode=13936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=28 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=29 name=(null) inode=13937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=30 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=31 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
00:49:41.115000 audit: PATH item=32 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=33 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=34 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=35 name=(null) inode=13940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=36 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=37 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=38 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=39 name=(null) inode=13942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=40 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=41 
name=(null) inode=13943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=42 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=43 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=44 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=45 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=46 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=47 name=(null) inode=13946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=48 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=49 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=50 name=(null) inode=13944 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=51 name=(null) inode=13948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=52 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=53 name=(null) inode=13949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=55 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=56 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=57 name=(null) inode=13951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=58 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=59 name=(null) inode=13952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=60 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=61 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=62 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=63 name=(null) inode=13954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=64 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=65 name=(null) inode=13955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=66 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=67 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=68 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=69 name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=70 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=71 name=(null) inode=13958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=72 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=73 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=74 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=75 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=76 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=77 name=(null) inode=13961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=78 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=79 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=80 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=81 name=(null) inode=13963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=82 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=83 name=(null) inode=13964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=84 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=85 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=86 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
00:49:41.115000 audit: PATH item=87 name=(null) inode=13966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=88 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=89 name=(null) inode=13967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=90 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=91 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=92 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=93 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=94 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=95 name=(null) inode=13970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=96 
name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=97 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=98 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=99 name=(null) inode=13972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=100 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=101 name=(null) inode=13973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=102 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=103 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=104 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=105 name=(null) inode=13975 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=106 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=107 name=(null) inode=13976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PATH item=109 name=(null) inode=13977 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:49:41.115000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:49:41.158918 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 May 10 00:49:41.175901 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 10 00:49:41.200956 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 10 00:49:41.201246 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 10 00:49:41.335490 systemd[1]: Finished systemd-udev-settle.service. May 10 00:49:41.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.337979 systemd[1]: Starting lvm2-activation-early.service... May 10 00:49:41.361099 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 10 00:49:41.392379 systemd[1]: Finished lvm2-activation-early.service. May 10 00:49:41.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.393325 systemd[1]: Reached target cryptsetup.target. May 10 00:49:41.395655 systemd[1]: Starting lvm2-activation.service... May 10 00:49:41.401685 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:49:41.425537 systemd[1]: Finished lvm2-activation.service. May 10 00:49:41.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.426518 systemd[1]: Reached target local-fs-pre.target. May 10 00:49:41.427203 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:49:41.427250 systemd[1]: Reached target local-fs.target. May 10 00:49:41.427887 systemd[1]: Reached target machines.target. May 10 00:49:41.430475 systemd[1]: Starting ldconfig.service... May 10 00:49:41.431770 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:49:41.431832 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:41.433692 systemd[1]: Starting systemd-boot-update.service... May 10 00:49:41.436590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:49:41.442647 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:49:41.446549 systemd[1]: Starting systemd-sysext.service... 
May 10 00:49:41.456679 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) May 10 00:49:41.463019 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:49:41.467822 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:49:41.544123 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:49:41.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.595266 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:49:41.595538 systemd[1]: Unmounted usr-share-oem.mount. May 10 00:49:41.624789 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:49:41.626933 systemd[1]: Finished systemd-machine-id-commit.service. May 10 00:49:41.629933 kernel: loop0: detected capacity change from 0 to 205544 May 10 00:49:41.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.670024 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:49:41.693352 kernel: loop1: detected capacity change from 0 to 205544 May 10 00:49:41.714931 (sd-sysext)[1058]: Using extensions 'kubernetes'. May 10 00:49:41.716241 (sd-sysext)[1058]: Merged extensions into '/usr'. May 10 00:49:41.732311 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) May 10 00:49:41.732311 systemd-fsck[1055]: /dev/vda1: 790 files, 120688/258078 clusters May 10 00:49:41.736337 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 10 00:49:41.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.739121 systemd[1]: Mounting boot.mount... May 10 00:49:41.765321 systemd[1]: Mounted boot.mount. May 10 00:49:41.770329 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:49:41.772560 systemd[1]: Mounting usr-share-oem.mount... May 10 00:49:41.773568 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:49:41.775448 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:49:41.779736 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:49:41.783441 systemd[1]: Starting modprobe@loop.service... May 10 00:49:41.784674 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:49:41.784858 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:41.785834 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:49:41.791432 systemd[1]: Mounted usr-share-oem.mount. May 10 00:49:41.792573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:49:41.792803 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:49:41.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:41.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.794588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:49:41.794809 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:49:41.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.796185 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:49:41.796371 systemd[1]: Finished modprobe@loop.service. May 10 00:49:41.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.797667 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:49:41.797845 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:49:41.800354 systemd[1]: Finished systemd-sysext.service. 
May 10 00:49:41.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.804216 systemd[1]: Starting ensure-sysext.service... May 10 00:49:41.806852 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:49:41.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:41.809964 systemd[1]: Finished systemd-boot-update.service. May 10 00:49:41.825399 systemd[1]: Reloading. May 10 00:49:41.830843 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:49:41.833593 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:49:41.836443 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:49:41.886426 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-05-10T00:49:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:49:41.887143 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2025-05-10T00:49:41Z" level=info msg="torcx already run" May 10 00:49:42.084742 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:49:42.114819 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 10 00:49:42.114849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:49:42.143806 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:49:42.225000 audit: BPF prog-id=27 op=LOAD May 10 00:49:42.225000 audit: BPF prog-id=24 op=UNLOAD May 10 00:49:42.225000 audit: BPF prog-id=28 op=LOAD May 10 00:49:42.225000 audit: BPF prog-id=29 op=LOAD May 10 00:49:42.225000 audit: BPF prog-id=25 op=UNLOAD May 10 00:49:42.225000 audit: BPF prog-id=26 op=UNLOAD May 10 00:49:42.228000 audit: BPF prog-id=30 op=LOAD May 10 00:49:42.228000 audit: BPF prog-id=31 op=LOAD May 10 00:49:42.228000 audit: BPF prog-id=21 op=UNLOAD May 10 00:49:42.228000 audit: BPF prog-id=22 op=UNLOAD May 10 00:49:42.232000 audit: BPF prog-id=32 op=LOAD May 10 00:49:42.232000 audit: BPF prog-id=23 op=UNLOAD May 10 00:49:42.233000 audit: BPF prog-id=33 op=LOAD May 10 00:49:42.233000 audit: BPF prog-id=18 op=UNLOAD May 10 00:49:42.233000 audit: BPF prog-id=34 op=LOAD May 10 00:49:42.233000 audit: BPF prog-id=35 op=LOAD May 10 00:49:42.233000 audit: BPF prog-id=19 op=UNLOAD May 10 00:49:42.233000 audit: BPF prog-id=20 op=UNLOAD May 10 00:49:42.238613 systemd[1]: Finished ldconfig.service. May 10 00:49:42.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.245263 systemd[1]: Finished systemd-tmpfiles-setup.service. 
May 10 00:49:42.252476 systemd[1]: Starting audit-rules.service... May 10 00:49:42.255349 systemd[1]: Starting clean-ca-certificates.service... May 10 00:49:42.258711 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:49:42.263000 audit: BPF prog-id=36 op=LOAD May 10 00:49:42.266432 systemd[1]: Starting systemd-resolved.service... May 10 00:49:42.269000 audit: BPF prog-id=37 op=LOAD May 10 00:49:42.273347 systemd[1]: Starting systemd-timesyncd.service... May 10 00:49:42.276381 systemd[1]: Starting systemd-update-utmp.service... May 10 00:49:42.286710 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:49:42.292892 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:49:42.297014 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:49:42.300325 systemd[1]: Starting modprobe@loop.service... May 10 00:49:42.301133 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:49:42.301356 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:42.302963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:49:42.304365 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:49:42.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.306738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 10 00:49:42.306953 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:49:42.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.309125 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:49:42.312468 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:49:42.316774 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:49:42.321446 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:49:42.324384 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:49:42.324643 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:42.326087 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:49:42.326320 systemd[1]: Finished modprobe@loop.service. May 10 00:49:42.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:49:42.327589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:49:42.327793 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:49:42.328000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:49:42.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.333765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:49:42.334409 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:49:42.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.340696 systemd[1]: Finished clean-ca-certificates.service. May 10 00:49:42.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.343537 systemd[1]: Finished systemd-update-utmp.service. 
May 10 00:49:42.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.345586 systemd[1]: Finished ensure-sysext.service. May 10 00:49:42.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.349034 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:49:42.351050 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:49:42.355621 systemd[1]: Starting modprobe@drm.service... May 10 00:49:42.358922 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:49:42.361341 systemd[1]: Starting modprobe@loop.service... May 10 00:49:42.363420 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:49:42.363576 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:42.367233 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:49:42.368156 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:49:42.369474 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:49:42.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.371050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 10 00:49:42.371256 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:49:42.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.373447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:49:42.373653 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:49:42.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.374950 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:49:42.375178 systemd[1]: Finished modprobe@loop.service. May 10 00:49:42.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.377569 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 10 00:49:42.377657 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:49:42.380146 systemd[1]: Starting systemd-update-done.service... May 10 00:49:42.392350 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:49:42.392586 systemd[1]: Finished modprobe@drm.service. May 10 00:49:42.394351 systemd[1]: Finished systemd-update-done.service. May 10 00:49:42.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:49:42.404000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:49:42.404000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed2dc9460 a2=420 a3=0 items=0 ppid=1134 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:49:42.404000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:49:42.405344 augenrules[1165]: No rules May 10 00:49:42.406422 systemd[1]: Finished audit-rules.service. May 10 00:49:42.432671 systemd[1]: Started systemd-timesyncd.service. May 10 00:49:42.433640 systemd[1]: Reached target time-set.target. 
May 10 00:49:42.451359 systemd-resolved[1138]: Positive Trust Anchors: May 10 00:49:42.451381 systemd-resolved[1138]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:49:42.451419 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:49:42.459256 systemd-resolved[1138]: Using system hostname 'srv-3yk6k.gb1.brightbox.com'. May 10 00:49:42.461932 systemd[1]: Started systemd-resolved.service. May 10 00:49:42.462751 systemd[1]: Reached target network.target. May 10 00:49:42.477545 systemd[1]: Reached target nss-lookup.target. May 10 00:49:42.478262 systemd[1]: Reached target sysinit.target. May 10 00:49:42.479095 systemd[1]: Started motdgen.path. May 10 00:49:42.479752 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:49:42.480799 systemd[1]: Started logrotate.timer. May 10 00:49:42.481538 systemd[1]: Started mdadm.timer. May 10 00:49:42.482159 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 00:49:42.482820 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:49:42.482893 systemd[1]: Reached target paths.target. May 10 00:49:42.483485 systemd[1]: Reached target timers.target. May 10 00:49:42.484620 systemd[1]: Listening on dbus.socket. May 10 00:49:42.487789 systemd[1]: Starting docker.socket... May 10 00:49:42.492860 systemd[1]: Listening on sshd.socket. 
May 10 00:49:42.493708 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:42.494404 systemd[1]: Listening on docker.socket. May 10 00:49:42.495177 systemd[1]: Reached target sockets.target. May 10 00:49:42.495808 systemd[1]: Reached target basic.target. May 10 00:49:42.496531 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:49:42.496592 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:49:42.498805 systemd[1]: Starting containerd.service... May 10 00:49:42.501101 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 00:49:42.503513 systemd[1]: Starting dbus.service... May 10 00:49:42.508138 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:49:42.511532 systemd[1]: Starting extend-filesystems.service... May 10 00:49:42.512803 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:49:42.515878 systemd[1]: Starting motdgen.service... May 10 00:49:42.522101 systemd[1]: Starting prepare-helm.service... May 10 00:49:42.526459 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:49:42.531991 systemd[1]: Starting sshd-keygen.service... May 10 00:49:42.541149 jq[1178]: false May 10 00:49:42.542119 systemd[1]: Starting systemd-logind.service... May 10 00:49:42.543336 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:49:42.543502 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 10 00:49:42.546967 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 00:49:42.548187 systemd[1]: Starting update-engine.service...
May 10 00:49:42.550740 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 10 00:49:42.555745 jq[1192]: true
May 10 00:49:42.555743 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 10 00:49:42.557285 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 10 00:49:42.561531 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 10 00:49:42.562600 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 10 00:49:42.583176 tar[1194]: linux-amd64/helm
May 10 00:49:42.587594 jq[1196]: true
May 10 00:49:42.597839 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:49:42.597924 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:49:43.459712 systemd-resolved[1138]: Clock change detected. Flushing caches.
May 10 00:49:43.459904 systemd-timesyncd[1139]: Contacted time server 77.104.162.218:123 (0.flatcar.pool.ntp.org).
May 10 00:49:43.460121 systemd-timesyncd[1139]: Initial clock synchronization to Sat 2025-05-10 00:49:43.459269 UTC.
May 10 00:49:43.468809 extend-filesystems[1179]: Found loop1
May 10 00:49:43.475500 extend-filesystems[1179]: Found vda
May 10 00:49:43.475539 systemd[1]: motdgen.service: Deactivated successfully.
May 10 00:49:43.475781 systemd[1]: Finished motdgen.service.
May 10 00:49:43.480015 dbus-daemon[1175]: [system] SELinux support is enabled
May 10 00:49:43.480659 systemd[1]: Started dbus.service.
May 10 00:49:43.483857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 00:49:43.483918 systemd[1]: Reached target system-config.target.
May 10 00:49:43.484603 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 00:49:43.484653 systemd[1]: Reached target user-config.target.
May 10 00:49:43.485402 extend-filesystems[1179]: Found vda1
May 10 00:49:43.487559 dbus-daemon[1175]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1020 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 10 00:49:43.487757 extend-filesystems[1179]: Found vda2
May 10 00:49:43.488542 extend-filesystems[1179]: Found vda3
May 10 00:49:43.489399 extend-filesystems[1179]: Found usr
May 10 00:49:43.491233 extend-filesystems[1179]: Found vda4
May 10 00:49:43.491233 extend-filesystems[1179]: Found vda6
May 10 00:49:43.491233 extend-filesystems[1179]: Found vda7
May 10 00:49:43.494375 extend-filesystems[1179]: Found vda9
May 10 00:49:43.494375 extend-filesystems[1179]: Checking size of /dev/vda9
May 10 00:49:43.506933 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 10 00:49:43.512385 systemd[1]: Starting systemd-hostnamed.service...
May 10 00:49:43.537790 update_engine[1191]: I0510 00:49:43.537189 1191 main.cc:92] Flatcar Update Engine starting
May 10 00:49:43.543304 systemd[1]: Started update-engine.service.
May 10 00:49:43.545214 update_engine[1191]: I0510 00:49:43.545017 1191 update_check_scheduler.cc:74] Next update check in 7m20s
May 10 00:49:43.548713 systemd[1]: Started locksmithd.service.
May 10 00:49:43.558700 extend-filesystems[1179]: Resized partition /dev/vda9
May 10 00:49:43.572237 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021)
May 10 00:49:43.577375 bash[1228]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:49:43.578839 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 10 00:49:43.585069 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
May 10 00:49:43.591195 env[1199]: time="2025-05-10T00:49:43.591072593Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 10 00:49:43.637521 systemd-logind[1189]: Watching system buttons on /dev/input/event2 (Power Button)
May 10 00:49:43.638156 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 10 00:49:43.638963 systemd-logind[1189]: New seat seat0.
May 10 00:49:43.643537 systemd[1]: Started systemd-logind.service.
May 10 00:49:43.697145 env[1199]: time="2025-05-10T00:49:43.697087739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 10 00:49:43.708893 env[1199]: time="2025-05-10T00:49:43.708808249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 10 00:49:43.718023 env[1199]: time="2025-05-10T00:49:43.717951573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 10 00:49:43.718367 env[1199]: time="2025-05-10T00:49:43.718334827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 10 00:49:43.719641 env[1199]: time="2025-05-10T00:49:43.719604131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:49:43.719987 env[1199]: time="2025-05-10T00:49:43.719954499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 10 00:49:43.720291 env[1199]: time="2025-05-10T00:49:43.720257085Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 10 00:49:43.720562 env[1199]: time="2025-05-10T00:49:43.720530549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 10 00:49:43.723764 env[1199]: time="2025-05-10T00:49:43.723732340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 10 00:49:43.729507 env[1199]: time="2025-05-10T00:49:43.729170197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 10 00:49:43.730452 env[1199]: time="2025-05-10T00:49:43.730403073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:49:43.730847 env[1199]: time="2025-05-10T00:49:43.730815146Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 10 00:49:43.731395 env[1199]: time="2025-05-10T00:49:43.731361286Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 10 00:49:43.731584 env[1199]: time="2025-05-10T00:49:43.731553668Z" level=info msg="metadata content store policy set" policy=shared
May 10 00:49:43.750128 kernel: EXT4-fs (vda9): resized filesystem to 15121403
May 10 00:49:43.769593 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 10 00:49:43.769593 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 8
May 10 00:49:43.769593 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
May 10 00:49:43.775721 extend-filesystems[1179]: Resized filesystem in /dev/vda9
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.770831515Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.770900077Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.770928882Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771029936Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771081383Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771107409Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771129373Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771152945Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771173585Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771205561Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771229314Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771256124Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771438649Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 10 00:49:43.778679 env[1199]: time="2025-05-10T00:49:43.771594256Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 10 00:49:43.770022 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.771916856Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.771971804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.771998834Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772098071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772124752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772150840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772176854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772200435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772221010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772245091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772264450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772294957Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772512825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772539150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 10 00:49:43.783013 env[1199]: time="2025-05-10T00:49:43.772560845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 10 00:49:43.770279 systemd[1]: Finished extend-filesystems.service.
May 10 00:49:43.784498 env[1199]: time="2025-05-10T00:49:43.772581315Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 10 00:49:43.784498 env[1199]: time="2025-05-10T00:49:43.772604443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 10 00:49:43.784498 env[1199]: time="2025-05-10T00:49:43.772622785Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 10 00:49:43.784498 env[1199]: time="2025-05-10T00:49:43.772676415Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 10 00:49:43.784498 env[1199]: time="2025-05-10T00:49:43.772742872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 10 00:49:43.777828 systemd-networkd[1020]: eth0: Gained IPv6LL
May 10 00:49:43.780966 systemd[1]: Started containerd.service.
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.773029831Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.776233416Z" level=info msg="Connect containerd service"
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.776326216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.780178937Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.780690294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.780772346Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 10 00:49:43.785997 env[1199]: time="2025-05-10T00:49:43.780883045Z" level=info msg="containerd successfully booted in 0.206083s"
May 10 00:49:43.782968 systemd[1]: Finished systemd-networkd-wait-online.service.
May 10 00:49:43.789436 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 10 00:49:43.804708 env[1199]: time="2025-05-10T00:49:43.787880941Z" level=info msg="Start subscribing containerd event"
May 10 00:49:43.804708 env[1199]: time="2025-05-10T00:49:43.787986281Z" level=info msg="Start recovering state"
May 10 00:49:43.804708 env[1199]: time="2025-05-10T00:49:43.788295636Z" level=info msg="Start event monitor"
May 10 00:49:43.804708 env[1199]: time="2025-05-10T00:49:43.788346147Z" level=info msg="Start snapshots syncer"
May 10 00:49:43.804708 env[1199]: time="2025-05-10T00:49:43.788372539Z" level=info msg="Start cni network conf syncer for default"
May 10 00:49:43.804708 env[1199]: time="2025-05-10T00:49:43.788389349Z" level=info msg="Start streaming server"
May 10 00:49:43.784011 systemd[1]: Reached target network-online.target.
May 10 00:49:43.790336 dbus-daemon[1175]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1222 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 10 00:49:43.786929 systemd[1]: Starting kubelet.service...
May 10 00:49:43.789719 systemd[1]: Started systemd-hostnamed.service.
May 10 00:49:43.795354 systemd[1]: Starting polkit.service...
May 10 00:49:43.821931 polkitd[1239]: Started polkitd version 121
May 10 00:49:43.841331 polkitd[1239]: Loading rules from directory /etc/polkit-1/rules.d
May 10 00:49:43.841450 polkitd[1239]: Loading rules from directory /usr/share/polkit-1/rules.d
May 10 00:49:43.843635 polkitd[1239]: Finished loading, compiling and executing 2 rules
May 10 00:49:43.844535 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 10 00:49:43.844758 systemd[1]: Started polkit.service.
May 10 00:49:43.846632 polkitd[1239]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 10 00:49:43.875417 systemd-hostnamed[1222]: Hostname set to (static)
May 10 00:49:44.352841 tar[1194]: linux-amd64/LICENSE
May 10 00:49:44.353441 tar[1194]: linux-amd64/README.md
May 10 00:49:44.361654 systemd[1]: Finished prepare-helm.service.
May 10 00:49:44.512434 locksmithd[1227]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 00:49:44.701492 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 10 00:49:44.735009 systemd[1]: Finished sshd-keygen.service.
May 10 00:49:44.738424 systemd[1]: Starting issuegen.service...
May 10 00:49:44.748713 systemd[1]: issuegen.service: Deactivated successfully.
May 10 00:49:44.748985 systemd[1]: Finished issuegen.service.
May 10 00:49:44.751907 systemd[1]: Starting systemd-user-sessions.service...
May 10 00:49:44.761868 systemd[1]: Finished systemd-user-sessions.service.
May 10 00:49:44.764653 systemd[1]: Started getty@tty1.service.
May 10 00:49:44.767461 systemd[1]: Started serial-getty@ttyS0.service.
May 10 00:49:44.768679 systemd[1]: Reached target getty.target.
May 10 00:49:44.977891 systemd[1]: Started kubelet.service.
May 10 00:49:45.288657 systemd-networkd[1020]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:639:24:19ff:fef4:18e6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:639:24:19ff:fef4:18e6/64 assigned by NDisc.
May 10 00:49:45.288675 systemd-networkd[1020]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
May 10 00:49:45.613818 kubelet[1268]: E0510 00:49:45.613684 1268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:49:45.616225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:49:45.616460 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:49:45.616971 systemd[1]: kubelet.service: Consumed 1.064s CPU time.
May 10 00:49:50.507662 coreos-metadata[1174]: May 10 00:49:50.507 WARN failed to locate config-drive, using the metadata service API instead
May 10 00:49:50.560552 coreos-metadata[1174]: May 10 00:49:50.560 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 10 00:49:50.587159 coreos-metadata[1174]: May 10 00:49:50.586 INFO Fetch successful
May 10 00:49:50.587537 coreos-metadata[1174]: May 10 00:49:50.587 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 10 00:49:50.613178 coreos-metadata[1174]: May 10 00:49:50.612 INFO Fetch successful
May 10 00:49:50.616800 unknown[1174]: wrote ssh authorized keys file for user: core
May 10 00:49:50.629806 update-ssh-keys[1277]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:49:50.630913 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
May 10 00:49:50.631457 systemd[1]: Reached target multi-user.target.
May 10 00:49:50.634714 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 10 00:49:50.645151 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 10 00:49:50.645481 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 10 00:49:50.646465 systemd[1]: Startup finished in 1.148s (kernel) + 8.424s (initrd) + 13.714s (userspace) = 23.286s.
May 10 00:49:53.591296 systemd[1]: Created slice system-sshd.slice.
May 10 00:49:53.593809 systemd[1]: Started sshd@0-10.244.24.230:22-139.178.68.195:39192.service.
May 10 00:49:54.507765 sshd[1280]: Accepted publickey for core from 139.178.68.195 port 39192 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:49:54.511315 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:54.526631 systemd[1]: Created slice user-500.slice.
May 10 00:49:54.530396 systemd[1]: Starting user-runtime-dir@500.service...
May 10 00:49:54.534704 systemd-logind[1189]: New session 1 of user core.
May 10 00:49:54.594825 systemd[1]: Finished user-runtime-dir@500.service.
May 10 00:49:54.597571 systemd[1]: Starting user@500.service...
May 10 00:49:54.603660 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:54.705373 systemd[1283]: Queued start job for default target default.target.
May 10 00:49:54.706723 systemd[1283]: Reached target paths.target.
May 10 00:49:54.706936 systemd[1283]: Reached target sockets.target.
May 10 00:49:54.707148 systemd[1283]: Reached target timers.target.
May 10 00:49:54.707316 systemd[1283]: Reached target basic.target.
May 10 00:49:54.707549 systemd[1283]: Reached target default.target.
May 10 00:49:54.707646 systemd[1]: Started user@500.service.
May 10 00:49:54.707868 systemd[1283]: Startup finished in 95ms.
May 10 00:49:54.709998 systemd[1]: Started session-1.scope.
May 10 00:49:55.332294 systemd[1]: Started sshd@1-10.244.24.230:22-139.178.68.195:59924.service.
May 10 00:49:55.867829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 10 00:49:55.868155 systemd[1]: Stopped kubelet.service.
May 10 00:49:55.868229 systemd[1]: kubelet.service: Consumed 1.064s CPU time.
May 10 00:49:55.870461 systemd[1]: Starting kubelet.service...
May 10 00:49:56.038667 systemd[1]: Started kubelet.service.
May 10 00:49:56.108076 kubelet[1298]: E0510 00:49:56.107996 1298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:49:56.112151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:49:56.112374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:49:56.221028 sshd[1292]: Accepted publickey for core from 139.178.68.195 port 59924 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:49:56.223464 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:56.231511 systemd[1]: Started session-2.scope.
May 10 00:49:56.232287 systemd-logind[1189]: New session 2 of user core.
May 10 00:49:56.838083 sshd[1292]: pam_unix(sshd:session): session closed for user core
May 10 00:49:56.842206 systemd[1]: sshd@1-10.244.24.230:22-139.178.68.195:59924.service: Deactivated successfully.
May 10 00:49:56.843215 systemd[1]: session-2.scope: Deactivated successfully.
May 10 00:49:56.843917 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit.
May 10 00:49:56.844952 systemd-logind[1189]: Removed session 2.
May 10 00:49:56.984351 systemd[1]: Started sshd@2-10.244.24.230:22-139.178.68.195:59936.service.
May 10 00:49:57.870940 sshd[1308]: Accepted publickey for core from 139.178.68.195 port 59936 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:49:57.872881 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:57.879446 systemd-logind[1189]: New session 3 of user core.
May 10 00:49:57.880182 systemd[1]: Started session-3.scope.
May 10 00:49:58.480457 sshd[1308]: pam_unix(sshd:session): session closed for user core
May 10 00:49:58.484159 systemd[1]: sshd@2-10.244.24.230:22-139.178.68.195:59936.service: Deactivated successfully.
May 10 00:49:58.485116 systemd[1]: session-3.scope: Deactivated successfully.
May 10 00:49:58.485959 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit.
May 10 00:49:58.487590 systemd-logind[1189]: Removed session 3.
May 10 00:49:58.625943 systemd[1]: Started sshd@3-10.244.24.230:22-139.178.68.195:59944.service.
May 10 00:49:59.509111 sshd[1314]: Accepted publickey for core from 139.178.68.195 port 59944 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:49:59.511690 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:59.518660 systemd[1]: Started session-4.scope.
May 10 00:49:59.519104 systemd-logind[1189]: New session 4 of user core.
May 10 00:50:00.123818 sshd[1314]: pam_unix(sshd:session): session closed for user core
May 10 00:50:00.127506 systemd[1]: sshd@3-10.244.24.230:22-139.178.68.195:59944.service: Deactivated successfully.
May 10 00:50:00.128471 systemd[1]: session-4.scope: Deactivated successfully.
May 10 00:50:00.130123 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit.
May 10 00:50:00.131915 systemd-logind[1189]: Removed session 4.
May 10 00:50:00.273878 systemd[1]: Started sshd@4-10.244.24.230:22-139.178.68.195:59954.service.
May 10 00:50:01.168998 sshd[1320]: Accepted publickey for core from 139.178.68.195 port 59954 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:50:01.171083 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:50:01.179764 systemd[1]: Started session-5.scope.
May 10 00:50:01.180532 systemd-logind[1189]: New session 5 of user core.
May 10 00:50:01.659820 sudo[1323]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 10 00:50:01.660229 sudo[1323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 10 00:50:01.701345 systemd[1]: Starting docker.service...
May 10 00:50:01.757830 env[1334]: time="2025-05-10T00:50:01.757711905Z" level=info msg="Starting up"
May 10 00:50:01.761492 env[1334]: time="2025-05-10T00:50:01.761343499Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 10 00:50:01.761492 env[1334]: time="2025-05-10T00:50:01.761380689Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 10 00:50:01.761492 env[1334]: time="2025-05-10T00:50:01.761421867Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 10 00:50:01.761492 env[1334]: time="2025-05-10T00:50:01.761447374Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 10 00:50:01.765161 env[1334]: time="2025-05-10T00:50:01.765103340Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 10 00:50:01.765161 env[1334]: time="2025-05-10T00:50:01.765157050Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 10 00:50:01.765327 env[1334]: time="2025-05-10T00:50:01.765182203Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 10 00:50:01.765327 env[1334]: time="2025-05-10T00:50:01.765223762Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 10 00:50:01.801498 env[1334]: time="2025-05-10T00:50:01.801446719Z" level=info msg="Loading containers: start."
May 10 00:50:01.999107 kernel: Initializing XFRM netlink socket
May 10 00:50:02.059188 env[1334]: time="2025-05-10T00:50:02.058319524Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 10 00:50:02.168730 systemd-networkd[1020]: docker0: Link UP
May 10 00:50:02.189200 env[1334]: time="2025-05-10T00:50:02.189131240Z" level=info msg="Loading containers: done."
May 10 00:50:02.215817 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck630944009-merged.mount: Deactivated successfully.
May 10 00:50:02.220062 env[1334]: time="2025-05-10T00:50:02.219999361Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 10 00:50:02.220493 env[1334]: time="2025-05-10T00:50:02.220461262Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 10 00:50:02.220811 env[1334]: time="2025-05-10T00:50:02.220782949Z" level=info msg="Daemon has completed initialization"
May 10 00:50:02.244370 systemd[1]: Started docker.service.
May 10 00:50:02.255396 env[1334]: time="2025-05-10T00:50:02.255325420Z" level=info msg="API listen on /run/docker.sock"
May 10 00:50:03.572470 env[1199]: time="2025-05-10T00:50:03.572257715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 10 00:50:04.629913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount11512243.mount: Deactivated successfully.
May 10 00:50:06.363684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 10 00:50:06.363982 systemd[1]: Stopped kubelet.service.
May 10 00:50:06.366717 systemd[1]: Starting kubelet.service...
May 10 00:50:06.517346 systemd[1]: Started kubelet.service.
May 10 00:50:06.629247 kubelet[1465]: E0510 00:50:06.628836 1465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:50:06.630851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:50:06.631089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:50:07.747568 env[1199]: time="2025-05-10T00:50:07.747390126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:07.751742 env[1199]: time="2025-05-10T00:50:07.751679646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:07.754680 env[1199]: time="2025-05-10T00:50:07.754642778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:07.755954 env[1199]: time="2025-05-10T00:50:07.755918438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:07.757691 env[1199]: time="2025-05-10T00:50:07.757610513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 10 00:50:07.760740 env[1199]: time="2025-05-10T00:50:07.760675753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 10 00:50:10.238530 env[1199]: time="2025-05-10T00:50:10.238326610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:10.241161 env[1199]: time="2025-05-10T00:50:10.241126766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:10.244380 env[1199]: time="2025-05-10T00:50:10.244335124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:10.246943 env[1199]: time="2025-05-10T00:50:10.246902897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:10.248441 env[1199]: time="2025-05-10T00:50:10.248355605Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 10 00:50:10.250661 env[1199]: time="2025-05-10T00:50:10.250626736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 10 00:50:12.224294 env[1199]: time="2025-05-10T00:50:12.224080519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:12.227192 env[1199]: time="2025-05-10T00:50:12.227141892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:12.230244 env[1199]: time="2025-05-10T00:50:12.230202734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:12.234575 env[1199]: time="2025-05-10T00:50:12.233403879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:12.234829 env[1199]: time="2025-05-10T00:50:12.234540864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 10 00:50:12.236470 env[1199]: time="2025-05-10T00:50:12.236433403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 10 00:50:13.844585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436780472.mount: Deactivated successfully.
May 10 00:50:14.888475 env[1199]: time="2025-05-10T00:50:14.888308750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:14.891284 env[1199]: time="2025-05-10T00:50:14.891228616Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:14.895423 env[1199]: time="2025-05-10T00:50:14.895383676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:14.899683 env[1199]: time="2025-05-10T00:50:14.899643634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:14.900503 env[1199]: time="2025-05-10T00:50:14.900441497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 10 00:50:14.902449 env[1199]: time="2025-05-10T00:50:14.902293804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 10 00:50:15.305190 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 10 00:50:15.481939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471302926.mount: Deactivated successfully.
May 10 00:50:16.817441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 10 00:50:16.817994 systemd[1]: Stopped kubelet.service.
May 10 00:50:16.822096 systemd[1]: Starting kubelet.service...
May 10 00:50:17.075402 systemd[1]: Started kubelet.service.
May 10 00:50:17.109035 env[1199]: time="2025-05-10T00:50:17.108948011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.112741 env[1199]: time="2025-05-10T00:50:17.112665033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.116791 env[1199]: time="2025-05-10T00:50:17.116749927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.120769 env[1199]: time="2025-05-10T00:50:17.120724541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.122506 env[1199]: time="2025-05-10T00:50:17.122462767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 10 00:50:17.125340 env[1199]: time="2025-05-10T00:50:17.125102745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 10 00:50:17.151853 kubelet[1477]: E0510 00:50:17.151790 1477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:50:17.154145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:50:17.154409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:50:17.759433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065704862.mount: Deactivated successfully.
May 10 00:50:17.766090 env[1199]: time="2025-05-10T00:50:17.766004582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.768789 env[1199]: time="2025-05-10T00:50:17.768733706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.771516 env[1199]: time="2025-05-10T00:50:17.771469186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.774115 env[1199]: time="2025-05-10T00:50:17.774075334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:17.775075 env[1199]: time="2025-05-10T00:50:17.775012068Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 10 00:50:17.776011 env[1199]: time="2025-05-10T00:50:17.775972280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 10 00:50:18.405224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464108870.mount: Deactivated successfully.
May 10 00:50:22.307195 env[1199]: time="2025-05-10T00:50:22.307031590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:22.310646 env[1199]: time="2025-05-10T00:50:22.310603432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:22.313683 env[1199]: time="2025-05-10T00:50:22.313641003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:22.316712 env[1199]: time="2025-05-10T00:50:22.316675055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:50:22.319357 env[1199]: time="2025-05-10T00:50:22.319277375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 10 00:50:27.311709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 10 00:50:27.312284 systemd[1]: Stopped kubelet.service.
May 10 00:50:27.316812 systemd[1]: Starting kubelet.service...
May 10 00:50:27.342324 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 10 00:50:27.342470 systemd[1]: kubelet.service: Failed with result 'signal'.
May 10 00:50:27.342813 systemd[1]: Stopped kubelet.service.
May 10 00:50:27.346432 systemd[1]: Starting kubelet.service...
May 10 00:50:27.387390 systemd[1]: Reloading.
May 10 00:50:27.538362 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-05-10T00:50:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:50:27.539088 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-05-10T00:50:27Z" level=info msg="torcx already run"
May 10 00:50:27.645304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:50:27.645623 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:50:27.674437 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:50:27.813688 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 10 00:50:27.814016 systemd[1]: kubelet.service: Failed with result 'signal'.
May 10 00:50:27.814579 systemd[1]: Stopped kubelet.service.
May 10 00:50:27.817595 systemd[1]: Starting kubelet.service...
May 10 00:50:27.940778 systemd[1]: Started kubelet.service.
May 10 00:50:28.068204 kubelet[1580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:50:28.068833 kubelet[1580]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 00:50:28.068955 kubelet[1580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:50:28.069277 kubelet[1580]: I0510 00:50:28.069209 1580 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 00:50:28.373991 update_engine[1191]: I0510 00:50:28.373319 1191 update_attempter.cc:509] Updating boot flags...
May 10 00:50:28.657305 kubelet[1580]: I0510 00:50:28.656907 1580 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 10 00:50:28.657543 kubelet[1580]: I0510 00:50:28.657518 1580 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 00:50:28.657972 kubelet[1580]: I0510 00:50:28.657946 1580 server.go:929] "Client rotation is on, will bootstrap in background"
May 10 00:50:28.716436 kubelet[1580]: E0510 00:50:28.716351 1580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.24.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError"
May 10 00:50:28.716643 kubelet[1580]: I0510 00:50:28.716563 1580 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 00:50:28.732594 kubelet[1580]: E0510 00:50:28.732551 1580 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 10 00:50:28.732817 kubelet[1580]: I0510 00:50:28.732788 1580 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 10 00:50:28.740189 kubelet[1580]: I0510 00:50:28.740163 1580 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 00:50:28.742012 kubelet[1580]: I0510 00:50:28.741984 1580 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 10 00:50:28.742486 kubelet[1580]: I0510 00:50:28.742448 1580 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 00:50:28.742892 kubelet[1580]: I0510 00:50:28.742600 1580 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3yk6k.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 10 00:50:28.743252 kubelet[1580]: I0510 00:50:28.743226 1580 topology_manager.go:138] "Creating topology manager with none policy"
May 10 00:50:28.743370 kubelet[1580]: I0510 00:50:28.743349 1580 container_manager_linux.go:300] "Creating device plugin manager"
May 10 00:50:28.743691 kubelet[1580]: I0510 00:50:28.743668 1580 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:50:28.750711 kubelet[1580]: I0510 00:50:28.750684 1580 kubelet.go:408] "Attempting to sync node with API server"
May 10 00:50:28.750858 kubelet[1580]: I0510 00:50:28.750834 1580 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 00:50:28.751075 kubelet[1580]: I0510 00:50:28.751021 1580 kubelet.go:314] "Adding apiserver pod source"
May 10 00:50:28.751253 kubelet[1580]: I0510 00:50:28.751219 1580 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 00:50:28.762475 kubelet[1580]: W0510 00:50:28.762390 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.24.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3yk6k.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused
May 10 00:50:28.762599 kubelet[1580]: E0510 00:50:28.762493 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.24.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3yk6k.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError"
May 10 00:50:28.769523 kubelet[1580]: I0510 00:50:28.769486 1580 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 10 00:50:28.771840 kubelet[1580]: I0510 00:50:28.771814 1580 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 00:50:28.773274 kubelet[1580]: W0510 00:50:28.773175 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.24.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused
May 10 00:50:28.773274 kubelet[1580]: E0510 00:50:28.773256 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.24.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError"
May 10 00:50:28.774066 kubelet[1580]: W0510 00:50:28.774021 1580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 10 00:50:28.776230 kubelet[1580]: I0510 00:50:28.776152 1580 server.go:1269] "Started kubelet"
May 10 00:50:28.778624 kubelet[1580]: I0510 00:50:28.778450 1580 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 00:50:28.780543 kubelet[1580]: I0510 00:50:28.780518 1580 server.go:460] "Adding debug handlers to kubelet server"
May 10 00:50:28.781915 kubelet[1580]: I0510 00:50:28.781844 1580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 00:50:28.782432 kubelet[1580]: I0510 00:50:28.782400 1580 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 00:50:28.784020 kubelet[1580]: E0510 00:50:28.782633 1580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.24.230:6443/api/v1/namespaces/default/events\": dial tcp 10.244.24.230:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-3yk6k.gb1.brightbox.com.183e041d2a45de0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3yk6k.gb1.brightbox.com,UID:srv-3yk6k.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3yk6k.gb1.brightbox.com,},FirstTimestamp:2025-05-10 00:50:28.776115725 +0000 UTC m=+0.824932355,LastTimestamp:2025-05-10 00:50:28.776115725 +0000 UTC m=+0.824932355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3yk6k.gb1.brightbox.com,}"
May 10 00:50:28.789705 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 10 00:50:28.789863 kubelet[1580]: E0510 00:50:28.787392 1580 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 10 00:50:28.790277 kubelet[1580]: I0510 00:50:28.790254 1580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 00:50:28.793317 kubelet[1580]: I0510 00:50:28.793286 1580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 10 00:50:28.794369 kubelet[1580]: I0510 00:50:28.794341 1580 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 10 00:50:28.794569 kubelet[1580]: I0510 00:50:28.794543 1580 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 10 00:50:28.794666 kubelet[1580]: I0510 00:50:28.794656 1580 reconciler.go:26] "Reconciler: start to sync state"
May 10 00:50:28.795423 kubelet[1580]: E0510 00:50:28.795368 1580 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3yk6k.gb1.brightbox.com\" not found"
May 10 00:50:28.795554 kubelet[1580]: E0510 00:50:28.795518 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3yk6k.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.230:6443: connect: connection refused" interval="200ms"
May 10 00:50:28.795892 kubelet[1580]: I0510 00:50:28.795840 1580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 00:50:28.796386 kubelet[1580]: W0510 00:50:28.796318 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.24.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused
May 10 00:50:28.796497 kubelet[1580]: E0510 00:50:28.796405 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.24.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError"
May 10 00:50:28.797992 kubelet[1580]: I0510 00:50:28.797956 1580 factory.go:221] Registration of the containerd container factory successfully
May 10 00:50:28.797992 kubelet[1580]: I0510 00:50:28.797982 1580 factory.go:221] Registration of the systemd container factory successfully
May 10 00:50:28.833732 kubelet[1580]: I0510 00:50:28.833670 1580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 00:50:28.835554 kubelet[1580]: I0510 00:50:28.835528 1580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 00:50:28.835734 kubelet[1580]: I0510 00:50:28.835710 1580 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 00:50:28.835929 kubelet[1580]: I0510 00:50:28.835905 1580 kubelet.go:2321] "Starting kubelet main sync loop"
May 10 00:50:28.836146 kubelet[1580]: E0510 00:50:28.836116 1580 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 00:50:28.840622 kubelet[1580]: W0510 00:50:28.840584 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.24.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused
May 10 00:50:28.840746 kubelet[1580]: E0510 00:50:28.840636 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.24.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError"
May 10 00:50:28.847242 kubelet[1580]: I0510 00:50:28.847211 1580 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 00:50:28.847406 kubelet[1580]: I0510 00:50:28.847370 1580 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 00:50:28.847603 kubelet[1580]: I0510 00:50:28.847572 1580 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:50:28.849660 kubelet[1580]: I0510 00:50:28.849630 1580 policy_none.go:49] "None policy: Start"
May 10 00:50:28.850676 kubelet[1580]: I0510 00:50:28.850633 1580 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 00:50:28.850852 kubelet[1580]: I0510 00:50:28.850829 1580 state_mem.go:35] "Initializing new in-memory state store"
May 10 00:50:28.858802 systemd[1]: Created slice kubepods.slice.
May 10 00:50:28.865769 systemd[1]: Created slice kubepods-burstable.slice.
May 10 00:50:28.873718 systemd[1]: Created slice kubepods-besteffort.slice.
May 10 00:50:28.881510 kubelet[1580]: I0510 00:50:28.881479 1580 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 00:50:28.882618 kubelet[1580]: I0510 00:50:28.882463 1580 eviction_manager.go:189] "Eviction manager: starting control loop"
May 10 00:50:28.882618 kubelet[1580]: I0510 00:50:28.882518 1580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 00:50:28.883833 kubelet[1580]: I0510 00:50:28.883092 1580 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 00:50:28.886613 kubelet[1580]: E0510 00:50:28.886568 1580 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-3yk6k.gb1.brightbox.com\" not found"
May 10 00:50:28.951748 systemd[1]: Created slice kubepods-burstable-podee448bc81d44811781432620768b89c0.slice.
May 10 00:50:28.965599 systemd[1]: Created slice kubepods-burstable-pod128608d1d9a6e7097d11ee19bbe7126b.slice.
May 10 00:50:28.975094 systemd[1]: Created slice kubepods-burstable-pod104ac5a93a2186b0dcfef504f33d4f65.slice.
May 10 00:50:28.985657 kubelet[1580]: I0510 00:50:28.985610 1580 kubelet_node_status.go:72] "Attempting to register node" node="srv-3yk6k.gb1.brightbox.com"
May 10 00:50:28.986128 kubelet[1580]: E0510 00:50:28.986093 1580 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.24.230:6443/api/v1/nodes\": dial tcp 10.244.24.230:6443: connect: connection refused" node="srv-3yk6k.gb1.brightbox.com"
May 10 00:50:28.996101 kubelet[1580]: E0510 00:50:28.996024 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3yk6k.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.230:6443: connect: connection refused" interval="400ms"
May 10 00:50:29.096232 kubelet[1580]: I0510 00:50:29.096143 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-flexvolume-dir\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.096852 kubelet[1580]: I0510 00:50:29.096226 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-k8s-certs\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.096852 kubelet[1580]: I0510 00:50:29.096282 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/104ac5a93a2186b0dcfef504f33d4f65-kubeconfig\") pod \"kube-scheduler-srv-3yk6k.gb1.brightbox.com\" (UID: \"104ac5a93a2186b0dcfef504f33d4f65\") " pod="kube-system/kube-scheduler-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.096852 kubelet[1580]: I0510 00:50:29.096310 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee448bc81d44811781432620768b89c0-k8s-certs\") pod \"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" (UID: \"ee448bc81d44811781432620768b89c0\") " pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.096852 kubelet[1580]: I0510 00:50:29.096357 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-ca-certs\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.096852 kubelet[1580]: I0510 00:50:29.096422 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-kubeconfig\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.097174 kubelet[1580]: I0510 00:50:29.096458 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.097174 kubelet[1580]: I0510 00:50:29.096510 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee448bc81d44811781432620768b89c0-ca-certs\") pod \"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" (UID: \"ee448bc81d44811781432620768b89c0\") " pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.097174 kubelet[1580]: I0510 00:50:29.096539 1580 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee448bc81d44811781432620768b89c0-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" (UID: \"ee448bc81d44811781432620768b89c0\") " pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.189654 kubelet[1580]: I0510 00:50:29.189581 1580 kubelet_node_status.go:72] "Attempting to register node" node="srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.190147 kubelet[1580]: E0510 00:50:29.190100 1580 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.24.230:6443/api/v1/nodes\": dial tcp 10.244.24.230:6443: connect: connection refused" node="srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.265664 env[1199]: time="2025-05-10T00:50:29.264013722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3yk6k.gb1.brightbox.com,Uid:ee448bc81d44811781432620768b89c0,Namespace:kube-system,Attempt:0,}"
May 10 00:50:29.273665 env[1199]: time="2025-05-10T00:50:29.273472626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3yk6k.gb1.brightbox.com,Uid:128608d1d9a6e7097d11ee19bbe7126b,Namespace:kube-system,Attempt:0,}"
May 10 00:50:29.279816 env[1199]: time="2025-05-10T00:50:29.279676492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3yk6k.gb1.brightbox.com,Uid:104ac5a93a2186b0dcfef504f33d4f65,Namespace:kube-system,Attempt:0,}"
May 10 00:50:29.397671 kubelet[1580]: E0510 00:50:29.397574 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3yk6k.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.230:6443: connect: connection refused" interval="800ms"
May 10 00:50:29.593426 kubelet[1580]: I0510 00:50:29.593323 1580 kubelet_node_status.go:72] "Attempting to register node" node="srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.593782 kubelet[1580]: E0510 00:50:29.593718 1580 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.24.230:6443/api/v1/nodes\": dial tcp 10.244.24.230:6443: connect: connection refused" node="srv-3yk6k.gb1.brightbox.com"
May 10 00:50:29.894503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57668383.mount: Deactivated successfully.
May 10 00:50:29.901418 env[1199]: time="2025-05-10T00:50:29.901328564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.905384 env[1199]: time="2025-05-10T00:50:29.905337891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.906712 env[1199]: time="2025-05-10T00:50:29.906675053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.907676 kubelet[1580]: W0510 00:50:29.907527 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.24.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused May 10 00:50:29.907676 kubelet[1580]: E0510 00:50:29.907621 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.24.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError" May 10 00:50:29.908341 env[1199]: time="2025-05-10T00:50:29.908305081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.910507 env[1199]: time="2025-05-10T00:50:29.910474834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 
00:50:29.911562 env[1199]: time="2025-05-10T00:50:29.911529056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.912645 env[1199]: time="2025-05-10T00:50:29.912610175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.913751 env[1199]: time="2025-05-10T00:50:29.913714732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.917990 env[1199]: time="2025-05-10T00:50:29.917952088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.922255 env[1199]: time="2025-05-10T00:50:29.922220063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.923424 env[1199]: time="2025-05-10T00:50:29.923390205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.926436 env[1199]: time="2025-05-10T00:50:29.926398439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:29.936514 kubelet[1580]: W0510 00:50:29.936365 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Node: Get "https://10.244.24.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3yk6k.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused May 10 00:50:29.936514 kubelet[1580]: E0510 00:50:29.936469 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.24.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3yk6k.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError" May 10 00:50:29.971874 env[1199]: time="2025-05-10T00:50:29.971729921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:50:29.972235 env[1199]: time="2025-05-10T00:50:29.972158571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:50:29.972369 env[1199]: time="2025-05-10T00:50:29.972223410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:50:29.972369 env[1199]: time="2025-05-10T00:50:29.972242468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:50:29.972589 env[1199]: time="2025-05-10T00:50:29.972532522Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aa7e0e5d24d02a0babb4ddf61d2245400120c8bfc8660863e6955a241e0dea2 pid=1643 runtime=io.containerd.runc.v2 May 10 00:50:29.972987 env[1199]: time="2025-05-10T00:50:29.972738779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:50:29.972987 env[1199]: time="2025-05-10T00:50:29.972835875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:50:29.974831 env[1199]: time="2025-05-10T00:50:29.973444649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a4b92b4cb2d577760143ad970c87b3322f297af95c96c71ad51add3e370035e pid=1639 runtime=io.containerd.runc.v2 May 10 00:50:29.978296 env[1199]: time="2025-05-10T00:50:29.978219055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:50:29.978416 env[1199]: time="2025-05-10T00:50:29.978309878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:50:29.978416 env[1199]: time="2025-05-10T00:50:29.978358394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:50:29.978626 env[1199]: time="2025-05-10T00:50:29.978564190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a60e6ec951f9df2ad8db858915a7d7f98163ca7bbb5cf8e120c917bff17ed71 pid=1664 runtime=io.containerd.runc.v2 May 10 00:50:30.008341 systemd[1]: Started cri-containerd-4a60e6ec951f9df2ad8db858915a7d7f98163ca7bbb5cf8e120c917bff17ed71.scope. May 10 00:50:30.032410 systemd[1]: Started cri-containerd-4a4b92b4cb2d577760143ad970c87b3322f297af95c96c71ad51add3e370035e.scope. May 10 00:50:30.039188 systemd[1]: Started cri-containerd-7aa7e0e5d24d02a0babb4ddf61d2245400120c8bfc8660863e6955a241e0dea2.scope. 
May 10 00:50:30.126802 kubelet[1580]: W0510 00:50:30.125117 1580 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.24.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused May 10 00:50:30.126802 kubelet[1580]: E0510 00:50:30.125243 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.24.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError" May 10 00:50:30.144543 env[1199]: time="2025-05-10T00:50:30.144458220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3yk6k.gb1.brightbox.com,Uid:128608d1d9a6e7097d11ee19bbe7126b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a60e6ec951f9df2ad8db858915a7d7f98163ca7bbb5cf8e120c917bff17ed71\"" May 10 00:50:30.154518 env[1199]: time="2025-05-10T00:50:30.154412691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3yk6k.gb1.brightbox.com,Uid:ee448bc81d44811781432620768b89c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aa7e0e5d24d02a0babb4ddf61d2245400120c8bfc8660863e6955a241e0dea2\"" May 10 00:50:30.159536 env[1199]: time="2025-05-10T00:50:30.159496819Z" level=info msg="CreateContainer within sandbox \"4a60e6ec951f9df2ad8db858915a7d7f98163ca7bbb5cf8e120c917bff17ed71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:50:30.164007 env[1199]: time="2025-05-10T00:50:30.163965915Z" level=info msg="CreateContainer within sandbox \"7aa7e0e5d24d02a0babb4ddf61d2245400120c8bfc8660863e6955a241e0dea2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:50:30.187035 kubelet[1580]: W0510 00:50:30.186959 1580 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.24.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.24.230:6443: connect: connection refused May 10 00:50:30.187336 kubelet[1580]: E0510 00:50:30.187299 1580 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.24.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError" May 10 00:50:30.194807 env[1199]: time="2025-05-10T00:50:30.194746131Z" level=info msg="CreateContainer within sandbox \"7aa7e0e5d24d02a0babb4ddf61d2245400120c8bfc8660863e6955a241e0dea2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"89de15552ce15500a6d13f764755c46a254a29acf0615efdaecbe661c64a2d54\"" May 10 00:50:30.195686 env[1199]: time="2025-05-10T00:50:30.195639841Z" level=info msg="StartContainer for \"89de15552ce15500a6d13f764755c46a254a29acf0615efdaecbe661c64a2d54\"" May 10 00:50:30.198968 kubelet[1580]: E0510 00:50:30.198915 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.24.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3yk6k.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.24.230:6443: connect: connection refused" interval="1.6s" May 10 00:50:30.200934 env[1199]: time="2025-05-10T00:50:30.200893537Z" level=info msg="CreateContainer within sandbox \"4a60e6ec951f9df2ad8db858915a7d7f98163ca7bbb5cf8e120c917bff17ed71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1bd099c0a7ec7ec4238f20ad8a1742b13cbdf800c41463ba279ce089f972a36e\"" May 10 00:50:30.201445 env[1199]: time="2025-05-10T00:50:30.201409274Z" level=info msg="StartContainer for 
\"1bd099c0a7ec7ec4238f20ad8a1742b13cbdf800c41463ba279ce089f972a36e\"" May 10 00:50:30.207483 env[1199]: time="2025-05-10T00:50:30.207443042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3yk6k.gb1.brightbox.com,Uid:104ac5a93a2186b0dcfef504f33d4f65,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a4b92b4cb2d577760143ad970c87b3322f297af95c96c71ad51add3e370035e\"" May 10 00:50:30.210294 env[1199]: time="2025-05-10T00:50:30.210241504Z" level=info msg="CreateContainer within sandbox \"4a4b92b4cb2d577760143ad970c87b3322f297af95c96c71ad51add3e370035e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:50:30.225508 env[1199]: time="2025-05-10T00:50:30.225451544Z" level=info msg="CreateContainer within sandbox \"4a4b92b4cb2d577760143ad970c87b3322f297af95c96c71ad51add3e370035e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"088b15d2882737c0a696c3e076a62f6a6ebfd79ccb651df79af520e6087c8808\"" May 10 00:50:30.226257 env[1199]: time="2025-05-10T00:50:30.226223716Z" level=info msg="StartContainer for \"088b15d2882737c0a696c3e076a62f6a6ebfd79ccb651df79af520e6087c8808\"" May 10 00:50:30.236465 systemd[1]: Started cri-containerd-1bd099c0a7ec7ec4238f20ad8a1742b13cbdf800c41463ba279ce089f972a36e.scope. May 10 00:50:30.254488 systemd[1]: Started cri-containerd-89de15552ce15500a6d13f764755c46a254a29acf0615efdaecbe661c64a2d54.scope. May 10 00:50:30.280360 systemd[1]: Started cri-containerd-088b15d2882737c0a696c3e076a62f6a6ebfd79ccb651df79af520e6087c8808.scope. 
May 10 00:50:30.367092 env[1199]: time="2025-05-10T00:50:30.366945475Z" level=info msg="StartContainer for \"1bd099c0a7ec7ec4238f20ad8a1742b13cbdf800c41463ba279ce089f972a36e\" returns successfully" May 10 00:50:30.371982 env[1199]: time="2025-05-10T00:50:30.371929250Z" level=info msg="StartContainer for \"89de15552ce15500a6d13f764755c46a254a29acf0615efdaecbe661c64a2d54\" returns successfully" May 10 00:50:30.390074 env[1199]: time="2025-05-10T00:50:30.389826661Z" level=info msg="StartContainer for \"088b15d2882737c0a696c3e076a62f6a6ebfd79ccb651df79af520e6087c8808\" returns successfully" May 10 00:50:30.396778 kubelet[1580]: I0510 00:50:30.396202 1580 kubelet_node_status.go:72] "Attempting to register node" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:30.397118 kubelet[1580]: E0510 00:50:30.397071 1580 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.24.230:6443/api/v1/nodes\": dial tcp 10.244.24.230:6443: connect: connection refused" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:30.898804 kubelet[1580]: E0510 00:50:30.898738 1580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.24.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.24.230:6443: connect: connection refused" logger="UnhandledError" May 10 00:50:32.000160 kubelet[1580]: I0510 00:50:32.000099 1580 kubelet_node_status.go:72] "Attempting to register node" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:33.469023 kubelet[1580]: E0510 00:50:33.468883 1580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-3yk6k.gb1.brightbox.com\" not found" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:33.496027 kubelet[1580]: E0510 00:50:33.495755 1580 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-3yk6k.gb1.brightbox.com.183e041d2a45de0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3yk6k.gb1.brightbox.com,UID:srv-3yk6k.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3yk6k.gb1.brightbox.com,},FirstTimestamp:2025-05-10 00:50:28.776115725 +0000 UTC m=+0.824932355,LastTimestamp:2025-05-10 00:50:28.776115725 +0000 UTC m=+0.824932355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3yk6k.gb1.brightbox.com,}" May 10 00:50:33.541754 kubelet[1580]: I0510 00:50:33.541687 1580 kubelet_node_status.go:75] "Successfully registered node" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:33.556768 kubelet[1580]: E0510 00:50:33.556619 1580 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-3yk6k.gb1.brightbox.com.183e041d2af1a19b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3yk6k.gb1.brightbox.com,UID:srv-3yk6k.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-3yk6k.gb1.brightbox.com,},FirstTimestamp:2025-05-10 00:50:28.787372443 +0000 UTC m=+0.836189068,LastTimestamp:2025-05-10 00:50:28.787372443 +0000 UTC m=+0.836189068,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3yk6k.gb1.brightbox.com,}" May 10 00:50:33.764931 kubelet[1580]: I0510 00:50:33.764777 1580 apiserver.go:52] "Watching apiserver" May 10 00:50:33.795717 kubelet[1580]: I0510 00:50:33.795669 1580 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:50:35.991260 systemd[1]: Reloading. May 10 00:50:36.177195 /usr/lib/systemd/system-generators/torcx-generator[1885]: time="2025-05-10T00:50:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:50:36.177257 /usr/lib/systemd/system-generators/torcx-generator[1885]: time="2025-05-10T00:50:36Z" level=info msg="torcx already run" May 10 00:50:36.317480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:50:36.317523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:50:36.350582 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:50:36.547642 kubelet[1580]: I0510 00:50:36.546949 1580 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:50:36.547096 systemd[1]: Stopping kubelet.service... May 10 00:50:36.573189 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:50:36.573585 systemd[1]: Stopped kubelet.service. May 10 00:50:36.573703 systemd[1]: kubelet.service: Consumed 1.196s CPU time. May 10 00:50:36.577584 systemd[1]: Starting kubelet.service... May 10 00:50:37.823179 systemd[1]: Started kubelet.service. 
May 10 00:50:37.965104 kubelet[1935]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:50:37.965104 kubelet[1935]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:50:37.965104 kubelet[1935]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:50:37.967227 kubelet[1935]: I0510 00:50:37.967162 1935 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:50:37.980008 kubelet[1935]: I0510 00:50:37.979965 1935 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:50:37.980008 kubelet[1935]: I0510 00:50:37.980000 1935 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:50:37.980409 kubelet[1935]: I0510 00:50:37.980380 1935 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:50:37.983888 kubelet[1935]: I0510 00:50:37.983859 1935 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 10 00:50:38.004753 kubelet[1935]: I0510 00:50:38.004705 1935 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:50:38.011051 kubelet[1935]: E0510 00:50:38.010981 1935 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:50:38.011051 kubelet[1935]: I0510 00:50:38.011024 1935 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:50:38.018163 kubelet[1935]: I0510 00:50:38.017173 1935 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:50:38.018163 kubelet[1935]: I0510 00:50:38.017374 1935 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:50:38.018163 kubelet[1935]: I0510 00:50:38.017596 1935 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:50:38.018491 kubelet[1935]: I0510 00:50:38.017638 1935 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-3yk6k.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:50:38.018491 kubelet[1935]: I0510 00:50:38.017906 1935 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:50:38.018491 kubelet[1935]: I0510 00:50:38.017924 1935 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:50:38.018491 kubelet[1935]: I0510 00:50:38.018016 1935 state_mem.go:36] "Initialized new in-memory state store" May 10 00:50:38.020637 kubelet[1935]: I0510 00:50:38.019565 1935 
kubelet.go:408] "Attempting to sync node with API server" May 10 00:50:38.020637 kubelet[1935]: I0510 00:50:38.019599 1935 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:50:38.020637 kubelet[1935]: I0510 00:50:38.019656 1935 kubelet.go:314] "Adding apiserver pod source" May 10 00:50:38.020637 kubelet[1935]: I0510 00:50:38.019684 1935 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:50:38.034291 kubelet[1935]: I0510 00:50:38.034253 1935 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:50:38.035209 kubelet[1935]: I0510 00:50:38.035184 1935 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:50:38.036328 kubelet[1935]: I0510 00:50:38.036294 1935 server.go:1269] "Started kubelet" May 10 00:50:38.048749 kubelet[1935]: I0510 00:50:38.048694 1935 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:50:38.048986 kubelet[1935]: I0510 00:50:38.048955 1935 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:50:38.051848 kubelet[1935]: I0510 00:50:38.051822 1935 server.go:460] "Adding debug handlers to kubelet server" May 10 00:50:38.055532 kubelet[1935]: I0510 00:50:38.055466 1935 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:50:38.055930 kubelet[1935]: I0510 00:50:38.055905 1935 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:50:38.062486 kubelet[1935]: I0510 00:50:38.062446 1935 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:50:38.064055 sudo[1949]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:50:38.064515 
sudo[1949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:50:38.067084 kubelet[1935]: I0510 00:50:38.067015 1935 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:50:38.067466 kubelet[1935]: E0510 00:50:38.067435 1935 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3yk6k.gb1.brightbox.com\" not found" May 10 00:50:38.099478 kubelet[1935]: I0510 00:50:38.099349 1935 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:50:38.110651 kubelet[1935]: I0510 00:50:38.107799 1935 reconciler.go:26] "Reconciler: start to sync state" May 10 00:50:38.114716 kubelet[1935]: I0510 00:50:38.114681 1935 factory.go:221] Registration of the systemd container factory successfully May 10 00:50:38.114843 kubelet[1935]: I0510 00:50:38.114801 1935 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:50:38.121557 kubelet[1935]: I0510 00:50:38.121526 1935 factory.go:221] Registration of the containerd container factory successfully May 10 00:50:38.128302 kubelet[1935]: E0510 00:50:38.124451 1935 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:50:38.149601 kubelet[1935]: I0510 00:50:38.149555 1935 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:50:38.151646 kubelet[1935]: I0510 00:50:38.151621 1935 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:50:38.151840 kubelet[1935]: I0510 00:50:38.151816 1935 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:50:38.152108 kubelet[1935]: I0510 00:50:38.152085 1935 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:50:38.152390 kubelet[1935]: E0510 00:50:38.152351 1935 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:50:38.234655 kubelet[1935]: I0510 00:50:38.234601 1935 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:50:38.235217 kubelet[1935]: I0510 00:50:38.235161 1935 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:50:38.235416 kubelet[1935]: I0510 00:50:38.235393 1935 state_mem.go:36] "Initialized new in-memory state store" May 10 00:50:38.235867 kubelet[1935]: I0510 00:50:38.235836 1935 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:50:38.236209 kubelet[1935]: I0510 00:50:38.236166 1935 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:50:38.236440 kubelet[1935]: I0510 00:50:38.236389 1935 policy_none.go:49] "None policy: Start" May 10 00:50:38.237547 kubelet[1935]: I0510 00:50:38.237523 1935 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:50:38.237709 kubelet[1935]: I0510 00:50:38.237686 1935 state_mem.go:35] "Initializing new in-memory state store" May 10 00:50:38.238015 kubelet[1935]: I0510 00:50:38.237991 1935 state_mem.go:75] "Updated machine memory state" May 10 00:50:38.246521 kubelet[1935]: I0510 00:50:38.246491 1935 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:50:38.246957 kubelet[1935]: I0510 00:50:38.246933 1935 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:50:38.247149 kubelet[1935]: I0510 00:50:38.247099 1935 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:50:38.248420 kubelet[1935]: I0510 00:50:38.248396 1935 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:50:38.285833 kubelet[1935]: W0510 00:50:38.285793 1935 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:50:38.297641 kubelet[1935]: W0510 00:50:38.286960 1935 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:50:38.298001 kubelet[1935]: W0510 00:50:38.286997 1935 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:50:38.313342 kubelet[1935]: I0510 00:50:38.313287 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee448bc81d44811781432620768b89c0-k8s-certs\") pod \"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" (UID: \"ee448bc81d44811781432620768b89c0\") " pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.313576 kubelet[1935]: I0510 00:50:38.313539 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-ca-certs\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.313747 kubelet[1935]: I0510 00:50:38.313715 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-k8s-certs\") pod 
\"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.313971 kubelet[1935]: I0510 00:50:38.313942 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-kubeconfig\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.314265 kubelet[1935]: I0510 00:50:38.314236 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.314432 kubelet[1935]: I0510 00:50:38.314403 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee448bc81d44811781432620768b89c0-ca-certs\") pod \"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" (UID: \"ee448bc81d44811781432620768b89c0\") " pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.314571 kubelet[1935]: I0510 00:50:38.314544 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee448bc81d44811781432620768b89c0-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" (UID: \"ee448bc81d44811781432620768b89c0\") " pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.314709 kubelet[1935]: I0510 00:50:38.314682 
1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/128608d1d9a6e7097d11ee19bbe7126b-flexvolume-dir\") pod \"kube-controller-manager-srv-3yk6k.gb1.brightbox.com\" (UID: \"128608d1d9a6e7097d11ee19bbe7126b\") " pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.314859 kubelet[1935]: I0510 00:50:38.314832 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/104ac5a93a2186b0dcfef504f33d4f65-kubeconfig\") pod \"kube-scheduler-srv-3yk6k.gb1.brightbox.com\" (UID: \"104ac5a93a2186b0dcfef504f33d4f65\") " pod="kube-system/kube-scheduler-srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.379853 kubelet[1935]: I0510 00:50:38.377747 1935 kubelet_node_status.go:72] "Attempting to register node" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.393585 kubelet[1935]: I0510 00:50:38.393535 1935 kubelet_node_status.go:111] "Node was previously registered" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:38.393947 kubelet[1935]: I0510 00:50:38.393925 1935 kubelet_node_status.go:75] "Successfully registered node" node="srv-3yk6k.gb1.brightbox.com" May 10 00:50:39.033696 kubelet[1935]: I0510 00:50:39.033627 1935 apiserver.go:52] "Watching apiserver" May 10 00:50:39.067975 kubelet[1935]: I0510 00:50:39.067934 1935 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:50:39.113457 sudo[1949]: pam_unix(sudo:session): session closed for user root May 10 00:50:39.211335 kubelet[1935]: W0510 00:50:39.211241 1935 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 10 00:50:39.211560 kubelet[1935]: E0510 00:50:39.211408 1935 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-srv-3yk6k.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com" May 10 00:50:39.230190 kubelet[1935]: I0510 00:50:39.230008 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-3yk6k.gb1.brightbox.com" podStartSLOduration=1.229948175 podStartE2EDuration="1.229948175s" podCreationTimestamp="2025-05-10 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:50:39.22967055 +0000 UTC m=+1.375896171" watchObservedRunningTime="2025-05-10 00:50:39.229948175 +0000 UTC m=+1.376173792" May 10 00:50:39.230467 kubelet[1935]: I0510 00:50:39.230266 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-3yk6k.gb1.brightbox.com" podStartSLOduration=1.230256967 podStartE2EDuration="1.230256967s" podCreationTimestamp="2025-05-10 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:50:39.215949048 +0000 UTC m=+1.362174665" watchObservedRunningTime="2025-05-10 00:50:39.230256967 +0000 UTC m=+1.376482851" May 10 00:50:39.249984 kubelet[1935]: I0510 00:50:39.249720 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-3yk6k.gb1.brightbox.com" podStartSLOduration=1.249704274 podStartE2EDuration="1.249704274s" podCreationTimestamp="2025-05-10 00:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:50:39.248628563 +0000 UTC m=+1.394854182" watchObservedRunningTime="2025-05-10 00:50:39.249704274 +0000 UTC m=+1.395929897" May 10 00:50:40.810603 kubelet[1935]: I0510 00:50:40.810534 1935 kuberuntime_manager.go:1633] "Updating runtime config through cri with 
podcidr" CIDR="192.168.0.0/24" May 10 00:50:40.812226 env[1199]: time="2025-05-10T00:50:40.812130238Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:50:40.813311 kubelet[1935]: I0510 00:50:40.813272 1935 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:50:41.222449 sudo[1323]: pam_unix(sudo:session): session closed for user root May 10 00:50:41.368515 sshd[1320]: pam_unix(sshd:session): session closed for user core May 10 00:50:41.374752 systemd[1]: sshd@4-10.244.24.230:22-139.178.68.195:59954.service: Deactivated successfully. May 10 00:50:41.376979 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:50:41.377377 systemd[1]: session-5.scope: Consumed 7.202s CPU time. May 10 00:50:41.378207 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit. May 10 00:50:41.380520 systemd-logind[1189]: Removed session 5. May 10 00:50:41.563320 systemd[1]: Created slice kubepods-besteffort-pod7fac6cb2_9d21_4969_892e_55c389516aa9.slice. May 10 00:50:41.603894 systemd[1]: Created slice kubepods-burstable-pod4425e984_4cd1_4e94_b859_b856b63d825d.slice. 
May 10 00:50:41.618195 kubelet[1935]: W0510 00:50:41.618150 1935 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-3yk6k.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-3yk6k.gb1.brightbox.com' and this object May 10 00:50:41.618528 kubelet[1935]: E0510 00:50:41.618489 1935 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-3yk6k.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-3yk6k.gb1.brightbox.com' and this object" logger="UnhandledError" May 10 00:50:41.618667 kubelet[1935]: W0510 00:50:41.618159 1935 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-3yk6k.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-3yk6k.gb1.brightbox.com' and this object May 10 00:50:41.618815 kubelet[1935]: E0510 00:50:41.618784 1935 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-3yk6k.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-3yk6k.gb1.brightbox.com' and this object" logger="UnhandledError" May 10 00:50:41.637593 kubelet[1935]: I0510 00:50:41.637525 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/7fac6cb2-9d21-4969-892e-55c389516aa9-kube-proxy\") pod \"kube-proxy-dk854\" (UID: \"7fac6cb2-9d21-4969-892e-55c389516aa9\") " pod="kube-system/kube-proxy-dk854" May 10 00:50:41.637593 kubelet[1935]: I0510 00:50:41.637594 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fac6cb2-9d21-4969-892e-55c389516aa9-lib-modules\") pod \"kube-proxy-dk854\" (UID: \"7fac6cb2-9d21-4969-892e-55c389516aa9\") " pod="kube-system/kube-proxy-dk854" May 10 00:50:41.637893 kubelet[1935]: I0510 00:50:41.637625 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fac6cb2-9d21-4969-892e-55c389516aa9-xtables-lock\") pod \"kube-proxy-dk854\" (UID: \"7fac6cb2-9d21-4969-892e-55c389516aa9\") " pod="kube-system/kube-proxy-dk854" May 10 00:50:41.637893 kubelet[1935]: I0510 00:50:41.637654 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6f6\" (UniqueName: \"kubernetes.io/projected/7fac6cb2-9d21-4969-892e-55c389516aa9-kube-api-access-gt6f6\") pod \"kube-proxy-dk854\" (UID: \"7fac6cb2-9d21-4969-892e-55c389516aa9\") " pod="kube-system/kube-proxy-dk854" May 10 00:50:41.738155 kubelet[1935]: I0510 00:50:41.738070 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-kernel\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738448 kubelet[1935]: I0510 00:50:41.738153 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-hubble-tls\") pod 
\"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738448 kubelet[1935]: I0510 00:50:41.738217 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-run\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738448 kubelet[1935]: I0510 00:50:41.738276 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4425e984-4cd1-4e94-b859-b856b63d825d-clustermesh-secrets\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738448 kubelet[1935]: I0510 00:50:41.738322 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-lib-modules\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738448 kubelet[1935]: I0510 00:50:41.738414 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-hostproc\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738752 kubelet[1935]: I0510 00:50:41.738466 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25st\" (UniqueName: \"kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-kube-api-access-p25st\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738752 
kubelet[1935]: I0510 00:50:41.738494 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-net\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738752 kubelet[1935]: I0510 00:50:41.738538 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-etc-cni-netd\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738752 kubelet[1935]: I0510 00:50:41.738565 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-config-path\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738752 kubelet[1935]: I0510 00:50:41.738635 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-bpf-maps\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.738752 kubelet[1935]: I0510 00:50:41.738663 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-xtables-lock\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.739148 kubelet[1935]: I0510 00:50:41.738772 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-cgroup\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.739148 kubelet[1935]: I0510 00:50:41.738806 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cni-path\") pod \"cilium-tfsjd\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " pod="kube-system/cilium-tfsjd" May 10 00:50:41.775169 kubelet[1935]: E0510 00:50:41.775105 1935 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 10 00:50:41.775371 kubelet[1935]: E0510 00:50:41.775176 1935 projected.go:194] Error preparing data for projected volume kube-api-access-gt6f6 for pod kube-system/kube-proxy-dk854: configmap "kube-root-ca.crt" not found May 10 00:50:41.776242 kubelet[1935]: E0510 00:50:41.776192 1935 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7fac6cb2-9d21-4969-892e-55c389516aa9-kube-api-access-gt6f6 podName:7fac6cb2-9d21-4969-892e-55c389516aa9 nodeName:}" failed. No retries permitted until 2025-05-10 00:50:42.275312767 +0000 UTC m=+4.421538387 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gt6f6" (UniqueName: "kubernetes.io/projected/7fac6cb2-9d21-4969-892e-55c389516aa9-kube-api-access-gt6f6") pod "kube-proxy-dk854" (UID: "7fac6cb2-9d21-4969-892e-55c389516aa9") : configmap "kube-root-ca.crt" not found May 10 00:50:41.812275 kubelet[1935]: E0510 00:50:41.812200 1935 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-p25st lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-tfsjd" podUID="4425e984-4cd1-4e94-b859-b856b63d825d" May 10 00:50:41.840813 kubelet[1935]: I0510 00:50:41.840763 1935 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 10 00:50:41.907343 systemd[1]: Created slice kubepods-besteffort-pod5c6028e2_a347_4522_b747_9b3a28f9776d.slice. 
May 10 00:50:42.042265 kubelet[1935]: I0510 00:50:42.042214 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c6028e2-a347-4522-b747-9b3a28f9776d-cilium-config-path\") pod \"cilium-operator-5d85765b45-djs6m\" (UID: \"5c6028e2-a347-4522-b747-9b3a28f9776d\") " pod="kube-system/cilium-operator-5d85765b45-djs6m" May 10 00:50:42.042607 kubelet[1935]: I0510 00:50:42.042570 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hqtg\" (UniqueName: \"kubernetes.io/projected/5c6028e2-a347-4522-b747-9b3a28f9776d-kube-api-access-5hqtg\") pod \"cilium-operator-5d85765b45-djs6m\" (UID: \"5c6028e2-a347-4522-b747-9b3a28f9776d\") " pod="kube-system/cilium-operator-5d85765b45-djs6m" May 10 00:50:42.345336 kubelet[1935]: I0510 00:50:42.345289 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cni-path\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.345683 kubelet[1935]: I0510 00:50:42.345417 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cni-path" (OuterVolumeSpecName: "cni-path") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.345808 kubelet[1935]: I0510 00:50:42.345662 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-cgroup\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.345970 kubelet[1935]: I0510 00:50:42.345942 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-run\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.346179 kubelet[1935]: I0510 00:50:42.346152 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-kernel\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.346393 kubelet[1935]: I0510 00:50:42.346367 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25st\" (UniqueName: \"kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-kube-api-access-p25st\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.346542 kubelet[1935]: I0510 00:50:42.346515 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-net\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.346675 kubelet[1935]: I0510 00:50:42.346650 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-etc-cni-netd\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.346842 kubelet[1935]: I0510 00:50:42.346816 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-xtables-lock\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.347002 kubelet[1935]: I0510 00:50:42.346976 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4425e984-4cd1-4e94-b859-b856b63d825d-clustermesh-secrets\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.347172 kubelet[1935]: I0510 00:50:42.347146 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-lib-modules\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.347341 kubelet[1935]: I0510 00:50:42.347307 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-hostproc\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.347491 kubelet[1935]: I0510 00:50:42.347462 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-bpf-maps\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.347736 kubelet[1935]: I0510 00:50:42.347709 1935 
reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cni-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.351831 systemd[1]: var-lib-kubelet-pods-4425e984\x2d4cd1\x2d4e94\x2db859\x2db856b63d825d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp25st.mount: Deactivated successfully. May 10 00:50:42.353821 kubelet[1935]: I0510 00:50:42.345782 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.353821 kubelet[1935]: I0510 00:50:42.346012 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.353993 kubelet[1935]: I0510 00:50:42.346221 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.353993 kubelet[1935]: I0510 00:50:42.353747 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-kube-api-access-p25st" (OuterVolumeSpecName: "kube-api-access-p25st") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "kube-api-access-p25st". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:50:42.353993 kubelet[1935]: I0510 00:50:42.353788 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.353993 kubelet[1935]: I0510 00:50:42.353880 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.353993 kubelet[1935]: I0510 00:50:42.353913 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.353993 kubelet[1935]: I0510 00:50:42.353946 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.359018 kubelet[1935]: I0510 00:50:42.358988 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-hostproc" (OuterVolumeSpecName: "hostproc") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.359398 kubelet[1935]: I0510 00:50:42.359284 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:50:42.362397 kubelet[1935]: I0510 00:50:42.362351 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4425e984-4cd1-4e94-b859-b856b63d825d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:50:42.448494 kubelet[1935]: I0510 00:50:42.448438 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-cgroup\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.448794 kubelet[1935]: I0510 00:50:42.448768 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-run\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.448936 kubelet[1935]: I0510 00:50:42.448910 1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-kernel\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449084 kubelet[1935]: I0510 00:50:42.449029 1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p25st\" (UniqueName: \"kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-kube-api-access-p25st\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449223 kubelet[1935]: I0510 00:50:42.449198 1935 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-etc-cni-netd\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449382 kubelet[1935]: I0510 00:50:42.449359 1935 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-xtables-lock\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449525 kubelet[1935]: I0510 00:50:42.449501 1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-host-proc-sys-net\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449660 kubelet[1935]: I0510 00:50:42.449635 1935 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-lib-modules\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449773 kubelet[1935]: I0510 00:50:42.449750 1935 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4425e984-4cd1-4e94-b859-b856b63d825d-clustermesh-secrets\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.449916 kubelet[1935]: I0510 00:50:42.449888 1935 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-hostproc\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.450031 kubelet[1935]: I0510 00:50:42.450008 1935 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4425e984-4cd1-4e94-b859-b856b63d825d-bpf-maps\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:42.474212 env[1199]: time="2025-05-10T00:50:42.473771176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dk854,Uid:7fac6cb2-9d21-4969-892e-55c389516aa9,Namespace:kube-system,Attempt:0,}" May 10 00:50:42.504608 env[1199]: time="2025-05-10T00:50:42.504430532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:50:42.504608 env[1199]: time="2025-05-10T00:50:42.504538106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:50:42.504608 env[1199]: time="2025-05-10T00:50:42.504556559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:50:42.505568 env[1199]: time="2025-05-10T00:50:42.505458122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bd56fa390f4a616078e377b66fab555bd4d77930f2f2d9968cb28766e1ac7c5 pid=2020 runtime=io.containerd.runc.v2 May 10 00:50:42.532030 systemd[1]: Started cri-containerd-9bd56fa390f4a616078e377b66fab555bd4d77930f2f2d9968cb28766e1ac7c5.scope. May 10 00:50:42.580448 env[1199]: time="2025-05-10T00:50:42.580347490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dk854,Uid:7fac6cb2-9d21-4969-892e-55c389516aa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bd56fa390f4a616078e377b66fab555bd4d77930f2f2d9968cb28766e1ac7c5\"" May 10 00:50:42.586854 env[1199]: time="2025-05-10T00:50:42.586796599Z" level=info msg="CreateContainer within sandbox \"9bd56fa390f4a616078e377b66fab555bd4d77930f2f2d9968cb28766e1ac7c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:50:42.608199 env[1199]: time="2025-05-10T00:50:42.608003406Z" level=info msg="CreateContainer within sandbox \"9bd56fa390f4a616078e377b66fab555bd4d77930f2f2d9968cb28766e1ac7c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ff2fbe2729999e15390ba9dca96740e1fc98ead2bc6cdfb80c7a44fca28346c\"" May 10 00:50:42.610112 env[1199]: time="2025-05-10T00:50:42.610077151Z" level=info msg="StartContainer for \"2ff2fbe2729999e15390ba9dca96740e1fc98ead2bc6cdfb80c7a44fca28346c\"" May 10 00:50:42.639067 systemd[1]: Started cri-containerd-2ff2fbe2729999e15390ba9dca96740e1fc98ead2bc6cdfb80c7a44fca28346c.scope. 
May 10 00:50:42.720843 env[1199]: time="2025-05-10T00:50:42.720772136Z" level=info msg="StartContainer for \"2ff2fbe2729999e15390ba9dca96740e1fc98ead2bc6cdfb80c7a44fca28346c\" returns successfully" May 10 00:50:42.841620 kubelet[1935]: E0510 00:50:42.841567 1935 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 10 00:50:42.842368 kubelet[1935]: E0510 00:50:42.842342 1935 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-config-path podName:4425e984-4cd1-4e94-b859-b856b63d825d nodeName:}" failed. No retries permitted until 2025-05-10 00:50:43.342316214 +0000 UTC m=+5.488541826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-config-path") pod "cilium-tfsjd" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d") : failed to sync configmap cache: timed out waiting for the condition May 10 00:50:42.854487 systemd[1]: var-lib-kubelet-pods-4425e984\x2d4cd1\x2d4e94\x2db859\x2db856b63d825d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:50:42.954148 kubelet[1935]: I0510 00:50:42.953976 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-hubble-tls\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:42.964178 systemd[1]: var-lib-kubelet-pods-4425e984\x2d4cd1\x2d4e94\x2db859\x2db856b63d825d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:50:42.969701 kubelet[1935]: I0510 00:50:42.969645 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:50:43.055278 kubelet[1935]: I0510 00:50:43.055216 1935 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4425e984-4cd1-4e94-b859-b856b63d825d-hubble-tls\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:43.200736 systemd[1]: Removed slice kubepods-burstable-pod4425e984_4cd1_4e94_b859_b856b63d825d.slice. May 10 00:50:43.278340 kubelet[1935]: I0510 00:50:43.278129 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dk854" podStartSLOduration=2.278096025 podStartE2EDuration="2.278096025s" podCreationTimestamp="2025-05-10 00:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:50:43.254157556 +0000 UTC m=+5.400383182" watchObservedRunningTime="2025-05-10 00:50:43.278096025 +0000 UTC m=+5.424321643" May 10 00:50:43.316438 systemd[1]: Created slice kubepods-burstable-pod31a4a06c_35f8_495c_9895_89674a12a81c.slice. May 10 00:50:43.412881 env[1199]: time="2025-05-10T00:50:43.412791756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-djs6m,Uid:5c6028e2-a347-4522-b747-9b3a28f9776d,Namespace:kube-system,Attempt:0,}" May 10 00:50:43.454019 env[1199]: time="2025-05-10T00:50:43.453897527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:50:43.454019 env[1199]: time="2025-05-10T00:50:43.453971523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:50:43.454415 env[1199]: time="2025-05-10T00:50:43.454336670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:50:43.454842 env[1199]: time="2025-05-10T00:50:43.454777485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81 pid=2173 runtime=io.containerd.runc.v2 May 10 00:50:43.465959 kubelet[1935]: I0510 00:50:43.460983 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-config-path\") pod \"4425e984-4cd1-4e94-b859-b856b63d825d\" (UID: \"4425e984-4cd1-4e94-b859-b856b63d825d\") " May 10 00:50:43.466135 kubelet[1935]: I0510 00:50:43.466007 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-bpf-maps\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466135 kubelet[1935]: I0510 00:50:43.466064 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-etc-cni-netd\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466135 kubelet[1935]: I0510 00:50:43.466101 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cni-path\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466135 kubelet[1935]: I0510 00:50:43.466128 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-xtables-lock\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466153 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-config-path\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466178 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn27w\" (UniqueName: \"kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-kube-api-access-mn27w\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466203 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-hostproc\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466229 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-cgroup\") pod 
\"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466256 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-lib-modules\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466313 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-net\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466344 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-hubble-tls\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466370 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-run\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 kubelet[1935]: I0510 00:50:43.466396 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31a4a06c-35f8-495c-9895-89674a12a81c-clustermesh-secrets\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.466433 
kubelet[1935]: I0510 00:50:43.466433 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-kernel\") pod \"cilium-fzqrg\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") " pod="kube-system/cilium-fzqrg" May 10 00:50:43.467003 kubelet[1935]: I0510 00:50:43.465902 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4425e984-4cd1-4e94-b859-b856b63d825d" (UID: "4425e984-4cd1-4e94-b859-b856b63d825d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:50:43.491629 systemd[1]: Started cri-containerd-927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81.scope. May 10 00:50:43.568127 kubelet[1935]: I0510 00:50:43.568085 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4425e984-4cd1-4e94-b859-b856b63d825d-cilium-config-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:50:43.604272 env[1199]: time="2025-05-10T00:50:43.604144644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-djs6m,Uid:5c6028e2-a347-4522-b747-9b3a28f9776d,Namespace:kube-system,Attempt:0,} returns sandbox id \"927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81\"" May 10 00:50:43.607439 env[1199]: time="2025-05-10T00:50:43.607391871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:50:43.620659 env[1199]: time="2025-05-10T00:50:43.620583709Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-fzqrg,Uid:31a4a06c-35f8-495c-9895-89674a12a81c,Namespace:kube-system,Attempt:0,}" May 10 00:50:43.642772 env[1199]: time="2025-05-10T00:50:43.642461689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:50:43.642772 env[1199]: time="2025-05-10T00:50:43.642536842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:50:43.642772 env[1199]: time="2025-05-10T00:50:43.642562010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:50:43.643347 env[1199]: time="2025-05-10T00:50:43.643292960Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794 pid=2256 runtime=io.containerd.runc.v2 May 10 00:50:43.665587 systemd[1]: Started cri-containerd-b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794.scope. May 10 00:50:43.712095 env[1199]: time="2025-05-10T00:50:43.711347243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fzqrg,Uid:31a4a06c-35f8-495c-9895-89674a12a81c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\"" May 10 00:50:44.157595 kubelet[1935]: I0510 00:50:44.157523 1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4425e984-4cd1-4e94-b859-b856b63d825d" path="/var/lib/kubelet/pods/4425e984-4cd1-4e94-b859-b856b63d825d/volumes" May 10 00:50:45.432407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660769632.mount: Deactivated successfully. 
May 10 00:50:46.668090 env[1199]: time="2025-05-10T00:50:46.667602238Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:46.673517 env[1199]: time="2025-05-10T00:50:46.673477221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:46.678501 env[1199]: time="2025-05-10T00:50:46.678462449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:50:46.679397 env[1199]: time="2025-05-10T00:50:46.679331317Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 00:50:46.683131 env[1199]: time="2025-05-10T00:50:46.683091170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:50:46.701013 env[1199]: time="2025-05-10T00:50:46.700579196Z" level=info msg="CreateContainer within sandbox \"927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:50:46.718100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761265840.mount: Deactivated successfully. May 10 00:50:46.727703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262971563.mount: Deactivated successfully. 
May 10 00:50:46.733587 env[1199]: time="2025-05-10T00:50:46.733538019Z" level=info msg="CreateContainer within sandbox \"927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\"" May 10 00:50:46.736112 env[1199]: time="2025-05-10T00:50:46.734956399Z" level=info msg="StartContainer for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\"" May 10 00:50:46.766635 systemd[1]: Started cri-containerd-3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f.scope. May 10 00:50:46.818076 env[1199]: time="2025-05-10T00:50:46.817492530Z" level=info msg="StartContainer for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" returns successfully" May 10 00:50:47.385720 kubelet[1935]: I0510 00:50:47.385607 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-djs6m" podStartSLOduration=3.309365685 podStartE2EDuration="6.385529082s" podCreationTimestamp="2025-05-10 00:50:41 +0000 UTC" firstStartedPulling="2025-05-10 00:50:43.606492965 +0000 UTC m=+5.752718583" lastFinishedPulling="2025-05-10 00:50:46.682656367 +0000 UTC m=+8.828881980" observedRunningTime="2025-05-10 00:50:47.284439006 +0000 UTC m=+9.430664638" watchObservedRunningTime="2025-05-10 00:50:47.385529082 +0000 UTC m=+9.531754701" May 10 00:50:54.867456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724546977.mount: Deactivated successfully. 
May 10 00:51:00.129156 env[1199]: time="2025-05-10T00:51:00.128981869Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.131943 env[1199]: time="2025-05-10T00:51:00.131900855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.134715 env[1199]: time="2025-05-10T00:51:00.134678173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:51:00.135659 env[1199]: time="2025-05-10T00:51:00.135589576Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:51:00.179147 env[1199]: time="2025-05-10T00:51:00.179085725Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:51:00.196232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773141990.mount: Deactivated successfully. May 10 00:51:00.209346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115294453.mount: Deactivated successfully. 
May 10 00:51:00.212294 env[1199]: time="2025-05-10T00:51:00.212226811Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\"" May 10 00:51:00.214269 env[1199]: time="2025-05-10T00:51:00.214203560Z" level=info msg="StartContainer for \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\"" May 10 00:51:00.272432 systemd[1]: Started cri-containerd-692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5.scope. May 10 00:51:00.330982 env[1199]: time="2025-05-10T00:51:00.330924896Z" level=info msg="StartContainer for \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\" returns successfully" May 10 00:51:00.349095 systemd[1]: cri-containerd-692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5.scope: Deactivated successfully. May 10 00:51:00.496610 env[1199]: time="2025-05-10T00:51:00.494905058Z" level=info msg="shim disconnected" id=692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5 May 10 00:51:00.497350 env[1199]: time="2025-05-10T00:51:00.496997865Z" level=warning msg="cleaning up after shim disconnected" id=692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5 namespace=k8s.io May 10 00:51:00.497773 env[1199]: time="2025-05-10T00:51:00.497022966Z" level=info msg="cleaning up dead shim" May 10 00:51:00.513785 env[1199]: time="2025-05-10T00:51:00.513713746Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2397 runtime=io.containerd.runc.v2\n" May 10 00:51:01.192571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5-rootfs.mount: Deactivated successfully. 
May 10 00:51:01.292326 env[1199]: time="2025-05-10T00:51:01.292267915Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:51:01.307833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243009447.mount: Deactivated successfully. May 10 00:51:01.317444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364824816.mount: Deactivated successfully. May 10 00:51:01.324418 env[1199]: time="2025-05-10T00:51:01.324362876Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\"" May 10 00:51:01.325622 env[1199]: time="2025-05-10T00:51:01.325571105Z" level=info msg="StartContainer for \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\"" May 10 00:51:01.359902 systemd[1]: Started cri-containerd-c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78.scope. May 10 00:51:01.415106 env[1199]: time="2025-05-10T00:51:01.414988091Z" level=info msg="StartContainer for \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\" returns successfully" May 10 00:51:01.459338 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:51:01.459736 systemd[1]: Stopped systemd-sysctl.service. May 10 00:51:01.462628 systemd[1]: Stopping systemd-sysctl.service... May 10 00:51:01.466538 systemd[1]: Starting systemd-sysctl.service... May 10 00:51:01.468059 systemd[1]: cri-containerd-c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78.scope: Deactivated successfully. May 10 00:51:01.507215 systemd[1]: Finished systemd-sysctl.service. 
May 10 00:51:01.513103 env[1199]: time="2025-05-10T00:51:01.513003759Z" level=info msg="shim disconnected" id=c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78 May 10 00:51:01.513490 env[1199]: time="2025-05-10T00:51:01.513443299Z" level=warning msg="cleaning up after shim disconnected" id=c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78 namespace=k8s.io May 10 00:51:01.513640 env[1199]: time="2025-05-10T00:51:01.513610826Z" level=info msg="cleaning up dead shim" May 10 00:51:01.526004 env[1199]: time="2025-05-10T00:51:01.525905307Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2459 runtime=io.containerd.runc.v2\n" May 10 00:51:02.294382 env[1199]: time="2025-05-10T00:51:02.292962045Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:51:02.313669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164595039.mount: Deactivated successfully. May 10 00:51:02.324828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775458185.mount: Deactivated successfully. May 10 00:51:02.329029 env[1199]: time="2025-05-10T00:51:02.328963775Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\"" May 10 00:51:02.332559 env[1199]: time="2025-05-10T00:51:02.332520127Z" level=info msg="StartContainer for \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\"" May 10 00:51:02.369348 systemd[1]: Started cri-containerd-c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf.scope. 
May 10 00:51:02.434096 env[1199]: time="2025-05-10T00:51:02.433297847Z" level=info msg="StartContainer for \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\" returns successfully" May 10 00:51:02.441062 systemd[1]: cri-containerd-c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf.scope: Deactivated successfully. May 10 00:51:02.471424 env[1199]: time="2025-05-10T00:51:02.471360374Z" level=info msg="shim disconnected" id=c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf May 10 00:51:02.471424 env[1199]: time="2025-05-10T00:51:02.471425521Z" level=warning msg="cleaning up after shim disconnected" id=c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf namespace=k8s.io May 10 00:51:02.471424 env[1199]: time="2025-05-10T00:51:02.471443757Z" level=info msg="cleaning up dead shim" May 10 00:51:02.486928 env[1199]: time="2025-05-10T00:51:02.486867356Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2515 runtime=io.containerd.runc.v2\n" May 10 00:51:03.298718 env[1199]: time="2025-05-10T00:51:03.298545069Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:51:03.329986 env[1199]: time="2025-05-10T00:51:03.329881356Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\"" May 10 00:51:03.330935 env[1199]: time="2025-05-10T00:51:03.330882303Z" level=info msg="StartContainer for \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\"" May 10 00:51:03.367995 systemd[1]: Started cri-containerd-633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87.scope. 
May 10 00:51:03.414293 systemd[1]: cri-containerd-633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87.scope: Deactivated successfully. May 10 00:51:03.417305 env[1199]: time="2025-05-10T00:51:03.417251007Z" level=info msg="StartContainer for \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\" returns successfully" May 10 00:51:03.456929 env[1199]: time="2025-05-10T00:51:03.456869047Z" level=info msg="shim disconnected" id=633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87 May 10 00:51:03.457381 env[1199]: time="2025-05-10T00:51:03.457348513Z" level=warning msg="cleaning up after shim disconnected" id=633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87 namespace=k8s.io May 10 00:51:03.457532 env[1199]: time="2025-05-10T00:51:03.457499935Z" level=info msg="cleaning up dead shim" May 10 00:51:03.468425 env[1199]: time="2025-05-10T00:51:03.468362893Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:51:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2570 runtime=io.containerd.runc.v2\n" May 10 00:51:04.192598 systemd[1]: run-containerd-runc-k8s.io-633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87-runc.tVu3A0.mount: Deactivated successfully. May 10 00:51:04.192760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87-rootfs.mount: Deactivated successfully. 
May 10 00:51:04.305416 env[1199]: time="2025-05-10T00:51:04.305358112Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:51:04.351860 env[1199]: time="2025-05-10T00:51:04.351770964Z" level=info msg="CreateContainer within sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\"" May 10 00:51:04.354914 env[1199]: time="2025-05-10T00:51:04.354866109Z" level=info msg="StartContainer for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\"" May 10 00:51:04.385228 systemd[1]: Started cri-containerd-7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5.scope. May 10 00:51:04.432494 env[1199]: time="2025-05-10T00:51:04.432427325Z" level=info msg="StartContainer for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" returns successfully" May 10 00:51:04.684412 kubelet[1935]: I0510 00:51:04.682289 1935 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 10 00:51:04.781598 systemd[1]: Created slice kubepods-burstable-podbf4f164d_0829_46bf_adf6_5b9294b9d3b5.slice. May 10 00:51:04.792121 systemd[1]: Created slice kubepods-burstable-poda94357ac_de38_49c6_b7ba_5bdf4883e9c8.slice. 
May 10 00:51:04.850299 kubelet[1935]: I0510 00:51:04.850237 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjqk\" (UniqueName: \"kubernetes.io/projected/a94357ac-de38-49c6-b7ba-5bdf4883e9c8-kube-api-access-xcjqk\") pod \"coredns-6f6b679f8f-ltr4f\" (UID: \"a94357ac-de38-49c6-b7ba-5bdf4883e9c8\") " pod="kube-system/coredns-6f6b679f8f-ltr4f"
May 10 00:51:04.850573 kubelet[1935]: I0510 00:51:04.850322 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a94357ac-de38-49c6-b7ba-5bdf4883e9c8-config-volume\") pod \"coredns-6f6b679f8f-ltr4f\" (UID: \"a94357ac-de38-49c6-b7ba-5bdf4883e9c8\") " pod="kube-system/coredns-6f6b679f8f-ltr4f"
May 10 00:51:04.850573 kubelet[1935]: I0510 00:51:04.850367 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf4f164d-0829-46bf-adf6-5b9294b9d3b5-config-volume\") pod \"coredns-6f6b679f8f-tgr77\" (UID: \"bf4f164d-0829-46bf-adf6-5b9294b9d3b5\") " pod="kube-system/coredns-6f6b679f8f-tgr77"
May 10 00:51:04.850573 kubelet[1935]: I0510 00:51:04.850397 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdml\" (UniqueName: \"kubernetes.io/projected/bf4f164d-0829-46bf-adf6-5b9294b9d3b5-kube-api-access-crdml\") pod \"coredns-6f6b679f8f-tgr77\" (UID: \"bf4f164d-0829-46bf-adf6-5b9294b9d3b5\") " pod="kube-system/coredns-6f6b679f8f-tgr77"
May 10 00:51:05.087359 env[1199]: time="2025-05-10T00:51:05.087292795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tgr77,Uid:bf4f164d-0829-46bf-adf6-5b9294b9d3b5,Namespace:kube-system,Attempt:0,}"
May 10 00:51:05.097114 env[1199]: time="2025-05-10T00:51:05.096764529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ltr4f,Uid:a94357ac-de38-49c6-b7ba-5bdf4883e9c8,Namespace:kube-system,Attempt:0,}"
May 10 00:51:05.341886 kubelet[1935]: I0510 00:51:05.341672 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fzqrg" podStartSLOduration=5.921814558 podStartE2EDuration="22.339538036s" podCreationTimestamp="2025-05-10 00:50:43 +0000 UTC" firstStartedPulling="2025-05-10 00:50:43.72178502 +0000 UTC m=+5.868010632" lastFinishedPulling="2025-05-10 00:51:00.139508503 +0000 UTC m=+22.285734110" observedRunningTime="2025-05-10 00:51:05.337591221 +0000 UTC m=+27.483816860" watchObservedRunningTime="2025-05-10 00:51:05.339538036 +0000 UTC m=+27.485763654"
May 10 00:51:07.244134 systemd-networkd[1020]: cilium_host: Link UP
May 10 00:51:07.254458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 10 00:51:07.257389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 10 00:51:07.249517 systemd-networkd[1020]: cilium_net: Link UP
May 10 00:51:07.251853 systemd-networkd[1020]: cilium_net: Gained carrier
May 10 00:51:07.253408 systemd-networkd[1020]: cilium_host: Gained carrier
May 10 00:51:07.450528 systemd-networkd[1020]: cilium_vxlan: Link UP
May 10 00:51:07.450540 systemd-networkd[1020]: cilium_vxlan: Gained carrier
May 10 00:51:07.673308 systemd-networkd[1020]: cilium_net: Gained IPv6LL
May 10 00:51:08.019099 kernel: NET: Registered PF_ALG protocol family
May 10 00:51:08.258343 systemd-networkd[1020]: cilium_host: Gained IPv6LL
May 10 00:51:08.834369 systemd-networkd[1020]: cilium_vxlan: Gained IPv6LL
May 10 00:51:09.164365 systemd-networkd[1020]: lxc_health: Link UP
May 10 00:51:09.205740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:51:09.204987 systemd-networkd[1020]: lxc_health: Gained carrier
May 10 00:51:09.689012 systemd-networkd[1020]: lxce04161e29a66: Link UP
May 10 00:51:09.703076 kernel: eth0: renamed from tmp01d2a
May 10 00:51:09.718152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce04161e29a66: link becomes ready
May 10 00:51:09.720760 systemd-networkd[1020]: lxce04161e29a66: Gained carrier
May 10 00:51:09.721049 systemd-networkd[1020]: lxcacbc0d379e23: Link UP
May 10 00:51:09.743173 kernel: eth0: renamed from tmpd4f99
May 10 00:51:09.765259 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcacbc0d379e23: link becomes ready
May 10 00:51:09.764300 systemd-networkd[1020]: lxcacbc0d379e23: Gained carrier
May 10 00:51:10.881571 systemd-networkd[1020]: lxcacbc0d379e23: Gained IPv6LL
May 10 00:51:11.201422 systemd-networkd[1020]: lxc_health: Gained IPv6LL
May 10 00:51:11.207091 systemd-networkd[1020]: lxce04161e29a66: Gained IPv6LL
May 10 00:51:13.004882 kubelet[1935]: I0510 00:51:13.004723 1935 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 10 00:51:15.538693 env[1199]: time="2025-05-10T00:51:15.538540836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:51:15.539767 env[1199]: time="2025-05-10T00:51:15.539717932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:51:15.539956 env[1199]: time="2025-05-10T00:51:15.539912173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:51:15.542336 env[1199]: time="2025-05-10T00:51:15.542265858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01d2a48843584a20dbad36bf50b32b649842336871545d059be4350090e7a1a1 pid=3126 runtime=io.containerd.runc.v2
May 10 00:51:15.546336 env[1199]: time="2025-05-10T00:51:15.546242775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:51:15.546605 env[1199]: time="2025-05-10T00:51:15.546535586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:51:15.546908 env[1199]: time="2025-05-10T00:51:15.546829768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:51:15.558282 env[1199]: time="2025-05-10T00:51:15.558183794Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4f99597920e72418d08b33111cfc9881c0c42b7aff38dccb34babe198766574 pid=3133 runtime=io.containerd.runc.v2
May 10 00:51:15.615299 systemd[1]: Started cri-containerd-01d2a48843584a20dbad36bf50b32b649842336871545d059be4350090e7a1a1.scope.
May 10 00:51:15.625105 systemd[1]: run-containerd-runc-k8s.io-01d2a48843584a20dbad36bf50b32b649842336871545d059be4350090e7a1a1-runc.1bmJuB.mount: Deactivated successfully.
May 10 00:51:15.636704 systemd[1]: Started cri-containerd-d4f99597920e72418d08b33111cfc9881c0c42b7aff38dccb34babe198766574.scope.
May 10 00:51:15.741278 env[1199]: time="2025-05-10T00:51:15.741193793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ltr4f,Uid:a94357ac-de38-49c6-b7ba-5bdf4883e9c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4f99597920e72418d08b33111cfc9881c0c42b7aff38dccb34babe198766574\""
May 10 00:51:15.749208 env[1199]: time="2025-05-10T00:51:15.749085912Z" level=info msg="CreateContainer within sandbox \"d4f99597920e72418d08b33111cfc9881c0c42b7aff38dccb34babe198766574\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 00:51:15.793149 env[1199]: time="2025-05-10T00:51:15.791887294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tgr77,Uid:bf4f164d-0829-46bf-adf6-5b9294b9d3b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"01d2a48843584a20dbad36bf50b32b649842336871545d059be4350090e7a1a1\""
May 10 00:51:15.799205 env[1199]: time="2025-05-10T00:51:15.798463703Z" level=info msg="CreateContainer within sandbox \"d4f99597920e72418d08b33111cfc9881c0c42b7aff38dccb34babe198766574\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e04ec0ebec011810e1157be51b0ab5861b28cb6154cd2bd62d040920d856a14b\""
May 10 00:51:15.800610 env[1199]: time="2025-05-10T00:51:15.800539276Z" level=info msg="CreateContainer within sandbox \"01d2a48843584a20dbad36bf50b32b649842336871545d059be4350090e7a1a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 00:51:15.800928 env[1199]: time="2025-05-10T00:51:15.800889660Z" level=info msg="StartContainer for \"e04ec0ebec011810e1157be51b0ab5861b28cb6154cd2bd62d040920d856a14b\""
May 10 00:51:15.822340 env[1199]: time="2025-05-10T00:51:15.822283980Z" level=info msg="CreateContainer within sandbox \"01d2a48843584a20dbad36bf50b32b649842336871545d059be4350090e7a1a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a74ec32c38030c20e20bfad3204f8e2e54f3ae90f9d849af51e401e9eadc4192\""
May 10 00:51:15.823375 env[1199]: time="2025-05-10T00:51:15.823330087Z" level=info msg="StartContainer for \"a74ec32c38030c20e20bfad3204f8e2e54f3ae90f9d849af51e401e9eadc4192\""
May 10 00:51:15.828871 systemd[1]: Started cri-containerd-e04ec0ebec011810e1157be51b0ab5861b28cb6154cd2bd62d040920d856a14b.scope.
May 10 00:51:15.860030 systemd[1]: Started cri-containerd-a74ec32c38030c20e20bfad3204f8e2e54f3ae90f9d849af51e401e9eadc4192.scope.
May 10 00:51:15.903179 env[1199]: time="2025-05-10T00:51:15.903118533Z" level=info msg="StartContainer for \"e04ec0ebec011810e1157be51b0ab5861b28cb6154cd2bd62d040920d856a14b\" returns successfully"
May 10 00:51:15.925566 env[1199]: time="2025-05-10T00:51:15.925383458Z" level=info msg="StartContainer for \"a74ec32c38030c20e20bfad3204f8e2e54f3ae90f9d849af51e401e9eadc4192\" returns successfully"
May 10 00:51:16.367764 kubelet[1935]: I0510 00:51:16.367644 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ltr4f" podStartSLOduration=35.367587334 podStartE2EDuration="35.367587334s" podCreationTimestamp="2025-05-10 00:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:51:16.365379677 +0000 UTC m=+38.511605296" watchObservedRunningTime="2025-05-10 00:51:16.367587334 +0000 UTC m=+38.513812954"
May 10 00:51:16.408242 kubelet[1935]: I0510 00:51:16.408171 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tgr77" podStartSLOduration=35.408149662 podStartE2EDuration="35.408149662s" podCreationTimestamp="2025-05-10 00:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:51:16.407961487 +0000 UTC m=+38.554187115" watchObservedRunningTime="2025-05-10 00:51:16.408149662 +0000 UTC m=+38.554375282"
May 10 00:51:54.338392 systemd[1]: Started sshd@5-10.244.24.230:22-139.178.68.195:36290.service.
May 10 00:51:55.267033 sshd[3281]: Accepted publickey for core from 139.178.68.195 port 36290 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:51:55.270992 sshd[3281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:51:55.282075 systemd[1]: Started session-6.scope.
May 10 00:51:55.283985 systemd-logind[1189]: New session 6 of user core.
May 10 00:51:56.261773 sshd[3281]: pam_unix(sshd:session): session closed for user core
May 10 00:51:56.267534 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit.
May 10 00:51:56.269378 systemd[1]: sshd@5-10.244.24.230:22-139.178.68.195:36290.service: Deactivated successfully.
May 10 00:51:56.270549 systemd[1]: session-6.scope: Deactivated successfully.
May 10 00:51:56.272102 systemd-logind[1189]: Removed session 6.
May 10 00:52:01.410180 systemd[1]: Started sshd@6-10.244.24.230:22-139.178.68.195:33890.service.
May 10 00:52:02.303473 sshd[3293]: Accepted publickey for core from 139.178.68.195 port 33890 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:02.306364 sshd[3293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:02.314420 systemd[1]: Started session-7.scope.
May 10 00:52:02.315200 systemd-logind[1189]: New session 7 of user core.
May 10 00:52:03.054525 sshd[3293]: pam_unix(sshd:session): session closed for user core
May 10 00:52:03.060376 systemd[1]: sshd@6-10.244.24.230:22-139.178.68.195:33890.service: Deactivated successfully.
May 10 00:52:03.061716 systemd[1]: session-7.scope: Deactivated successfully.
May 10 00:52:03.063569 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit.
May 10 00:52:03.065350 systemd-logind[1189]: Removed session 7.
May 10 00:52:08.206463 systemd[1]: Started sshd@7-10.244.24.230:22-139.178.68.195:53158.service.
May 10 00:52:09.103129 sshd[3305]: Accepted publickey for core from 139.178.68.195 port 53158 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:09.104986 sshd[3305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:09.113163 systemd-logind[1189]: New session 8 of user core.
May 10 00:52:09.114458 systemd[1]: Started session-8.scope.
May 10 00:52:09.820543 sshd[3305]: pam_unix(sshd:session): session closed for user core
May 10 00:52:09.824629 systemd[1]: sshd@7-10.244.24.230:22-139.178.68.195:53158.service: Deactivated successfully.
May 10 00:52:09.825715 systemd[1]: session-8.scope: Deactivated successfully.
May 10 00:52:09.826834 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit.
May 10 00:52:09.828521 systemd-logind[1189]: Removed session 8.
May 10 00:52:14.971920 systemd[1]: Started sshd@8-10.244.24.230:22-139.178.68.195:53170.service.
May 10 00:52:15.883381 sshd[3319]: Accepted publickey for core from 139.178.68.195 port 53170 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:15.886269 sshd[3319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:15.894772 systemd-logind[1189]: New session 9 of user core.
May 10 00:52:15.895771 systemd[1]: Started session-9.scope.
May 10 00:52:16.640649 sshd[3319]: pam_unix(sshd:session): session closed for user core
May 10 00:52:16.645620 systemd[1]: sshd@8-10.244.24.230:22-139.178.68.195:53170.service: Deactivated successfully.
May 10 00:52:16.646987 systemd[1]: session-9.scope: Deactivated successfully.
May 10 00:52:16.648225 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit.
May 10 00:52:16.650275 systemd-logind[1189]: Removed session 9.
May 10 00:52:16.792617 systemd[1]: Started sshd@9-10.244.24.230:22-139.178.68.195:54522.service.
May 10 00:52:17.697251 sshd[3333]: Accepted publickey for core from 139.178.68.195 port 54522 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:17.699805 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:17.706106 systemd-logind[1189]: New session 10 of user core.
May 10 00:52:17.708782 systemd[1]: Started session-10.scope.
May 10 00:52:18.514795 sshd[3333]: pam_unix(sshd:session): session closed for user core
May 10 00:52:18.520530 systemd[1]: sshd@9-10.244.24.230:22-139.178.68.195:54522.service: Deactivated successfully.
May 10 00:52:18.521671 systemd[1]: session-10.scope: Deactivated successfully.
May 10 00:52:18.523449 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit.
May 10 00:52:18.525687 systemd-logind[1189]: Removed session 10.
May 10 00:52:18.661319 systemd[1]: Started sshd@10-10.244.24.230:22-139.178.68.195:54526.service.
May 10 00:52:19.568618 sshd[3343]: Accepted publickey for core from 139.178.68.195 port 54526 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:19.570958 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:19.579440 systemd-logind[1189]: New session 11 of user core.
May 10 00:52:19.581024 systemd[1]: Started session-11.scope.
May 10 00:52:20.290439 sshd[3343]: pam_unix(sshd:session): session closed for user core
May 10 00:52:20.294176 systemd[1]: sshd@10-10.244.24.230:22-139.178.68.195:54526.service: Deactivated successfully.
May 10 00:52:20.295186 systemd[1]: session-11.scope: Deactivated successfully.
May 10 00:52:20.295933 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit.
May 10 00:52:20.297220 systemd-logind[1189]: Removed session 11.
May 10 00:52:25.440191 systemd[1]: Started sshd@11-10.244.24.230:22-139.178.68.195:54694.service.
May 10 00:52:26.332975 sshd[3355]: Accepted publickey for core from 139.178.68.195 port 54694 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:26.335498 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:26.344146 systemd-logind[1189]: New session 12 of user core.
May 10 00:52:26.344332 systemd[1]: Started session-12.scope.
May 10 00:52:27.035542 sshd[3355]: pam_unix(sshd:session): session closed for user core
May 10 00:52:27.039587 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit.
May 10 00:52:27.040095 systemd[1]: sshd@11-10.244.24.230:22-139.178.68.195:54694.service: Deactivated successfully.
May 10 00:52:27.041220 systemd[1]: session-12.scope: Deactivated successfully.
May 10 00:52:27.042353 systemd-logind[1189]: Removed session 12.
May 10 00:52:32.184110 systemd[1]: Started sshd@12-10.244.24.230:22-139.178.68.195:54698.service.
May 10 00:52:33.078068 sshd[3367]: Accepted publickey for core from 139.178.68.195 port 54698 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:33.080460 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:33.094704 systemd-logind[1189]: New session 13 of user core.
May 10 00:52:33.096111 systemd[1]: Started session-13.scope.
May 10 00:52:33.809503 sshd[3367]: pam_unix(sshd:session): session closed for user core
May 10 00:52:33.813645 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit.
May 10 00:52:33.814453 systemd[1]: sshd@12-10.244.24.230:22-139.178.68.195:54698.service: Deactivated successfully.
May 10 00:52:33.815707 systemd[1]: session-13.scope: Deactivated successfully.
May 10 00:52:33.817102 systemd-logind[1189]: Removed session 13.
May 10 00:52:33.956737 systemd[1]: Started sshd@13-10.244.24.230:22-139.178.68.195:54700.service.
May 10 00:52:34.849513 sshd[3379]: Accepted publickey for core from 139.178.68.195 port 54700 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:34.851194 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:34.860150 systemd-logind[1189]: New session 14 of user core.
May 10 00:52:34.860365 systemd[1]: Started session-14.scope.
May 10 00:52:35.891571 sshd[3379]: pam_unix(sshd:session): session closed for user core
May 10 00:52:35.896368 systemd[1]: sshd@13-10.244.24.230:22-139.178.68.195:54700.service: Deactivated successfully.
May 10 00:52:35.897514 systemd[1]: session-14.scope: Deactivated successfully.
May 10 00:52:35.899323 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit.
May 10 00:52:35.900845 systemd-logind[1189]: Removed session 14.
May 10 00:52:36.040458 systemd[1]: Started sshd@14-10.244.24.230:22-139.178.68.195:59166.service.
May 10 00:52:36.951536 sshd[3389]: Accepted publickey for core from 139.178.68.195 port 59166 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:36.954207 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:36.961121 systemd-logind[1189]: New session 15 of user core.
May 10 00:52:36.962163 systemd[1]: Started session-15.scope.
May 10 00:52:39.816792 sshd[3389]: pam_unix(sshd:session): session closed for user core
May 10 00:52:39.825906 systemd[1]: sshd@14-10.244.24.230:22-139.178.68.195:59166.service: Deactivated successfully.
May 10 00:52:39.827417 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:52:39.828301 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit.
May 10 00:52:39.830003 systemd-logind[1189]: Removed session 15.
May 10 00:52:39.965698 systemd[1]: Started sshd@15-10.244.24.230:22-139.178.68.195:59174.service.
May 10 00:52:40.856602 sshd[3408]: Accepted publickey for core from 139.178.68.195 port 59174 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:40.858974 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:40.865839 systemd-logind[1189]: New session 16 of user core.
May 10 00:52:40.866731 systemd[1]: Started session-16.scope.
May 10 00:52:41.838092 sshd[3408]: pam_unix(sshd:session): session closed for user core
May 10 00:52:41.842393 systemd[1]: sshd@15-10.244.24.230:22-139.178.68.195:59174.service: Deactivated successfully.
May 10 00:52:41.843518 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:52:41.844364 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit.
May 10 00:52:41.845776 systemd-logind[1189]: Removed session 16.
May 10 00:52:41.989947 systemd[1]: Started sshd@16-10.244.24.230:22-139.178.68.195:59190.service.
May 10 00:52:42.903359 sshd[3418]: Accepted publickey for core from 139.178.68.195 port 59190 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:42.905826 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:42.915353 systemd[1]: Started session-17.scope.
May 10 00:52:42.915916 systemd-logind[1189]: New session 17 of user core.
May 10 00:52:43.608744 sshd[3418]: pam_unix(sshd:session): session closed for user core
May 10 00:52:43.613252 systemd[1]: sshd@16-10.244.24.230:22-139.178.68.195:59190.service: Deactivated successfully.
May 10 00:52:43.614369 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:52:43.615563 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit.
May 10 00:52:43.616799 systemd-logind[1189]: Removed session 17.
May 10 00:52:48.759308 systemd[1]: Started sshd@17-10.244.24.230:22-139.178.68.195:50700.service.
May 10 00:52:49.659360 sshd[3431]: Accepted publickey for core from 139.178.68.195 port 50700 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:49.661579 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:49.671776 systemd[1]: Started session-18.scope.
May 10 00:52:49.673694 systemd-logind[1189]: New session 18 of user core.
May 10 00:52:50.399909 sshd[3431]: pam_unix(sshd:session): session closed for user core
May 10 00:52:50.404621 systemd[1]: sshd@17-10.244.24.230:22-139.178.68.195:50700.service: Deactivated successfully.
May 10 00:52:50.405635 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit.
May 10 00:52:50.405731 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:52:50.407358 systemd-logind[1189]: Removed session 18.
May 10 00:52:55.546155 systemd[1]: Started sshd@18-10.244.24.230:22-139.178.68.195:35404.service.
May 10 00:52:56.449132 sshd[3446]: Accepted publickey for core from 139.178.68.195 port 35404 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:52:56.451436 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:52:56.458921 systemd-logind[1189]: New session 19 of user core.
May 10 00:52:56.460227 systemd[1]: Started session-19.scope.
May 10 00:52:57.152408 sshd[3446]: pam_unix(sshd:session): session closed for user core
May 10 00:52:57.156119 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit.
May 10 00:52:57.156589 systemd[1]: sshd@18-10.244.24.230:22-139.178.68.195:35404.service: Deactivated successfully.
May 10 00:52:57.157593 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:52:57.158937 systemd-logind[1189]: Removed session 19.
May 10 00:53:02.302294 systemd[1]: Started sshd@19-10.244.24.230:22-139.178.68.195:35416.service.
May 10 00:53:03.193872 sshd[3458]: Accepted publickey for core from 139.178.68.195 port 35416 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:53:03.197328 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:53:03.206025 systemd-logind[1189]: New session 20 of user core.
May 10 00:53:03.206286 systemd[1]: Started session-20.scope.
May 10 00:53:03.974362 sshd[3458]: pam_unix(sshd:session): session closed for user core
May 10 00:53:03.978193 systemd[1]: sshd@19-10.244.24.230:22-139.178.68.195:35416.service: Deactivated successfully.
May 10 00:53:03.979667 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit.
May 10 00:53:03.979751 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:53:03.981479 systemd-logind[1189]: Removed session 20.
May 10 00:53:04.124492 systemd[1]: Started sshd@20-10.244.24.230:22-139.178.68.195:35418.service.
May 10 00:53:05.035003 sshd[3470]: Accepted publickey for core from 139.178.68.195 port 35418 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84
May 10 00:53:05.036607 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:53:05.044793 systemd-logind[1189]: New session 21 of user core.
May 10 00:53:05.045915 systemd[1]: Started session-21.scope.
May 10 00:53:07.252292 env[1199]: time="2025-05-10T00:53:07.252162146Z" level=info msg="StopContainer for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" with timeout 30 (s)"
May 10 00:53:07.253974 env[1199]: time="2025-05-10T00:53:07.253922971Z" level=info msg="Stop container \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" with signal terminated"
May 10 00:53:07.278278 systemd[1]: run-containerd-runc-k8s.io-7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5-runc.IMtQgj.mount: Deactivated successfully.
May 10 00:53:07.312083 systemd[1]: cri-containerd-3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f.scope: Deactivated successfully.
May 10 00:53:07.328713 env[1199]: time="2025-05-10T00:53:07.328599216Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:53:07.348635 env[1199]: time="2025-05-10T00:53:07.348585040Z" level=info msg="StopContainer for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" with timeout 2 (s)"
May 10 00:53:07.349433 env[1199]: time="2025-05-10T00:53:07.349397954Z" level=info msg="Stop container \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" with signal terminated"
May 10 00:53:07.355718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f-rootfs.mount: Deactivated successfully.
May 10 00:53:07.365911 env[1199]: time="2025-05-10T00:53:07.365838359Z" level=info msg="shim disconnected" id=3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f
May 10 00:53:07.366278 env[1199]: time="2025-05-10T00:53:07.366244783Z" level=warning msg="cleaning up after shim disconnected" id=3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f namespace=k8s.io
May 10 00:53:07.366453 env[1199]: time="2025-05-10T00:53:07.366422808Z" level=info msg="cleaning up dead shim"
May 10 00:53:07.376268 systemd-networkd[1020]: lxc_health: Link DOWN
May 10 00:53:07.376282 systemd-networkd[1020]: lxc_health: Lost carrier
May 10 00:53:07.405877 env[1199]: time="2025-05-10T00:53:07.405212303Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3526 runtime=io.containerd.runc.v2\n"
May 10 00:53:07.410325 env[1199]: time="2025-05-10T00:53:07.409268333Z" level=info msg="StopContainer for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" returns successfully"
May 10 00:53:07.423364 env[1199]: time="2025-05-10T00:53:07.418452200Z" level=info msg="StopPodSandbox for \"927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81\""
May 10 00:53:07.423364 env[1199]: time="2025-05-10T00:53:07.418558007Z" level=info msg="Container to stop \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:53:07.421558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81-shm.mount: Deactivated successfully.
May 10 00:53:07.428203 systemd[1]: cri-containerd-7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5.scope: Deactivated successfully.
May 10 00:53:07.428630 systemd[1]: cri-containerd-7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5.scope: Consumed 10.441s CPU time.
May 10 00:53:07.440434 systemd[1]: cri-containerd-927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81.scope: Deactivated successfully.
May 10 00:53:07.472709 env[1199]: time="2025-05-10T00:53:07.472463024Z" level=info msg="shim disconnected" id=7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5
May 10 00:53:07.472709 env[1199]: time="2025-05-10T00:53:07.472526634Z" level=warning msg="cleaning up after shim disconnected" id=7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5 namespace=k8s.io
May 10 00:53:07.472709 env[1199]: time="2025-05-10T00:53:07.472544425Z" level=info msg="cleaning up dead shim"
May 10 00:53:07.505637 env[1199]: time="2025-05-10T00:53:07.503681290Z" level=info msg="shim disconnected" id=927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81
May 10 00:53:07.506035 env[1199]: time="2025-05-10T00:53:07.505987016Z" level=warning msg="cleaning up after shim disconnected" id=927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81 namespace=k8s.io
May 10 00:53:07.506192 env[1199]: time="2025-05-10T00:53:07.506162859Z" level=info msg="cleaning up dead shim"
May 10 00:53:07.510635 env[1199]: time="2025-05-10T00:53:07.510572769Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3567 runtime=io.containerd.runc.v2\n"
May 10 00:53:07.513555 env[1199]: time="2025-05-10T00:53:07.513488085Z" level=info msg="StopContainer for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" returns successfully"
May 10 00:53:07.514698 env[1199]: time="2025-05-10T00:53:07.514580061Z" level=info msg="StopPodSandbox for \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\""
May 10 00:53:07.514816 env[1199]: time="2025-05-10T00:53:07.514760656Z" level=info msg="Container to stop \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:53:07.514816 env[1199]: time="2025-05-10T00:53:07.514789468Z" level=info msg="Container to stop \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:53:07.514816 env[1199]: time="2025-05-10T00:53:07.514809513Z" level=info msg="Container to stop \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:53:07.515073 env[1199]: time="2025-05-10T00:53:07.514829180Z" level=info msg="Container to stop \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:53:07.515073 env[1199]: time="2025-05-10T00:53:07.514864959Z" level=info msg="Container to stop \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:53:07.523735 systemd[1]: cri-containerd-b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794.scope: Deactivated successfully.
May 10 00:53:07.525959 env[1199]: time="2025-05-10T00:53:07.525894453Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3586 runtime=io.containerd.runc.v2\n"
May 10 00:53:07.528313 env[1199]: time="2025-05-10T00:53:07.528272436Z" level=info msg="TearDown network for sandbox \"927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81\" successfully"
May 10 00:53:07.528432 env[1199]: time="2025-05-10T00:53:07.528310200Z" level=info msg="StopPodSandbox for \"927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81\" returns successfully"
May 10 00:53:07.566659 env[1199]: time="2025-05-10T00:53:07.566599115Z" level=info msg="shim disconnected" id=b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794
May 10 00:53:07.567100 env[1199]: time="2025-05-10T00:53:07.567057738Z" level=warning msg="cleaning up after shim disconnected" id=b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794 namespace=k8s.io
May 10 00:53:07.567243 env[1199]: time="2025-05-10T00:53:07.567214364Z" level=info msg="cleaning up dead shim"
May 10 00:53:07.578774 env[1199]: time="2025-05-10T00:53:07.578713159Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3618 runtime=io.containerd.runc.v2\n"
May 10 00:53:07.580207 env[1199]: time="2025-05-10T00:53:07.580166613Z" level=info msg="TearDown network for sandbox \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" successfully"
May 10 00:53:07.580364 env[1199]: time="2025-05-10T00:53:07.580330083Z" level=info msg="StopPodSandbox for \"b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794\" returns successfully"
May 10 00:53:07.664710 kubelet[1935]: I0510 00:53:07.664637 1935 scope.go:117] "RemoveContainer" containerID="7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5"
May 10 00:53:07.668570 env[1199]: time="2025-05-10T00:53:07.667682946Z" level=info msg="RemoveContainer for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\""
May 10 00:53:07.672946 env[1199]: time="2025-05-10T00:53:07.672760144Z" level=info msg="RemoveContainer for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" returns successfully"
May 10 00:53:07.675703 kubelet[1935]: I0510 00:53:07.675655 1935 scope.go:117] "RemoveContainer" containerID="633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87"
May 10 00:53:07.679455 env[1199]: time="2025-05-10T00:53:07.679413381Z" level=info msg="RemoveContainer for \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\""
May 10 00:53:07.682530 env[1199]: time="2025-05-10T00:53:07.682494394Z" level=info msg="RemoveContainer for \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\" returns successfully"
May 10 00:53:07.682753 kubelet[1935]: I0510 00:53:07.682717 1935 scope.go:117] "RemoveContainer" containerID="c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf"
May 10 00:53:07.684384 env[1199]: time="2025-05-10T00:53:07.684326861Z" level=info msg="RemoveContainer for \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\""
May 10 00:53:07.687392 kubelet[1935]: I0510 00:53:07.687361 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hqtg\" (UniqueName: \"kubernetes.io/projected/5c6028e2-a347-4522-b747-9b3a28f9776d-kube-api-access-5hqtg\") pod \"5c6028e2-a347-4522-b747-9b3a28f9776d\" (UID: \"5c6028e2-a347-4522-b747-9b3a28f9776d\") "
May 10 00:53:07.687546 kubelet[1935]: I0510 00:53:07.687512 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn27w\" (UniqueName: \"kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-kube-api-access-mn27w\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.687648 kubelet[1935]: I0510 00:53:07.687606 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c6028e2-a347-4522-b747-9b3a28f9776d-cilium-config-path\") pod \"5c6028e2-a347-4522-b747-9b3a28f9776d\" (UID: \"5c6028e2-a347-4522-b747-9b3a28f9776d\") "
May 10 00:53:07.687724 kubelet[1935]: I0510 00:53:07.687661 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-xtables-lock\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.687840 kubelet[1935]: I0510 00:53:07.687733 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-hostproc\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.687840 kubelet[1935]: I0510 00:53:07.687781 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-run\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.687967 kubelet[1935]: I0510 00:53:07.687824 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-config-path\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.687967 kubelet[1935]: I0510 00:53:07.687885 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-bpf-maps\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.687967 kubelet[1935]: I0510 00:53:07.687940 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31a4a06c-35f8-495c-9895-89674a12a81c-clustermesh-secrets\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688194 kubelet[1935]: I0510 00:53:07.687978 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-kernel\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688194 kubelet[1935]: I0510 00:53:07.688028 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-etc-cni-netd\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688194 kubelet[1935]: I0510 00:53:07.688090 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-lib-modules\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688194 kubelet[1935]: I0510 00:53:07.688117 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-net\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688194 kubelet[1935]: I0510 00:53:07.688165 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-hubble-tls\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688483 kubelet[1935]: I0510 00:53:07.688198 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cni-path\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.688483 kubelet[1935]: I0510 00:53:07.688241 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-cgroup\") pod \"31a4a06c-35f8-495c-9895-89674a12a81c\" (UID: \"31a4a06c-35f8-495c-9895-89674a12a81c\") "
May 10 00:53:07.691794 env[1199]: time="2025-05-10T00:53:07.691749672Z" level=info msg="RemoveContainer for \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\" returns successfully"
May 10 00:53:07.694472 kubelet[1935]: I0510 00:53:07.694441 1935 scope.go:117] "RemoveContainer" containerID="c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78"
May 10 00:53:07.697345 env[1199]: time="2025-05-10T00:53:07.696854230Z" level=info msg="RemoveContainer for \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\""
May 10 00:53:07.697985 kubelet[1935]: I0510 00:53:07.695243 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.698233 kubelet[1935]: I0510 00:53:07.698098 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.698233 kubelet[1935]: I0510 00:53:07.698138 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.698233 kubelet[1935]: I0510 00:53:07.698167 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.698233 kubelet[1935]: I0510 00:53:07.698194 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.699859 kubelet[1935]: I0510 00:53:07.699815 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cni-path" (OuterVolumeSpecName: "cni-path") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.702387 env[1199]: time="2025-05-10T00:53:07.702347906Z" level=info msg="RemoveContainer for \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\" returns successfully"
May 10 00:53:07.702648 kubelet[1935]: I0510 00:53:07.700442 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-hostproc" (OuterVolumeSpecName: "hostproc") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.704368 kubelet[1935]: I0510 00:53:07.704339 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.712464 kubelet[1935]: I0510 00:53:07.712408 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:53:07.712650 kubelet[1935]: I0510 00:53:07.712527 1935 scope.go:117] "RemoveContainer" containerID="692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5"
May 10 00:53:07.712650 kubelet[1935]: I0510 00:53:07.712627 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.714230 kubelet[1935]: I0510 00:53:07.714191 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-kube-api-access-mn27w" (OuterVolumeSpecName: "kube-api-access-mn27w") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "kube-api-access-mn27w". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:53:07.714501 kubelet[1935]: I0510 00:53:07.714464 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6028e2-a347-4522-b747-9b3a28f9776d-kube-api-access-5hqtg" (OuterVolumeSpecName: "kube-api-access-5hqtg") pod "5c6028e2-a347-4522-b747-9b3a28f9776d" (UID: "5c6028e2-a347-4522-b747-9b3a28f9776d"). InnerVolumeSpecName "kube-api-access-5hqtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:53:07.717375 env[1199]: time="2025-05-10T00:53:07.717320982Z" level=info msg="RemoveContainer for \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\""
May 10 00:53:07.717929 kubelet[1935]: I0510 00:53:07.717899 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:53:07.719177 kubelet[1935]: I0510 00:53:07.719146 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:53:07.720302 kubelet[1935]: I0510 00:53:07.720206 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a4a06c-35f8-495c-9895-89674a12a81c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31a4a06c-35f8-495c-9895-89674a12a81c" (UID: "31a4a06c-35f8-495c-9895-89674a12a81c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:53:07.720637 kubelet[1935]: I0510 00:53:07.720604 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c6028e2-a347-4522-b747-9b3a28f9776d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c6028e2-a347-4522-b747-9b3a28f9776d" (UID: "5c6028e2-a347-4522-b747-9b3a28f9776d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:53:07.722127 env[1199]: time="2025-05-10T00:53:07.722087333Z" level=info msg="RemoveContainer for \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\" returns successfully"
May 10 00:53:07.722490 kubelet[1935]: I0510 00:53:07.722447 1935 scope.go:117] "RemoveContainer" containerID="7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5"
May 10 00:53:07.723161 env[1199]: time="2025-05-10T00:53:07.722835601Z" level=error msg="ContainerStatus for \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\": not found"
May 10 00:53:07.725578 kubelet[1935]: E0510 00:53:07.725512 1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\": not found" containerID="7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5"
May 10 00:53:07.727016 kubelet[1935]: I0510 00:53:07.726744 1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5"} err="failed to get container status \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5\": not found"
May 10 00:53:07.727223 kubelet[1935]: I0510 00:53:07.727189 1935 scope.go:117] "RemoveContainer" containerID="633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87"
May 10 00:53:07.727801 env[1199]: time="2025-05-10T00:53:07.727645190Z" level=error msg="ContainerStatus for \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\": not found"
May 10 00:53:07.728143 kubelet[1935]: E0510 00:53:07.728097 1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\": not found" containerID="633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87"
May 10 00:53:07.728247 kubelet[1935]: I0510 00:53:07.728138 1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87"} err="failed to get container status \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\": rpc error: code = NotFound desc = an error occurred when try to find container \"633bf1899a92e25e07356fa967d7c96b1ba1aab0f425860c6f8cb432f7ebdc87\": not found"
May 10 00:53:07.728247 kubelet[1935]: I0510 00:53:07.728163 1935 scope.go:117] "RemoveContainer" containerID="c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf"
May 10 00:53:07.728728 env[1199]: time="2025-05-10T00:53:07.728576245Z" level=error msg="ContainerStatus for \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\": not found"
May 10 00:53:07.728821 kubelet[1935]: E0510 00:53:07.728779 1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\": not found" containerID="c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf"
May 10 00:53:07.728916 kubelet[1935]: I0510 00:53:07.728816 1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf"} err="failed to get container status \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\": rpc error: code = NotFound desc = an error occurred when try to find container \"c17f2c2224070ab5fa349b518701758812c80bb2af60d257f1acbcb0749c4daf\": not found"
May 10 00:53:07.728916 kubelet[1935]: I0510 00:53:07.728838 1935 scope.go:117] "RemoveContainer" containerID="c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78"
May 10 00:53:07.729226 env[1199]: time="2025-05-10T00:53:07.729150137Z" level=error msg="ContainerStatus for \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\": not found"
May 10 00:53:07.729521 kubelet[1935]: E0510 00:53:07.729491 1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\": not found" containerID="c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78"
May 10 00:53:07.729761 kubelet[1935]: I0510 00:53:07.729700 1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78"} err="failed to get container status \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\": rpc error: code = NotFound desc = an error occurred when try to find container \"c44e21e81841106a530bc04ada8ba18007d6af5796b4f49689483fa2d02b1f78\": not found"
May 10 00:53:07.729942 kubelet[1935]: I0510 00:53:07.729916 1935 scope.go:117] "RemoveContainer" containerID="692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5"
May 10 00:53:07.730614 env[1199]: time="2025-05-10T00:53:07.730470178Z" level=error msg="ContainerStatus for \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\": not found"
May 10 00:53:07.730808 kubelet[1935]: E0510 00:53:07.730689 1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\": not found" containerID="692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5"
May 10 00:53:07.730924 kubelet[1935]: I0510 00:53:07.730818 1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5"} err="failed to get container status \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\": rpc error: code = NotFound desc = an error occurred when try to find container \"692f5ea01b187b658aeb623e94a71c24872127f775638e567c9b70372ee8efe5\": not found"
May 10 00:53:07.731001 kubelet[1935]: I0510 00:53:07.730878 1935 scope.go:117] "RemoveContainer" containerID="3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f"
May 10 00:53:07.733006 env[1199]: time="2025-05-10T00:53:07.732962058Z" level=info msg="RemoveContainer for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\""
May 10 00:53:07.737160 env[1199]: time="2025-05-10T00:53:07.737099081Z" level=info msg="RemoveContainer for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" returns successfully"
May 10 00:53:07.737654 kubelet[1935]: I0510 00:53:07.737624 1935 scope.go:117] "RemoveContainer" containerID="3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f"
May 10 00:53:07.738222 env[1199]: time="2025-05-10T00:53:07.738140750Z" level=error msg="ContainerStatus for \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\": not found"
May 10 00:53:07.738917 kubelet[1935]: E0510 00:53:07.738775 1935 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\": not found" containerID="3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f"
May 10 00:53:07.739245 kubelet[1935]: I0510 00:53:07.739187 1935 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f"} err="failed to get container status \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3df26f50af8b73778ee9e98555daf3fac0a67b2f9d00f2c18a92af529940368f\": not found"
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790509 1935 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-hostproc\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790564 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-run\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790583 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-config-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790610 1935 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-bpf-maps\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790626 1935 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31a4a06c-35f8-495c-9895-89674a12a81c-clustermesh-secrets\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790641 1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-kernel\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790665 1935 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-etc-cni-netd\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790680 1935 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-lib-modules\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790694 1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-host-proc-sys-net\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790707 1935 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cni-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790726 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-cilium-cgroup\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790740 1935 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-hubble-tls\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790754 1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5hqtg\" (UniqueName: \"kubernetes.io/projected/5c6028e2-a347-4522-b747-9b3a28f9776d-kube-api-access-5hqtg\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790768 1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mn27w\" (UniqueName: \"kubernetes.io/projected/31a4a06c-35f8-495c-9895-89674a12a81c-kube-api-access-mn27w\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790783 1935 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a4a06c-35f8-495c-9895-89674a12a81c-xtables-lock\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.794943 kubelet[1935]: I0510 00:53:07.790798 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c6028e2-a347-4522-b747-9b3a28f9776d-cilium-config-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\""
May 10 00:53:07.969407 systemd[1]: Removed slice kubepods-burstable-pod31a4a06c_35f8_495c_9895_89674a12a81c.slice.
May 10 00:53:07.969549 systemd[1]: kubepods-burstable-pod31a4a06c_35f8_495c_9895_89674a12a81c.slice: Consumed 10.606s CPU time.
May 10 00:53:07.975904 systemd[1]: Removed slice kubepods-besteffort-pod5c6028e2_a347_4522_b747_9b3a28f9776d.slice.
May 10 00:53:08.157122 kubelet[1935]: I0510 00:53:08.157063 1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a4a06c-35f8-495c-9895-89674a12a81c" path="/var/lib/kubelet/pods/31a4a06c-35f8-495c-9895-89674a12a81c/volumes"
May 10 00:53:08.159340 kubelet[1935]: I0510 00:53:08.159289 1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6028e2-a347-4522-b747-9b3a28f9776d" path="/var/lib/kubelet/pods/5c6028e2-a347-4522-b747-9b3a28f9776d/volumes"
May 10 00:53:08.258611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e53e13f80dbda8138f545086929f584f8427f5d74cd4bf507b511d301356df5-rootfs.mount: Deactivated successfully.
May 10 00:53:08.259183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794-rootfs.mount: Deactivated successfully.
May 10 00:53:08.259494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2eeb5f3838a0ca6631604b3803a52a8250131236cbefcf5a461c5d266ef5794-shm.mount: Deactivated successfully.
May 10 00:53:08.259788 systemd[1]: var-lib-kubelet-pods-31a4a06c\x2d35f8\x2d495c\x2d9895\x2d89674a12a81c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmn27w.mount: Deactivated successfully.
May 10 00:53:08.260178 systemd[1]: var-lib-kubelet-pods-31a4a06c\x2d35f8\x2d495c\x2d9895\x2d89674a12a81c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 10 00:53:08.260470 systemd[1]: var-lib-kubelet-pods-31a4a06c\x2d35f8\x2d495c\x2d9895\x2d89674a12a81c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:53:08.260772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-927036fbcd5afc097ce6ccbcb201ac8b7e0f693cbbb5c31a75063e80c7ce1f81-rootfs.mount: Deactivated successfully. May 10 00:53:08.261118 systemd[1]: var-lib-kubelet-pods-5c6028e2\x2da347\x2d4522\x2db747\x2d9b3a28f9776d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hqtg.mount: Deactivated successfully. May 10 00:53:08.295767 kubelet[1935]: E0510 00:53:08.295678 1935 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:53:09.295233 sshd[3470]: pam_unix(sshd:session): session closed for user core May 10 00:53:09.299679 systemd[1]: sshd@20-10.244.24.230:22-139.178.68.195:35418.service: Deactivated successfully. May 10 00:53:09.300810 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:53:09.301791 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit. May 10 00:53:09.303463 systemd-logind[1189]: Removed session 21. May 10 00:53:09.443610 systemd[1]: Started sshd@21-10.244.24.230:22-139.178.68.195:45400.service. May 10 00:53:10.340880 sshd[3640]: Accepted publickey for core from 139.178.68.195 port 45400 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:53:10.343921 sshd[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:10.351966 systemd[1]: Started session-22.scope. May 10 00:53:10.353347 systemd-logind[1189]: New session 22 of user core. 
May 10 00:53:10.769869 kubelet[1935]: I0510 00:53:10.768961 1935 setters.go:600] "Node became not ready" node="srv-3yk6k.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:53:10Z","lastTransitionTime":"2025-05-10T00:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:53:11.770857 kubelet[1935]: E0510 00:53:11.770777 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31a4a06c-35f8-495c-9895-89674a12a81c" containerName="mount-cgroup" May 10 00:53:11.771836 kubelet[1935]: E0510 00:53:11.771801 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31a4a06c-35f8-495c-9895-89674a12a81c" containerName="mount-bpf-fs" May 10 00:53:11.771968 kubelet[1935]: E0510 00:53:11.771944 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31a4a06c-35f8-495c-9895-89674a12a81c" containerName="cilium-agent" May 10 00:53:11.772182 kubelet[1935]: E0510 00:53:11.772158 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c6028e2-a347-4522-b747-9b3a28f9776d" containerName="cilium-operator" May 10 00:53:11.772342 kubelet[1935]: E0510 00:53:11.772319 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31a4a06c-35f8-495c-9895-89674a12a81c" containerName="apply-sysctl-overwrites" May 10 00:53:11.772487 kubelet[1935]: E0510 00:53:11.772463 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31a4a06c-35f8-495c-9895-89674a12a81c" containerName="clean-cilium-state" May 10 00:53:11.774767 kubelet[1935]: I0510 00:53:11.774733 1935 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6028e2-a347-4522-b747-9b3a28f9776d" containerName="cilium-operator" May 10 00:53:11.774987 kubelet[1935]: I0510 00:53:11.774940 1935 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="31a4a06c-35f8-495c-9895-89674a12a81c" containerName="cilium-agent" May 10 00:53:11.795697 systemd[1]: Created slice kubepods-burstable-pod90923b92_db26_42f9_9805_960d6e3551ab.slice. May 10 00:53:11.819892 sshd[3640]: pam_unix(sshd:session): session closed for user core May 10 00:53:11.826208 systemd[1]: sshd@21-10.244.24.230:22-139.178.68.195:45400.service: Deactivated successfully. May 10 00:53:11.828327 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:53:11.828393 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit. May 10 00:53:11.831148 systemd-logind[1189]: Removed session 22. May 10 00:53:11.933871 kubelet[1935]: I0510 00:53:11.933796 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-lib-modules\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.934227 kubelet[1935]: I0510 00:53:11.934195 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-xtables-lock\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.934480 kubelet[1935]: I0510 00:53:11.934440 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-hubble-tls\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.934661 kubelet[1935]: I0510 00:53:11.934632 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sdc9\" (UniqueName: 
\"kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-kube-api-access-8sdc9\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.934861 kubelet[1935]: I0510 00:53:11.934833 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-bpf-maps\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.935030 kubelet[1935]: I0510 00:53:11.935003 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-cgroup\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.935216 kubelet[1935]: I0510 00:53:11.935183 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90923b92-db26-42f9-9805-960d6e3551ab-cilium-config-path\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.935369 kubelet[1935]: I0510 00:53:11.935341 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-cilium-ipsec-secrets\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.936238 kubelet[1935]: I0510 00:53:11.936207 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-etc-cni-netd\") pod \"cilium-t6tpf\" (UID: 
\"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.936474 kubelet[1935]: I0510 00:53:11.936444 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-kernel\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.936635 kubelet[1935]: I0510 00:53:11.936607 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-net\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.936862 kubelet[1935]: I0510 00:53:11.936769 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-clustermesh-secrets\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.939282 kubelet[1935]: I0510 00:53:11.937035 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-run\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.939282 kubelet[1935]: I0510 00:53:11.937112 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-hostproc\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.939282 kubelet[1935]: I0510 
00:53:11.937155 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cni-path\") pod \"cilium-t6tpf\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " pod="kube-system/cilium-t6tpf" May 10 00:53:11.968403 systemd[1]: Started sshd@22-10.244.24.230:22-139.178.68.195:45406.service. May 10 00:53:12.103950 env[1199]: time="2025-05-10T00:53:12.102641652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6tpf,Uid:90923b92-db26-42f9-9805-960d6e3551ab,Namespace:kube-system,Attempt:0,}" May 10 00:53:12.137454 env[1199]: time="2025-05-10T00:53:12.137324986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:53:12.137454 env[1199]: time="2025-05-10T00:53:12.137404820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:53:12.137858 env[1199]: time="2025-05-10T00:53:12.137429798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:53:12.138440 env[1199]: time="2025-05-10T00:53:12.138291233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d pid=3663 runtime=io.containerd.runc.v2 May 10 00:53:12.157066 systemd[1]: Started cri-containerd-ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d.scope. 
May 10 00:53:12.206310 env[1199]: time="2025-05-10T00:53:12.206244167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6tpf,Uid:90923b92-db26-42f9-9805-960d6e3551ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\"" May 10 00:53:12.212586 env[1199]: time="2025-05-10T00:53:12.212003998Z" level=info msg="CreateContainer within sandbox \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:53:12.225821 env[1199]: time="2025-05-10T00:53:12.225742829Z" level=info msg="CreateContainer within sandbox \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\"" May 10 00:53:12.228722 env[1199]: time="2025-05-10T00:53:12.227845333Z" level=info msg="StartContainer for \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\"" May 10 00:53:12.252648 systemd[1]: Started cri-containerd-3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1.scope. May 10 00:53:12.267071 systemd[1]: cri-containerd-3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1.scope: Deactivated successfully. 
May 10 00:53:12.294688 env[1199]: time="2025-05-10T00:53:12.294535434Z" level=info msg="shim disconnected" id=3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1 May 10 00:53:12.295190 env[1199]: time="2025-05-10T00:53:12.295144908Z" level=warning msg="cleaning up after shim disconnected" id=3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1 namespace=k8s.io May 10 00:53:12.295360 env[1199]: time="2025-05-10T00:53:12.295330978Z" level=info msg="cleaning up dead shim" May 10 00:53:12.309707 env[1199]: time="2025-05-10T00:53:12.309634546Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3720 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:53:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 00:53:12.310220 env[1199]: time="2025-05-10T00:53:12.310014767Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" May 10 00:53:12.311179 env[1199]: time="2025-05-10T00:53:12.311110874Z" level=error msg="Failed to pipe stderr of container \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\"" error="reading from a closed fifo" May 10 00:53:12.313165 env[1199]: time="2025-05-10T00:53:12.313119443Z" level=error msg="Failed to pipe stdout of container \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\"" error="reading from a closed fifo" May 10 00:53:12.315568 env[1199]: time="2025-05-10T00:53:12.315196028Z" level=error msg="StartContainer for \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 10 00:53:12.316699 kubelet[1935]: E0510 00:53:12.316604 1935 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1" May 10 00:53:12.321524 kubelet[1935]: E0510 00:53:12.321483 1935 kuberuntime_manager.go:1272] "Unhandled Error" err=< May 10 00:53:12.321524 kubelet[1935]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 00:53:12.321524 kubelet[1935]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 00:53:12.321524 kubelet[1935]: rm /hostbin/cilium-mount May 10 00:53:12.321524 kubelet[1935]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sdc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t6tpf_kube-system(90923b92-db26-42f9-9805-960d6e3551ab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 00:53:12.321524 kubelet[1935]: > logger="UnhandledError" May 10 00:53:12.322947 kubelet[1935]: E0510 00:53:12.322903 1935 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6tpf" podUID="90923b92-db26-42f9-9805-960d6e3551ab" May 10 00:53:12.697358 env[1199]: time="2025-05-10T00:53:12.697297539Z" level=info msg="CreateContainer within sandbox \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 10 00:53:12.721437 env[1199]: time="2025-05-10T00:53:12.721372944Z" level=info msg="CreateContainer within sandbox \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\"" May 10 00:53:12.723358 env[1199]: time="2025-05-10T00:53:12.723284319Z" level=info msg="StartContainer for \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\"" May 10 00:53:12.758320 systemd[1]: Started cri-containerd-3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee.scope. May 10 00:53:12.778625 systemd[1]: cri-containerd-3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee.scope: Deactivated successfully. 
May 10 00:53:12.790025 env[1199]: time="2025-05-10T00:53:12.789955623Z" level=info msg="shim disconnected" id=3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee May 10 00:53:12.790271 env[1199]: time="2025-05-10T00:53:12.790026729Z" level=warning msg="cleaning up after shim disconnected" id=3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee namespace=k8s.io May 10 00:53:12.790271 env[1199]: time="2025-05-10T00:53:12.790064397Z" level=info msg="cleaning up dead shim" May 10 00:53:12.800654 env[1199]: time="2025-05-10T00:53:12.800588057Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3758 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T00:53:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 00:53:12.801110 env[1199]: time="2025-05-10T00:53:12.800990524Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" May 10 00:53:12.802169 env[1199]: time="2025-05-10T00:53:12.802117355Z" level=error msg="Failed to pipe stderr of container \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\"" error="reading from a closed fifo" May 10 00:53:12.802169 env[1199]: time="2025-05-10T00:53:12.802118094Z" level=error msg="Failed to pipe stdout of container \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\"" error="reading from a closed fifo" May 10 00:53:12.803815 env[1199]: time="2025-05-10T00:53:12.803746614Z" level=error msg="StartContainer for \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 10 00:53:12.804265 kubelet[1935]: E0510 00:53:12.804206 1935 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee" May 10 00:53:12.804762 kubelet[1935]: E0510 00:53:12.804417 1935 kuberuntime_manager.go:1272] "Unhandled Error" err=< May 10 00:53:12.804762 kubelet[1935]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 00:53:12.804762 kubelet[1935]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 00:53:12.804762 kubelet[1935]: rm /hostbin/cilium-mount May 10 00:53:12.804762 kubelet[1935]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sdc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t6tpf_kube-system(90923b92-db26-42f9-9805-960d6e3551ab): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 00:53:12.804762 kubelet[1935]: > logger="UnhandledError" May 10 00:53:12.806362 kubelet[1935]: E0510 00:53:12.806314 1935 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t6tpf" podUID="90923b92-db26-42f9-9805-960d6e3551ab" May 10 00:53:12.874953 sshd[3650]: Accepted publickey for core from 139.178.68.195 port 45406 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:53:12.877196 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:12.884857 systemd[1]: Started session-23.scope. May 10 00:53:12.886457 systemd-logind[1189]: New session 23 of user core. May 10 00:53:13.297641 kubelet[1935]: E0510 00:53:13.297578 1935 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:53:13.698834 kubelet[1935]: I0510 00:53:13.698795 1935 scope.go:117] "RemoveContainer" containerID="3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1" May 10 00:53:13.699642 env[1199]: time="2025-05-10T00:53:13.699596297Z" level=info msg="StopPodSandbox for \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\"" May 10 00:53:13.700262 env[1199]: time="2025-05-10T00:53:13.700222734Z" level=info msg="Container to stop \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:13.700411 env[1199]: time="2025-05-10T00:53:13.700377005Z" level=info msg="Container to stop \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:53:13.703176 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d-shm.mount: Deactivated successfully. May 10 00:53:13.711156 sshd[3650]: pam_unix(sshd:session): session closed for user core May 10 00:53:13.715432 env[1199]: time="2025-05-10T00:53:13.715383817Z" level=info msg="RemoveContainer for \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\"" May 10 00:53:13.718636 systemd[1]: sshd@22-10.244.24.230:22-139.178.68.195:45406.service: Deactivated successfully. May 10 00:53:13.719970 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:53:13.722419 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit. May 10 00:53:13.724602 systemd-logind[1189]: Removed session 23. May 10 00:53:13.729722 systemd[1]: cri-containerd-ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d.scope: Deactivated successfully. May 10 00:53:13.732454 env[1199]: time="2025-05-10T00:53:13.732372216Z" level=info msg="RemoveContainer for \"3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1\" returns successfully" May 10 00:53:13.770194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d-rootfs.mount: Deactivated successfully. 
May 10 00:53:13.778164 env[1199]: time="2025-05-10T00:53:13.778093664Z" level=info msg="shim disconnected" id=ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d May 10 00:53:13.778895 env[1199]: time="2025-05-10T00:53:13.778862483Z" level=warning msg="cleaning up after shim disconnected" id=ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d namespace=k8s.io May 10 00:53:13.779184 env[1199]: time="2025-05-10T00:53:13.779110185Z" level=info msg="cleaning up dead shim" May 10 00:53:13.794592 env[1199]: time="2025-05-10T00:53:13.794499059Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\n" May 10 00:53:13.794981 env[1199]: time="2025-05-10T00:53:13.794941166Z" level=info msg="TearDown network for sandbox \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\" successfully" May 10 00:53:13.795109 env[1199]: time="2025-05-10T00:53:13.794980330Z" level=info msg="StopPodSandbox for \"ff4c8ea62a8006c937fa17bcca5fc6bf1454d2d0a309de7dc3ab599fb7ceeb0d\" returns successfully" May 10 00:53:13.863275 systemd[1]: Started sshd@23-10.244.24.230:22-139.178.68.195:45408.service. 
May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.953967 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-cgroup\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.955116 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-run\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.955161 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-hubble-tls\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.955203 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sdc9\" (UniqueName: \"kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-kube-api-access-8sdc9\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.955232 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-bpf-maps\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.955272 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-cilium-ipsec-secrets\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.955306 kubelet[1935]: I0510 00:53:13.955298 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-kernel\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955323 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-net\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955355 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cni-path\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955392 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-xtables-lock\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955416 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-hostproc\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955447 1935 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-lib-modules\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955481 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90923b92-db26-42f9-9805-960d6e3551ab-cilium-config-path\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955511 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-etc-cni-netd\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.955543 1935 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-clustermesh-secrets\") pod \"90923b92-db26-42f9-9805-960d6e3551ab\" (UID: \"90923b92-db26-42f9-9805-960d6e3551ab\") " May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.954142 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.956085 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.956297 kubelet[1935]: I0510 00:53:13.956155 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957220 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957285 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cni-path" (OuterVolumeSpecName: "cni-path") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957321 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957368 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-hostproc" (OuterVolumeSpecName: "hostproc") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957400 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957811 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.958176 kubelet[1935]: I0510 00:53:13.957862 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:53:13.962934 systemd[1]: var-lib-kubelet-pods-90923b92\x2ddb26\x2d42f9\x2d9805\x2d960d6e3551ab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:53:13.965881 kubelet[1935]: I0510 00:53:13.965835 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:53:13.968134 kubelet[1935]: I0510 00:53:13.968097 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90923b92-db26-42f9-9805-960d6e3551ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:53:13.971548 systemd[1]: var-lib-kubelet-pods-90923b92\x2ddb26\x2d42f9\x2d9805\x2d960d6e3551ab-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 10 00:53:13.973098 kubelet[1935]: I0510 00:53:13.973063 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:53:13.973700 kubelet[1935]: I0510 00:53:13.973668 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:13.975376 kubelet[1935]: I0510 00:53:13.975325 1935 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-kube-api-access-8sdc9" (OuterVolumeSpecName: "kube-api-access-8sdc9") pod "90923b92-db26-42f9-9805-960d6e3551ab" (UID: "90923b92-db26-42f9-9805-960d6e3551ab"). InnerVolumeSpecName "kube-api-access-8sdc9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:53:14.045981 systemd[1]: var-lib-kubelet-pods-90923b92\x2ddb26\x2d42f9\x2d9805\x2d960d6e3551ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8sdc9.mount: Deactivated successfully. May 10 00:53:14.046157 systemd[1]: var-lib-kubelet-pods-90923b92\x2ddb26\x2d42f9\x2d9805\x2d960d6e3551ab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:53:14.056170 kubelet[1935]: I0510 00:53:14.056026 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-cgroup\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056337 kubelet[1935]: I0510 00:53:14.056187 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cilium-run\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056337 kubelet[1935]: I0510 00:53:14.056212 1935 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-hubble-tls\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056337 kubelet[1935]: I0510 00:53:14.056229 1935 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8sdc9\" (UniqueName: \"kubernetes.io/projected/90923b92-db26-42f9-9805-960d6e3551ab-kube-api-access-8sdc9\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056337 kubelet[1935]: I0510 00:53:14.056287 1935 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-bpf-maps\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056337 kubelet[1935]: I0510 00:53:14.056305 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-cilium-ipsec-secrets\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056344 1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-kernel\") on node 
\"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056364 1935 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-host-proc-sys-net\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056379 1935 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-cni-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056393 1935 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-xtables-lock\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056443 1935 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-hostproc\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056458 1935 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-lib-modules\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056474 1935 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90923b92-db26-42f9-9805-960d6e3551ab-cilium-config-path\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056513 1935 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90923b92-db26-42f9-9805-960d6e3551ab-etc-cni-netd\") on node 
\"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.056662 kubelet[1935]: I0510 00:53:14.056546 1935 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90923b92-db26-42f9-9805-960d6e3551ab-clustermesh-secrets\") on node \"srv-3yk6k.gb1.brightbox.com\" DevicePath \"\"" May 10 00:53:14.167459 systemd[1]: Removed slice kubepods-burstable-pod90923b92_db26_42f9_9805_960d6e3551ab.slice. May 10 00:53:14.702703 kubelet[1935]: I0510 00:53:14.702663 1935 scope.go:117] "RemoveContainer" containerID="3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee" May 10 00:53:14.707088 env[1199]: time="2025-05-10T00:53:14.706779757Z" level=info msg="RemoveContainer for \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\"" May 10 00:53:14.712512 env[1199]: time="2025-05-10T00:53:14.712474032Z" level=info msg="RemoveContainer for \"3edc847e17248bdcf3819b49b68c06c13a154b1c2bdfe2f4e84b5becc7b1eaee\" returns successfully" May 10 00:53:14.773162 sshd[3815]: Accepted publickey for core from 139.178.68.195 port 45408 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 00:53:14.775332 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:53:14.779544 kubelet[1935]: E0510 00:53:14.779499 1935 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90923b92-db26-42f9-9805-960d6e3551ab" containerName="mount-cgroup" May 10 00:53:14.779653 kubelet[1935]: I0510 00:53:14.779570 1935 memory_manager.go:354] "RemoveStaleState removing state" podUID="90923b92-db26-42f9-9805-960d6e3551ab" containerName="mount-cgroup" May 10 00:53:14.779653 kubelet[1935]: I0510 00:53:14.779590 1935 memory_manager.go:354] "RemoveStaleState removing state" podUID="90923b92-db26-42f9-9805-960d6e3551ab" containerName="mount-cgroup" May 10 00:53:14.779653 kubelet[1935]: E0510 00:53:14.779618 1935 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="90923b92-db26-42f9-9805-960d6e3551ab" containerName="mount-cgroup" May 10 00:53:14.790075 systemd-logind[1189]: New session 24 of user core. May 10 00:53:14.793443 systemd[1]: Started session-24.scope. May 10 00:53:14.799398 systemd[1]: Created slice kubepods-burstable-pod8f849db4_0112_4f38_84c2_cf43ceafd6d2.slice. May 10 00:53:14.863376 kubelet[1935]: I0510 00:53:14.863311 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-cilium-run\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.863611 kubelet[1935]: I0510 00:53:14.863386 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-bpf-maps\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.863611 kubelet[1935]: I0510 00:53:14.863427 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f849db4-0112-4f38-84c2-cf43ceafd6d2-clustermesh-secrets\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.863611 kubelet[1935]: I0510 00:53:14.863466 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f849db4-0112-4f38-84c2-cf43ceafd6d2-cilium-config-path\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.863611 kubelet[1935]: I0510 00:53:14.863505 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-host-proc-sys-kernel\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.863611 kubelet[1935]: I0510 00:53:14.863542 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k56w\" (UniqueName: \"kubernetes.io/projected/8f849db4-0112-4f38-84c2-cf43ceafd6d2-kube-api-access-6k56w\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.863611 kubelet[1935]: I0510 00:53:14.863583 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-host-proc-sys-net\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863613 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-lib-modules\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863645 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-xtables-lock\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863672 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-hostproc\") 
pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863718 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-etc-cni-netd\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863786 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-cilium-cgroup\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863851 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f849db4-0112-4f38-84c2-cf43ceafd6d2-cni-path\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863909 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f849db4-0112-4f38-84c2-cf43ceafd6d2-cilium-ipsec-secrets\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:14.864131 kubelet[1935]: I0510 00:53:14.863948 1935 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f849db4-0112-4f38-84c2-cf43ceafd6d2-hubble-tls\") pod \"cilium-fzsv8\" (UID: \"8f849db4-0112-4f38-84c2-cf43ceafd6d2\") " pod="kube-system/cilium-fzsv8" May 10 00:53:15.106829 env[1199]: 
time="2025-05-10T00:53:15.106717751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fzsv8,Uid:8f849db4-0112-4f38-84c2-cf43ceafd6d2,Namespace:kube-system,Attempt:0,}" May 10 00:53:15.129935 env[1199]: time="2025-05-10T00:53:15.129815260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:53:15.130247 env[1199]: time="2025-05-10T00:53:15.129897045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:53:15.130247 env[1199]: time="2025-05-10T00:53:15.129914917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:53:15.130623 env[1199]: time="2025-05-10T00:53:15.130569122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e pid=3833 runtime=io.containerd.runc.v2 May 10 00:53:15.153342 systemd[1]: Started cri-containerd-f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e.scope. May 10 00:53:15.159788 systemd[1]: run-containerd-runc-k8s.io-f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e-runc.ERPfBp.mount: Deactivated successfully. 
May 10 00:53:15.201447 env[1199]: time="2025-05-10T00:53:15.201390700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fzsv8,Uid:8f849db4-0112-4f38-84c2-cf43ceafd6d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\"" May 10 00:53:15.206803 env[1199]: time="2025-05-10T00:53:15.206763077Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:53:15.222667 env[1199]: time="2025-05-10T00:53:15.222610029Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e\"" May 10 00:53:15.224574 env[1199]: time="2025-05-10T00:53:15.224482214Z" level=info msg="StartContainer for \"ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e\"" May 10 00:53:15.247245 systemd[1]: Started cri-containerd-ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e.scope. May 10 00:53:15.299255 env[1199]: time="2025-05-10T00:53:15.299196457Z" level=info msg="StartContainer for \"ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e\" returns successfully" May 10 00:53:15.348574 systemd[1]: cri-containerd-ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e.scope: Deactivated successfully. 
May 10 00:53:15.390330 env[1199]: time="2025-05-10T00:53:15.390174815Z" level=info msg="shim disconnected" id=ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e May 10 00:53:15.390330 env[1199]: time="2025-05-10T00:53:15.390241740Z" level=warning msg="cleaning up after shim disconnected" id=ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e namespace=k8s.io May 10 00:53:15.390330 env[1199]: time="2025-05-10T00:53:15.390258955Z" level=info msg="cleaning up dead shim" May 10 00:53:15.413115 kubelet[1935]: W0510 00:53:15.412847 1935 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90923b92_db26_42f9_9805_960d6e3551ab.slice/cri-containerd-3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1.scope WatchSource:0}: container "3f9d0b2cd7f34de2ab135de9ac60ffd9922cd1404bf5f139bbbaa7ad742934d1" in namespace "k8s.io": not found May 10 00:53:15.418249 env[1199]: time="2025-05-10T00:53:15.418173035Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3920 runtime=io.containerd.runc.v2\n" May 10 00:53:15.710968 env[1199]: time="2025-05-10T00:53:15.710527508Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:53:15.732167 env[1199]: time="2025-05-10T00:53:15.732104217Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864\"" May 10 00:53:15.733360 env[1199]: time="2025-05-10T00:53:15.733324873Z" level=info msg="StartContainer for \"6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864\"" May 10 00:53:15.759114 
systemd[1]: Started cri-containerd-6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864.scope. May 10 00:53:15.805295 env[1199]: time="2025-05-10T00:53:15.805233898Z" level=info msg="StartContainer for \"6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864\" returns successfully" May 10 00:53:15.825890 systemd[1]: cri-containerd-6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864.scope: Deactivated successfully. May 10 00:53:15.858008 env[1199]: time="2025-05-10T00:53:15.857934253Z" level=info msg="shim disconnected" id=6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864 May 10 00:53:15.858426 env[1199]: time="2025-05-10T00:53:15.858391868Z" level=warning msg="cleaning up after shim disconnected" id=6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864 namespace=k8s.io May 10 00:53:15.858557 env[1199]: time="2025-05-10T00:53:15.858528058Z" level=info msg="cleaning up dead shim" May 10 00:53:15.879586 env[1199]: time="2025-05-10T00:53:15.879476805Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3984 runtime=io.containerd.runc.v2\n" May 10 00:53:16.118067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314428635.mount: Deactivated successfully. May 10 00:53:16.156388 kubelet[1935]: I0510 00:53:16.156338 1935 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90923b92-db26-42f9-9805-960d6e3551ab" path="/var/lib/kubelet/pods/90923b92-db26-42f9-9805-960d6e3551ab/volumes" May 10 00:53:16.715092 env[1199]: time="2025-05-10T00:53:16.715011390Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:53:16.734600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525752811.mount: Deactivated successfully. 
May 10 00:53:16.744516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283670535.mount: Deactivated successfully. May 10 00:53:16.751068 env[1199]: time="2025-05-10T00:53:16.750982441Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65\"" May 10 00:53:16.759563 env[1199]: time="2025-05-10T00:53:16.759509895Z" level=info msg="StartContainer for \"b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65\"" May 10 00:53:16.792008 systemd[1]: Started cri-containerd-b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65.scope. May 10 00:53:16.840925 env[1199]: time="2025-05-10T00:53:16.840830142Z" level=info msg="StartContainer for \"b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65\" returns successfully" May 10 00:53:16.853764 systemd[1]: cri-containerd-b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65.scope: Deactivated successfully. 
May 10 00:53:16.888183 env[1199]: time="2025-05-10T00:53:16.888098701Z" level=info msg="shim disconnected" id=b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65
May 10 00:53:16.888183 env[1199]: time="2025-05-10T00:53:16.888171832Z" level=warning msg="cleaning up after shim disconnected" id=b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65 namespace=k8s.io
May 10 00:53:16.888183 env[1199]: time="2025-05-10T00:53:16.888191901Z" level=info msg="cleaning up dead shim"
May 10 00:53:16.898768 env[1199]: time="2025-05-10T00:53:16.898696300Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4041 runtime=io.containerd.runc.v2\n"
May 10 00:53:17.722196 env[1199]: time="2025-05-10T00:53:17.721906004Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:53:17.740896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472985531.mount: Deactivated successfully.
May 10 00:53:17.750753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830684877.mount: Deactivated successfully.
May 10 00:53:17.757525 env[1199]: time="2025-05-10T00:53:17.757453794Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0\""
May 10 00:53:17.759258 env[1199]: time="2025-05-10T00:53:17.759218556Z" level=info msg="StartContainer for \"7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0\""
May 10 00:53:17.795552 systemd[1]: Started cri-containerd-7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0.scope.
May 10 00:53:17.852008 systemd[1]: cri-containerd-7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0.scope: Deactivated successfully.
May 10 00:53:17.857016 env[1199]: time="2025-05-10T00:53:17.854593764Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f849db4_0112_4f38_84c2_cf43ceafd6d2.slice/cri-containerd-7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0.scope/memory.events\": no such file or directory"
May 10 00:53:17.860219 env[1199]: time="2025-05-10T00:53:17.860153850Z" level=info msg="StartContainer for \"7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0\" returns successfully"
May 10 00:53:17.894190 env[1199]: time="2025-05-10T00:53:17.894096745Z" level=info msg="shim disconnected" id=7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0
May 10 00:53:17.894686 env[1199]: time="2025-05-10T00:53:17.894652419Z" level=warning msg="cleaning up after shim disconnected" id=7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0 namespace=k8s.io
May 10 00:53:17.894874 env[1199]: time="2025-05-10T00:53:17.894841486Z" level=info msg="cleaning up dead shim"
May 10 00:53:17.908176 env[1199]: time="2025-05-10T00:53:17.907990359Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4098 runtime=io.containerd.runc.v2\n"
May 10 00:53:18.300283 kubelet[1935]: E0510 00:53:18.300156 1935 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:53:18.541142 kubelet[1935]: W0510 00:53:18.541079 1935 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f849db4_0112_4f38_84c2_cf43ceafd6d2.slice/cri-containerd-ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e.scope WatchSource:0}: task ca2ada09622bc6ceaad83e260eba098d06ef85db6a9044051529c123634ae36e not found: not found
May 10 00:53:18.730102 env[1199]: time="2025-05-10T00:53:18.730005746Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:53:18.756095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221716287.mount: Deactivated successfully.
May 10 00:53:18.766022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121702109.mount: Deactivated successfully.
May 10 00:53:18.778390 env[1199]: time="2025-05-10T00:53:18.778292803Z" level=info msg="CreateContainer within sandbox \"f48898439ffa8ffdea0f8d531cd9a6141c6f3467ba1da12b1ed61886a32bc14e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734\""
May 10 00:53:18.780198 env[1199]: time="2025-05-10T00:53:18.779349628Z" level=info msg="StartContainer for \"e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734\""
May 10 00:53:18.811945 systemd[1]: Started cri-containerd-e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734.scope.
May 10 00:53:18.874216 env[1199]: time="2025-05-10T00:53:18.874118959Z" level=info msg="StartContainer for \"e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734\" returns successfully"
May 10 00:53:19.786113 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:53:21.652806 kubelet[1935]: W0510 00:53:21.652718 1935 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f849db4_0112_4f38_84c2_cf43ceafd6d2.slice/cri-containerd-6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864.scope WatchSource:0}: task 6eb443b498fe18d04184e8184415cb70f1bed707eb412fc633240865baa1a864 not found: not found
May 10 00:53:21.684528 systemd[1]: run-containerd-runc-k8s.io-e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734-runc.gFlTdS.mount: Deactivated successfully.
May 10 00:53:23.415451 systemd-networkd[1020]: lxc_health: Link UP
May 10 00:53:23.434088 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:53:23.433524 systemd-networkd[1020]: lxc_health: Gained carrier
May 10 00:53:23.988153 systemd[1]: run-containerd-runc-k8s.io-e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734-runc.kqSrV3.mount: Deactivated successfully.
May 10 00:53:24.577593 systemd-networkd[1020]: lxc_health: Gained IPv6LL
May 10 00:53:24.771766 kubelet[1935]: W0510 00:53:24.771684 1935 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f849db4_0112_4f38_84c2_cf43ceafd6d2.slice/cri-containerd-b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65.scope WatchSource:0}: task b81508959bf6e44059b2585e098ecbda5c39f242fda9424e3374acf50d4fda65 not found: not found
May 10 00:53:25.144576 kubelet[1935]: I0510 00:53:25.144470 1935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fzsv8" podStartSLOduration=11.144422938 podStartE2EDuration="11.144422938s" podCreationTimestamp="2025-05-10 00:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:53:19.770734797 +0000 UTC m=+161.916960419" watchObservedRunningTime="2025-05-10 00:53:25.144422938 +0000 UTC m=+167.290648558"
May 10 00:53:26.246475 systemd[1]: run-containerd-runc-k8s.io-e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734-runc.z7zRxX.mount: Deactivated successfully.
May 10 00:53:27.883607 kubelet[1935]: W0510 00:53:27.883492 1935 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f849db4_0112_4f38_84c2_cf43ceafd6d2.slice/cri-containerd-7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0.scope WatchSource:0}: task 7db4f2b06770091eb9b2c156e702b985e6d26214fa2d48cd922bd91d70eee1c0 not found: not found
May 10 00:53:28.577401 systemd[1]: run-containerd-runc-k8s.io-e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734-runc.evdMeO.mount: Deactivated successfully.
May 10 00:53:30.807311 systemd[1]: run-containerd-runc-k8s.io-e0a904a518d11f0a10d531237481817e672e08e92d561887779ed2b33a931734-runc.ieSsMI.mount: Deactivated successfully.
May 10 00:53:31.033958 sshd[3815]: pam_unix(sshd:session): session closed for user core
May 10 00:53:31.046819 systemd[1]: sshd@23-10.244.24.230:22-139.178.68.195:45408.service: Deactivated successfully.
May 10 00:53:31.048174 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:53:31.049804 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit.
May 10 00:53:31.051974 systemd-logind[1189]: Removed session 24.