May 13 07:28:02.928742 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 07:28:02.928788 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 07:28:02.928812 kernel: BIOS-provided physical RAM map: May 13 07:28:02.928834 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 07:28:02.928850 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 07:28:02.928867 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 07:28:02.928887 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 13 07:28:02.928904 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 13 07:28:02.928920 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 07:28:02.928936 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 07:28:02.928952 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 13 07:28:02.928969 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 07:28:02.928989 kernel: NX (Execute Disable) protection: active May 13 07:28:02.929005 kernel: SMBIOS 3.0.0 present. May 13 07:28:02.929026 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 13 07:28:02.929044 kernel: Hypervisor detected: KVM May 13 07:28:02.929061 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 07:28:02.929078 kernel: kvm-clock: cpu 0, msr 107196001, primary cpu clock May 13 07:28:02.929100 kernel: kvm-clock: using sched offset of 3808653627 cycles May 13 07:28:02.929119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 07:28:02.929138 kernel: tsc: Detected 1996.249 MHz processor May 13 07:28:02.929157 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 07:28:02.929176 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 07:28:02.929195 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 13 07:28:02.929213 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 07:28:02.929231 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 13 07:28:02.929249 kernel: ACPI: Early table checksum verification disabled May 13 07:28:02.929271 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 13 07:28:02.929289 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:28:02.929308 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:28:02.929326 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:28:02.929344 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 13 07:28:02.929362 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:28:02.929380 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:28:02.931488 kernel: ACPI: Reserving FACP table memory at [mem 
0xbffe1a49-0xbffe1abc] May 13 07:28:02.931516 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 13 07:28:02.931536 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 13 07:28:02.931554 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 13 07:28:02.931572 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 13 07:28:02.931591 kernel: No NUMA configuration found May 13 07:28:02.931616 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 13 07:28:02.931636 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff] May 13 07:28:02.931658 kernel: Zone ranges: May 13 07:28:02.931677 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 07:28:02.931696 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 07:28:02.931715 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 13 07:28:02.931734 kernel: Movable zone start for each node May 13 07:28:02.931753 kernel: Early memory node ranges May 13 07:28:02.931771 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 07:28:02.931790 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 13 07:28:02.931812 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 13 07:28:02.931831 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 13 07:28:02.931850 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 07:28:02.931869 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 07:28:02.931887 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 13 07:28:02.931906 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 07:28:02.931925 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 07:28:02.931944 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 07:28:02.931963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 07:28:02.931985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 07:28:02.932004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 07:28:02.932023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 07:28:02.932042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 07:28:02.932060 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 07:28:02.932079 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 07:28:02.932098 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 13 07:28:02.932117 kernel: Booting paravirtualized kernel on KVM May 13 07:28:02.932181 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 07:28:02.932215 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 13 07:28:02.932236 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 13 07:28:02.932257 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 13 07:28:02.932277 kernel: pcpu-alloc: [0] 0 1 May 13 07:28:02.932298 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0 May 13 07:28:02.932319 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 07:28:02.932340 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 13 07:28:02.932361 kernel: Policy zone: Normal May 13 07:28:02.932416 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 07:28:02.932446 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 07:28:02.932467 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 07:28:02.932489 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 07:28:02.932510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 07:28:02.932532 kernel: Memory: 3968288K/4193772K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 225224K reserved, 0K cma-reserved) May 13 07:28:02.932554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 07:28:02.932575 kernel: ftrace: allocating 34584 entries in 136 pages May 13 07:28:02.932596 kernel: ftrace: allocated 136 pages with 2 groups May 13 07:28:02.932623 kernel: rcu: Hierarchical RCU implementation. May 13 07:28:02.932646 kernel: rcu: RCU event tracing is enabled. May 13 07:28:02.932668 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 07:28:02.932689 kernel: Rude variant of Tasks RCU enabled. May 13 07:28:02.932710 kernel: Tracing variant of Tasks RCU enabled. May 13 07:28:02.932732 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 07:28:02.932753 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 07:28:02.932774 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 07:28:02.932795 kernel: Console: colour VGA+ 80x25 May 13 07:28:02.932821 kernel: printk: console [tty0] enabled May 13 07:28:02.932842 kernel: printk: console [ttyS0] enabled May 13 07:28:02.932863 kernel: ACPI: Core revision 20210730 May 13 07:28:02.932884 kernel: APIC: Switch to symmetric I/O mode setup May 13 07:28:02.932905 kernel: x2apic enabled May 13 07:28:02.932926 kernel: Switched APIC routing to physical x2apic. May 13 07:28:02.932947 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 07:28:02.932968 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 07:28:02.932990 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 13 07:28:02.933017 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 07:28:02.933038 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 07:28:02.933060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 07:28:02.933089 kernel: Spectre V2 : Mitigation: Retpolines May 13 07:28:02.933118 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 07:28:02.933148 kernel: Speculative Store Bypass: Vulnerable May 13 07:28:02.933178 kernel: x86/fpu: x87 FPU will use FXSAVE May 13 07:28:02.933207 kernel: Freeing SMP alternatives memory: 32K May 13 07:28:02.933226 kernel: pid_max: default: 32768 minimum: 301 May 13 07:28:02.933250 kernel: LSM: Security Framework initializing May 13 07:28:02.933269 kernel: SELinux: Initializing. 
May 13 07:28:02.933288 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 07:28:02.933307 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 07:28:02.933327 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 13 07:28:02.933346 kernel: Performance Events: AMD PMU driver. May 13 07:28:02.933380 kernel: ... version: 0 May 13 07:28:02.934461 kernel: ... bit width: 48 May 13 07:28:02.934477 kernel: ... generic registers: 4 May 13 07:28:02.934492 kernel: ... value mask: 0000ffffffffffff May 13 07:28:02.934507 kernel: ... max period: 00007fffffffffff May 13 07:28:02.934522 kernel: ... fixed-purpose events: 0 May 13 07:28:02.934540 kernel: ... event mask: 000000000000000f May 13 07:28:02.934555 kernel: signal: max sigframe size: 1440 May 13 07:28:02.934570 kernel: rcu: Hierarchical SRCU implementation. May 13 07:28:02.934584 kernel: smp: Bringing up secondary CPUs ... May 13 07:28:02.934599 kernel: x86: Booting SMP configuration: May 13 07:28:02.934617 kernel: .... node #0, CPUs: #1 May 13 07:28:02.934631 kernel: kvm-clock: cpu 1, msr 107196041, secondary cpu clock May 13 07:28:02.934646 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0 May 13 07:28:02.934661 kernel: smp: Brought up 1 node, 2 CPUs May 13 07:28:02.934676 kernel: smpboot: Max logical packages: 2 May 13 07:28:02.934690 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 13 07:28:02.934705 kernel: devtmpfs: initialized May 13 07:28:02.934720 kernel: x86/mm: Memory block size: 128MB May 13 07:28:02.934735 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 07:28:02.934754 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 07:28:02.934769 kernel: pinctrl core: initialized pinctrl subsystem May 13 07:28:02.934783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 07:28:02.934798 kernel: audit: initializing netlink subsys (disabled) May 13 07:28:02.934813 kernel: audit: type=2000 audit(1747121281.898:1): state=initialized audit_enabled=0 res=1 May 13 07:28:02.934828 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 07:28:02.934843 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 07:28:02.934857 kernel: cpuidle: using governor menu May 13 07:28:02.934872 kernel: ACPI: bus type PCI registered May 13 07:28:02.934891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 07:28:02.934906 kernel: dca service started, version 1.12.1 May 13 07:28:02.934921 kernel: PCI: Using configuration type 1 for base access May 13 07:28:02.934936 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 07:28:02.934951 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 07:28:02.934965 kernel: ACPI: Added _OSI(Module Device) May 13 07:28:02.934980 kernel: ACPI: Added _OSI(Processor Device) May 13 07:28:02.934995 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 07:28:02.935009 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 07:28:02.935028 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 07:28:02.935042 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 07:28:02.935057 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 07:28:02.935072 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 07:28:02.935086 kernel: ACPI: Interpreter enabled May 13 07:28:02.935101 kernel: ACPI: PM: (supports S0 S3 S5) May 13 07:28:02.935116 kernel: ACPI: Using IOAPIC for interrupt routing May 13 07:28:02.935131 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 07:28:02.935146 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 07:28:02.935163 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 07:28:02.935412 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 07:28:02.935578 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. May 13 07:28:02.935602 kernel: acpiphp: Slot [3] registered May 13 07:28:02.935618 kernel: acpiphp: Slot [4] registered May 13 07:28:02.935633 kernel: acpiphp: Slot [5] registered May 13 07:28:02.935647 kernel: acpiphp: Slot [6] registered May 13 07:28:02.935662 kernel: acpiphp: Slot [7] registered May 13 07:28:02.935682 kernel: acpiphp: Slot [8] registered May 13 07:28:02.935697 kernel: acpiphp: Slot [9] registered May 13 07:28:02.935711 kernel: acpiphp: Slot [10] registered May 13 07:28:02.935726 kernel: acpiphp: Slot [11] registered May 13 07:28:02.935741 kernel: acpiphp: Slot [12] registered May 13 07:28:02.935756 kernel: acpiphp: Slot [13] registered May 13 07:28:02.935770 kernel: acpiphp: Slot [14] registered May 13 07:28:02.935785 kernel: acpiphp: Slot [15] registered May 13 07:28:02.935799 kernel: acpiphp: Slot [16] registered May 13 07:28:02.935817 kernel: acpiphp: Slot [17] registered May 13 07:28:02.935831 kernel: acpiphp: Slot [18] registered May 13 07:28:02.935846 kernel: acpiphp: Slot [19] registered May 13 07:28:02.935860 kernel: acpiphp: Slot [20] registered May 13 07:28:02.935875 kernel: acpiphp: Slot [21] registered May 13 07:28:02.935889 kernel: acpiphp: Slot [22] registered May 13 07:28:02.935904 kernel: acpiphp: Slot [23] registered May 13 07:28:02.935918 kernel: acpiphp: Slot [24] registered May 13 07:28:02.935933 kernel: acpiphp: Slot [25] registered May 13 07:28:02.935947 kernel: acpiphp: Slot [26] registered May 13 07:28:02.935965 kernel: acpiphp: Slot [27] registered May 13 07:28:02.935979 kernel: acpiphp: Slot [28] registered May 13 07:28:02.935994 kernel: acpiphp: Slot [29] registered May 13 07:28:02.936008 kernel: acpiphp: Slot [30] registered May 13 07:28:02.936023 kernel: acpiphp: Slot [31] registered May 13 07:28:02.936037 kernel: PCI host bridge to bus 0000:00 May 13 07:28:02.936221 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 07:28:02.936360 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 07:28:02.936533 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 07:28:02.936668 kernel: pci_bus 0000:00: root bus 
resource [mem 0xc0000000-0xfebfffff window] May 13 07:28:02.936799 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 13 07:28:02.936930 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 07:28:02.937101 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 13 07:28:02.937271 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 07:28:02.937475 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 07:28:02.937636 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 13 07:28:02.937791 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 07:28:02.937916 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 07:28:02.937999 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 07:28:02.938079 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 07:28:02.938170 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 07:28:02.938257 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 07:28:02.938336 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 07:28:02.942469 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 07:28:02.942561 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 07:28:02.942650 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 13 07:28:02.942735 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 13 07:28:02.942817 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 13 07:28:02.942903 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 07:28:02.942995 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 07:28:02.943079 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 13 07:28:02.943161 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 13 07:28:02.943250 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 13 07:28:02.943354 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 13 07:28:02.944498 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 13 07:28:02.944595 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 13 07:28:02.944683 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 13 07:28:02.944771 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 13 07:28:02.944865 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 13 07:28:02.944952 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 13 07:28:02.945039 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 13 07:28:02.945136 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 13 07:28:02.945224 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 13 07:28:02.945311 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 13 07:28:02.945417 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 13 07:28:02.945430 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 07:28:02.945440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 07:28:02.945448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 07:28:02.945457 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 07:28:02.945469 kernel: ACPI: 
PCI: Interrupt link LNKS configured for IRQ 9 May 13 07:28:02.945477 kernel: iommu: Default domain type: Translated May 13 07:28:02.945486 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 07:28:02.945572 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 07:28:02.945658 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 07:28:02.945744 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 07:28:02.945757 kernel: vgaarb: loaded May 13 07:28:02.945766 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 07:28:02.945775 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 07:28:02.945786 kernel: PTP clock support registered May 13 07:28:02.945795 kernel: PCI: Using ACPI for IRQ routing May 13 07:28:02.945803 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 07:28:02.945812 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 07:28:02.945820 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 13 07:28:02.945829 kernel: clocksource: Switched to clocksource kvm-clock May 13 07:28:02.945837 kernel: VFS: Disk quotas dquot_6.6.0 May 13 07:28:02.945845 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 07:28:02.945854 kernel: pnp: PnP ACPI init May 13 07:28:02.945943 kernel: pnp 00:03: [dma 2] May 13 07:28:02.945955 kernel: pnp: PnP ACPI: found 5 devices May 13 07:28:02.945963 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 07:28:02.945972 kernel: NET: Registered PF_INET protocol family May 13 07:28:02.945980 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 07:28:02.945988 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 07:28:02.945996 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 07:28:02.946004 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 07:28:02.946015 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 07:28:02.946023 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 07:28:02.946031 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 07:28:02.946039 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 07:28:02.946047 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 07:28:02.946055 kernel: NET: Registered PF_XDP protocol family May 13 07:28:02.946126 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 07:28:02.946197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 07:28:02.946267 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 07:28:02.946341 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 13 07:28:02.950447 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 13 07:28:02.950539 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 07:28:02.950623 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 07:28:02.950704 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 13 07:28:02.950716 kernel: PCI: CLS 0 bytes, default 64 May 13 07:28:02.950725 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 07:28:02.950733 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) 
May 13 07:28:02.950745 kernel: Initialise system trusted keyrings May 13 07:28:02.950753 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 07:28:02.950761 kernel: Key type asymmetric registered May 13 07:28:02.950770 kernel: Asymmetric key parser 'x509' registered May 13 07:28:02.950777 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 07:28:02.950785 kernel: io scheduler mq-deadline registered May 13 07:28:02.950793 kernel: io scheduler kyber registered May 13 07:28:02.950801 kernel: io scheduler bfq registered May 13 07:28:02.950809 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 07:28:02.950819 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 07:28:02.950827 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 07:28:02.950835 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 07:28:02.950843 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 07:28:02.950851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 07:28:02.950859 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 07:28:02.950867 kernel: random: crng init done May 13 07:28:02.950875 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 07:28:02.950883 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 07:28:02.950893 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 07:28:02.950901 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 07:28:02.950981 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 07:28:02.951055 kernel: rtc_cmos 00:04: registered as rtc0 May 13 07:28:02.951128 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T07:28:02 UTC (1747121282) May 13 07:28:02.951200 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 13 07:28:02.951211 kernel: NET: Registered PF_INET6 protocol family May 13 07:28:02.951219 kernel: Segment Routing with IPv6 May 13 07:28:02.951230 kernel: In-situ OAM (IOAM) with IPv6 May 13 07:28:02.951238 kernel: NET: Registered PF_PACKET protocol family May 13 07:28:02.951246 kernel: Key type dns_resolver registered May 13 07:28:02.951254 kernel: IPI shorthand broadcast: enabled May 13 07:28:02.951262 kernel: sched_clock: Marking stable (796013398, 159766630)->(1015758485, -59978457) May 13 07:28:02.951271 kernel: registered taskstats version 1 May 13 07:28:02.951279 kernel: Loading compiled-in X.509 certificates May 13 07:28:02.951288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 07:28:02.951296 kernel: Key type .fscrypt registered May 13 07:28:02.951305 kernel: Key type fscrypt-provisioning registered May 13 07:28:02.951313 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 07:28:02.951322 kernel: ima: Allocated hash algorithm: sha1 May 13 07:28:02.951330 kernel: ima: No architecture policies found May 13 07:28:02.951338 kernel: clk: Disabling unused clocks May 13 07:28:02.951346 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 07:28:02.951353 kernel: Write protecting the kernel read-only data: 28672k May 13 07:28:02.951362 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 07:28:02.951372 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 07:28:02.951393 kernel: Run /init as init process May 13 07:28:02.951401 kernel: with arguments: May 13 07:28:02.951409 kernel: /init May 13 07:28:02.951417 kernel: with environment: May 13 07:28:02.951425 kernel: HOME=/ May 13 07:28:02.951432 kernel: TERM=linux May 13 07:28:02.951440 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 07:28:02.951451 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 07:28:02.951464 systemd[1]: Detected virtualization kvm. May 13 07:28:02.951473 systemd[1]: Detected architecture x86-64. May 13 07:28:02.951482 systemd[1]: Running in initrd. May 13 07:28:02.951490 systemd[1]: No hostname configured, using default hostname. May 13 07:28:02.951499 systemd[1]: Hostname set to <linux>. May 13 07:28:02.951508 systemd[1]: Initializing machine ID from VM UUID. May 13 07:28:02.951517 systemd[1]: Queued start job for default target initrd.target. May 13 07:28:02.951527 systemd[1]: Started systemd-ask-password-console.path. May 13 07:28:02.951535 systemd[1]: Reached target cryptsetup.target. May 13 07:28:02.951544 systemd[1]: Reached target paths.target. May 13 07:28:02.951552 systemd[1]: Reached target slices.target. May 13 07:28:02.951560 systemd[1]: Reached target swap.target. May 13 07:28:02.951568 systemd[1]: Reached target timers.target. May 13 07:28:02.951577 systemd[1]: Listening on iscsid.socket. May 13 07:28:02.951586 systemd[1]: Listening on iscsiuio.socket. May 13 07:28:02.951596 systemd[1]: Listening on systemd-journald-audit.socket. May 13 07:28:02.951612 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 07:28:02.951623 systemd[1]: Listening on systemd-journald.socket. May 13 07:28:02.951631 systemd[1]: Listening on systemd-networkd.socket. May 13 07:28:02.951640 systemd[1]: Listening on systemd-udevd-control.socket. May 13 07:28:02.951649 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 07:28:02.951659 systemd[1]: Reached target sockets.target. May 13 07:28:02.951668 systemd[1]: Starting kmod-static-nodes.service... May 13 07:28:02.951676 systemd[1]: Finished network-cleanup.service. May 13 07:28:02.951685 systemd[1]: Starting systemd-fsck-usr.service... May 13 07:28:02.951694 systemd[1]: Starting systemd-journald.service... May 13 07:28:02.951703 systemd[1]: Starting systemd-modules-load.service... May 13 07:28:02.951712 systemd[1]: Starting systemd-resolved.service... May 13 07:28:02.951721 systemd[1]: Starting systemd-vconsole-setup.service... May 13 07:28:02.951730 systemd[1]: Finished kmod-static-nodes.service. May 13 07:28:02.951740 systemd[1]: Finished systemd-fsck-usr.service. 
May 13 07:28:02.951751 systemd-journald[185]: Journal started May 13 07:28:02.951793 systemd-journald[185]: Runtime Journal (/run/log/journal/d2aa4c522b734243aebc0f44b03529f3) is 8.0M, max 78.4M, 70.4M free. May 13 07:28:02.910427 systemd-modules-load[186]: Inserted module 'overlay' May 13 07:28:02.979700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 07:28:02.979723 systemd[1]: Started systemd-journald.service. May 13 07:28:02.979737 kernel: audit: type=1130 audit(1747121282.972:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.959094 systemd-resolved[187]: Positive Trust Anchors: May 13 07:28:02.985195 kernel: audit: type=1130 audit(1747121282.979:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.959104 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 07:28:02.992104 kernel: audit: type=1130 audit(1747121282.985:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.992135 kernel: Bridge firewalling registered May 13 07:28:02.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.959139 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 07:28:02.999489 kernel: audit: type=1130 audit(1747121282.992:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.962075 systemd-resolved[187]: Defaulting to hostname 'linux'. May 13 07:28:02.980191 systemd[1]: Started systemd-resolved.service. May 13 07:28:02.985936 systemd[1]: Finished systemd-vconsole-setup.service. 
May 13 07:28:03.016677 kernel: audit: type=1130 audit(1747121283.009:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:02.991885 systemd-modules-load[186]: Inserted module 'br_netfilter' May 13 07:28:02.992754 systemd[1]: Reached target nss-lookup.target. May 13 07:28:03.000657 systemd[1]: Starting dracut-cmdline-ask.service... May 13 07:28:03.001796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 07:28:03.009405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 07:28:03.023437 kernel: SCSI subsystem initialized May 13 07:28:03.020751 systemd[1]: Finished dracut-cmdline-ask.service. May 13 07:28:03.023005 systemd[1]: Starting dracut-cmdline.service... May 13 07:28:03.029707 kernel: audit: type=1130 audit(1747121283.020:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.036688 dracut-cmdline[202]: dracut-dracut-053 May 13 07:28:03.039791 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 07:28:03.049840 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 07:28:03.049876 kernel: device-mapper: uevent: version 1.0.3 May 13 07:28:03.052306 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 07:28:03.057488 systemd-modules-load[186]: Inserted module 'dm_multipath' May 13 07:28:03.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.058791 systemd[1]: Finished systemd-modules-load.service. May 13 07:28:03.067829 kernel: audit: type=1130 audit(1747121283.058:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.059949 systemd[1]: Starting systemd-sysctl.service... May 13 07:28:03.071502 systemd[1]: Finished systemd-sysctl.service. May 13 07:28:03.077747 kernel: audit: type=1130 audit(1747121283.071:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:28:03.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.116410 kernel: Loading iSCSI transport class v2.0-870. May 13 07:28:03.136404 kernel: iscsi: registered transport (tcp) May 13 07:28:03.162550 kernel: iscsi: registered transport (qla4xxx) May 13 07:28:03.162607 kernel: QLogic iSCSI HBA Driver May 13 07:28:03.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.210037 systemd[1]: Finished dracut-cmdline.service. May 13 07:28:03.222693 kernel: audit: type=1130 audit(1747121283.210:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.211428 systemd[1]: Starting dracut-pre-udev.service... May 13 07:28:03.277516 kernel: raid6: sse2x4 gen() 12974 MB/s May 13 07:28:03.295491 kernel: raid6: sse2x4 xor() 7172 MB/s May 13 07:28:03.313516 kernel: raid6: sse2x2 gen() 14072 MB/s May 13 07:28:03.331541 kernel: raid6: sse2x2 xor() 8621 MB/s May 13 07:28:03.349517 kernel: raid6: sse2x1 gen() 11027 MB/s May 13 07:28:03.371787 kernel: raid6: sse2x1 xor() 6902 MB/s May 13 07:28:03.371901 kernel: raid6: using algorithm sse2x2 gen() 14072 MB/s May 13 07:28:03.371930 kernel: raid6: .... xor() 8621 MB/s, rmw enabled May 13 07:28:03.373012 kernel: raid6: using ssse3x2 recovery algorithm May 13 07:28:03.389989 kernel: xor: measuring software checksum speed May 13 07:28:03.390094 kernel: prefetch64-sse : 16982 MB/sec May 13 07:28:03.390143 kernel: generic_sse : 15451 MB/sec May 13 07:28:03.391270 kernel: xor: using function: prefetch64-sse (16982 MB/sec) May 13 07:28:03.507480 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 07:28:03.522642 systemd[1]: Finished dracut-pre-udev.service. May 13 07:28:03.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.523000 audit: BPF prog-id=7 op=LOAD May 13 07:28:03.523000 audit: BPF prog-id=8 op=LOAD May 13 07:28:03.524145 systemd[1]: Starting systemd-udevd.service... May 13 07:28:03.537259 systemd-udevd[385]: Using default interface naming scheme 'v252'. May 13 07:28:03.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.541941 systemd[1]: Started systemd-udevd.service. May 13 07:28:03.547120 systemd[1]: Starting dracut-pre-trigger.service... May 13 07:28:03.563891 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation May 13 07:28:03.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.611738 systemd[1]: Finished dracut-pre-trigger.service. May 13 07:28:03.614752 systemd[1]: Starting systemd-udev-trigger.service... May 13 07:28:03.659452 systemd[1]: Finished systemd-udev-trigger.service. 
May 13 07:28:03.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:03.721480 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 13 07:28:03.748351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 07:28:03.748371 kernel: GPT:17805311 != 20971519 May 13 07:28:03.748402 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 07:28:03.748415 kernel: GPT:17805311 != 20971519 May 13 07:28:03.748426 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 07:28:03.748443 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:28:03.778408 kernel: libata version 3.00 loaded. May 13 07:28:03.782406 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456) May 13 07:28:03.789415 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 07:28:03.797319 kernel: scsi host0: ata_piix May 13 07:28:03.797471 kernel: scsi host1: ata_piix May 13 07:28:03.797590 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 13 07:28:03.797604 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 13 07:28:03.791196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 07:28:03.839187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 07:28:03.842720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 07:28:03.843313 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 07:28:03.848875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 07:28:03.851530 systemd[1]: Starting disk-uuid.service... May 13 07:28:03.866700 disk-uuid[472]: Primary Header is updated. May 13 07:28:03.866700 disk-uuid[472]: Secondary Entries is updated. May 13 07:28:03.866700 disk-uuid[472]: Secondary Header is updated. May 13 07:28:03.874429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:28:03.881457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:28:04.902298 disk-uuid[473]: The operation has completed successfully. May 13 07:28:04.904066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:28:04.966822 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 07:28:04.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:04.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:04.967031 systemd[1]: Finished disk-uuid.service. May 13 07:28:05.001670 systemd[1]: Starting verity-setup.service... May 13 07:28:05.019483 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 13 07:28:05.113150 systemd[1]: Found device dev-mapper-usr.device. May 13 07:28:05.117265 systemd[1]: Mounting sysusr-usr.mount... May 13 07:28:05.124328 systemd[1]: Finished verity-setup.service. May 13 07:28:05.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:28:05.251411 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 07:28:05.252591 systemd[1]: Mounted sysusr-usr.mount. May 13 07:28:05.254429 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 07:28:05.256109 systemd[1]: Starting ignition-setup.service... May 13 07:28:05.258900 systemd[1]: Starting parse-ip-for-networkd.service... May 13 07:28:05.274604 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 07:28:05.274658 kernel: BTRFS info (device vda6): using free space tree May 13 07:28:05.274671 kernel: BTRFS info (device vda6): has skinny extents May 13 07:28:05.290560 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 07:28:05.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.305470 systemd[1]: Finished ignition-setup.service. May 13 07:28:05.306750 systemd[1]: Starting ignition-fetch-offline.service... May 13 07:28:05.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.381000 audit: BPF prog-id=9 op=LOAD May 13 07:28:05.380129 systemd[1]: Finished parse-ip-for-networkd.service. May 13 07:28:05.382061 systemd[1]: Starting systemd-networkd.service... May 13 07:28:05.410554 systemd-networkd[644]: lo: Link UP May 13 07:28:05.410566 systemd-networkd[644]: lo: Gained carrier May 13 07:28:05.411275 systemd-networkd[644]: Enumeration completed May 13 07:28:05.411727 systemd-networkd[644]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 07:28:05.413490 systemd-networkd[644]: eth0: Link UP May 13 07:28:05.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.413495 systemd-networkd[644]: eth0: Gained carrier May 13 07:28:05.413638 systemd[1]: Started systemd-networkd.service. May 13 07:28:05.414795 systemd[1]: Reached target network.target. May 13 07:28:05.416041 systemd[1]: Starting iscsiuio.service... May 13 07:28:05.422526 systemd[1]: Started iscsiuio.service. May 13 07:28:05.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.423978 systemd[1]: Starting iscsid.service... May 13 07:28:05.433079 iscsid[654]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 07:28:05.434045 iscsid[654]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 07:28:05.434045 iscsid[654]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 07:28:05.434045 iscsid[654]: If using hardware iscsi like qla4xxx this message can be ignored. 
May 13 07:28:05.434045 iscsid[654]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 07:28:05.434045 iscsid[654]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 07:28:05.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.435965 systemd-networkd[644]: eth0: DHCPv4 address 172.24.4.239/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 07:28:05.436587 systemd[1]: Started iscsid.service. May 13 07:28:05.439302 systemd[1]: Starting dracut-initqueue.service... May 13 07:28:05.461092 systemd[1]: Finished dracut-initqueue.service. May 13 07:28:05.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.461841 systemd[1]: Reached target remote-fs-pre.target. May 13 07:28:05.462763 systemd[1]: Reached target remote-cryptsetup.target. May 13 07:28:05.464720 systemd[1]: Reached target remote-fs.target. May 13 07:28:05.466490 systemd[1]: Starting dracut-pre-mount.service... May 13 07:28:05.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.475671 systemd[1]: Finished dracut-pre-mount.service. May 13 07:28:05.582563 ignition[560]: Ignition 2.14.0 May 13 07:28:05.583560 ignition[560]: Stage: fetch-offline May 13 07:28:05.583749 ignition[560]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:28:05.583798 ignition[560]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:28:05.586469 ignition[560]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:28:05.586720 ignition[560]: parsed url from cmdline: "" May 13 07:28:05.586734 ignition[560]: no config URL provided May 13 07:28:05.586752 ignition[560]: reading system config file "/usr/lib/ignition/user.ign" May 13 07:28:05.586781 ignition[560]: no config at "/usr/lib/ignition/user.ign" May 13 07:28:05.586797 ignition[560]: failed to fetch config: resource requires networking May 13 07:28:05.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:05.587756 ignition[560]: Ignition finished successfully May 13 07:28:05.589455 systemd[1]: Finished ignition-fetch-offline.service. May 13 07:28:05.590827 systemd[1]: Starting ignition-fetch.service... May 13 07:28:05.591205 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.239 May 13 07:28:05.599546 ignition[668]: Ignition 2.14.0 May 13 07:28:05.591227 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. 
May 13 07:28:05.599553 ignition[668]: Stage: fetch May 13 07:28:05.599655 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:28:05.599674 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:28:05.600576 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:28:05.600665 ignition[668]: parsed url from cmdline: "" May 13 07:28:05.600669 ignition[668]: no config URL provided May 13 07:28:05.600674 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" May 13 07:28:05.600682 ignition[668]: no config at "/usr/lib/ignition/user.ign" May 13 07:28:05.606837 ignition[668]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 13 07:28:05.606865 ignition[668]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 13 07:28:05.612569 ignition[668]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 13 07:28:06.004291 ignition[668]: GET result: OK May 13 07:28:06.004591 ignition[668]: parsing config with SHA512: e847557865e1cc4b4487b519c38a91abb6303a872804963a8f5144d11665b6c53f212c26c8163128952630edcb9efd25206ba82f7f55583206198fd8bfbd24d4 May 13 07:28:06.024539 unknown[668]: fetched base config from "system" May 13 07:28:06.024573 unknown[668]: fetched base config from "system" May 13 07:28:06.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:06.025853 ignition[668]: fetch: fetch complete May 13 07:28:06.024598 unknown[668]: fetched user config from "openstack" May 13 07:28:06.025868 ignition[668]: fetch: fetch passed May 13 07:28:06.028808 systemd[1]: Finished ignition-fetch.service. May 13 07:28:06.025949 ignition[668]: Ignition finished successfully May 13 07:28:06.039370 systemd[1]: Starting ignition-kargs.service... May 13 07:28:06.066231 ignition[674]: Ignition 2.14.0 May 13 07:28:06.066263 ignition[674]: Stage: kargs May 13 07:28:06.066600 ignition[674]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:28:06.066643 ignition[674]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:28:06.068999 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:28:06.072138 ignition[674]: kargs: kargs passed May 13 07:28:06.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:06.074214 systemd[1]: Finished ignition-kargs.service. May 13 07:28:06.072253 ignition[674]: Ignition finished successfully May 13 07:28:06.078875 systemd[1]: Starting ignition-disks.service... 
May 13 07:28:06.097453 ignition[679]: Ignition 2.14.0 May 13 07:28:06.097485 ignition[679]: Stage: disks May 13 07:28:06.097738 ignition[679]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:28:06.097794 ignition[679]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:28:06.100496 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:28:06.103654 ignition[679]: disks: disks passed May 13 07:28:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:06.105735 systemd[1]: Finished ignition-disks.service. May 13 07:28:06.103773 ignition[679]: Ignition finished successfully May 13 07:28:06.107331 systemd[1]: Reached target initrd-root-device.target. May 13 07:28:06.108536 systemd[1]: Reached target local-fs-pre.target. May 13 07:28:06.109866 systemd[1]: Reached target local-fs.target. May 13 07:28:06.112338 systemd[1]: Reached target sysinit.target. May 13 07:28:06.114857 systemd[1]: Reached target basic.target. May 13 07:28:06.119095 systemd[1]: Starting systemd-fsck-root.service... May 13 07:28:06.150708 systemd-fsck[686]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks May 13 07:28:06.163990 systemd[1]: Finished systemd-fsck-root.service. May 13 07:28:06.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:06.166937 systemd[1]: Mounting sysroot.mount... May 13 07:28:06.190669 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 07:28:06.189615 systemd[1]: Mounted sysroot.mount. May 13 07:28:06.191797 systemd[1]: Reached target initrd-root-fs.target. May 13 07:28:06.196102 systemd[1]: Mounting sysroot-usr.mount... May 13 07:28:06.201461 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 07:28:06.205144 systemd[1]: Starting flatcar-openstack-hostname.service... May 13 07:28:06.207891 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 07:28:06.208735 systemd[1]: Reached target ignition-diskful.target. May 13 07:28:06.217323 systemd[1]: Mounted sysroot-usr.mount. May 13 07:28:06.226227 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 07:28:06.232577 systemd[1]: Starting initrd-setup-root.service... May 13 07:28:06.249488 initrd-setup-root[698]: cut: /sysroot/etc/passwd: No such file or directory May 13 07:28:06.261446 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (693) May 13 07:28:06.272392 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 07:28:06.272433 kernel: BTRFS info (device vda6): using free space tree May 13 07:28:06.272446 kernel: BTRFS info (device vda6): has skinny extents May 13 07:28:06.275104 initrd-setup-root[722]: cut: /sysroot/etc/group: No such file or directory May 13 07:28:06.288103 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 13 07:28:06.289543 initrd-setup-root[732]: cut: /sysroot/etc/shadow: No such file or directory
May 13 07:28:06.296770 initrd-setup-root[740]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 07:28:06.486032 systemd-networkd[644]: eth0: Gained IPv6LL
May 13 07:28:06.765782 systemd[1]: Finished initrd-setup-root.service.
May 13 07:28:06.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:06.768906 systemd[1]: Starting ignition-mount.service...
May 13 07:28:06.779104 systemd[1]: Starting sysroot-boot.service...
May 13 07:28:06.791590 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 13 07:28:06.791822 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 13 07:28:06.826508 ignition[761]: INFO : Ignition 2.14.0
May 13 07:28:06.826508 ignition[761]: INFO : Stage: mount
May 13 07:28:06.827820 ignition[761]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 07:28:06.827820 ignition[761]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 07:28:06.827820 ignition[761]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 07:28:06.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:06.832893 ignition[761]: INFO : mount: mount passed
May 13 07:28:06.832893 ignition[761]: INFO : Ignition finished successfully
May 13 07:28:06.829644 systemd[1]: Finished ignition-mount.service.
May 13 07:28:06.841558 systemd[1]: Finished sysroot-boot.service.
May 13 07:28:06.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:06.858358 coreos-metadata[692]: May 13 07:28:06.858 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 07:28:06.876596 coreos-metadata[692]: May 13 07:28:06.876 INFO Fetch successful
May 13 07:28:06.877356 coreos-metadata[692]: May 13 07:28:06.877 INFO wrote hostname ci-3510-3-7-n-1ba5f14697.novalocal to /sysroot/etc/hostname
May 13 07:28:06.881084 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 07:28:06.881240 systemd[1]: Finished flatcar-openstack-hostname.service.
May 13 07:28:06.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:06.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:06.884329 systemd[1]: Starting ignition-files.service...
May 13 07:28:06.893374 systemd[1]: Mounting sysroot-usr-share-oem.mount...
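The coreos-metadata[692] lines above fetch the instance hostname from the EC2-compatible metadata endpoint and write it into the mounted sysroot. A hand-rolled Python equivalent, purely as an illustration of what the unit automates (URL and destination path are the ones in the log):

    import urllib.request

    # Endpoint and target file taken from the coreos-metadata[692] lines above.
    with urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/hostname", timeout=10) as resp:
        hostname = resp.read().decode().strip()

    # Written under /sysroot because the real root is not yet switched to.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")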
May 13 07:28:06.903459 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (770)
May 13 07:28:06.907475 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 07:28:06.907515 kernel: BTRFS info (device vda6): using free space tree
May 13 07:28:06.907539 kernel: BTRFS info (device vda6): has skinny extents
May 13 07:28:06.918765 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 07:28:06.939880 ignition[789]: INFO : Ignition 2.14.0
May 13 07:28:06.939880 ignition[789]: INFO : Stage: files
May 13 07:28:06.941127 ignition[789]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 07:28:06.941127 ignition[789]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 07:28:06.942859 ignition[789]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 07:28:06.945910 ignition[789]: DEBUG : files: compiled without relabeling support, skipping
May 13 07:28:06.946910 ignition[789]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 07:28:06.946910 ignition[789]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 07:28:06.956046 ignition[789]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 07:28:06.956841 ignition[789]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 07:28:06.957967 unknown[789]: wrote ssh authorized keys file for user: core
May 13 07:28:06.958657 ignition[789]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 07:28:06.959424 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 07:28:06.959424 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 13 07:28:08.064917 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 07:28:08.406551 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 07:28:08.408195 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 07:28:08.409160 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 07:28:09.136776 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 07:28:09.586500 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 07:28:09.617611 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 07:28:09.617611 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 07:28:09.617611 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 13 07:28:10.153452 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 07:28:12.603285 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 07:28:12.604687 ignition[789]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 13 07:28:12.605445 ignition[789]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 13 07:28:12.606180 ignition[789]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
May 13 07:28:12.607469 ignition[789]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 07:28:12.609102 ignition[789]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 07:28:12.609102 ignition[789]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
May 13 07:28:12.609102 ignition[789]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 13 07:28:12.615590 ignition[789]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 13 07:28:12.615590 ignition[789]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 13 07:28:12.615590 ignition[789]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 13 07:28:12.615590 ignition[789]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 07:28:12.615590 ignition[789]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 07:28:12.615590 ignition[789]: INFO : files: files passed
May 13 07:28:12.615590 ignition[789]: INFO : Ignition finished successfully
May 13 07:28:12.675173 kernel: kauditd_printk_skb: 27 callbacks suppressed
May 13 07:28:12.675215 kernel: audit: type=1130 audit(1747121292.618:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.675259 kernel: audit: type=1130 audit(1747121292.641:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.675287 kernel: audit: type=1130 audit(1747121292.653:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.675312 kernel: audit: type=1131 audit(1747121292.653:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.616816 systemd[1]: Finished ignition-files.service.
May 13 07:28:12.619726 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 07:28:12.678315 initrd-setup-root-after-ignition[812]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 07:28:12.634477 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 07:28:12.635119 systemd[1]: Starting ignition-quench.service...
May 13 07:28:12.639644 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 07:28:12.641929 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 07:28:12.641998 systemd[1]: Finished ignition-quench.service.
May 13 07:28:12.654295 systemd[1]: Reached target ignition-complete.target.
May 13 07:28:12.676203 systemd[1]: Starting initrd-parse-etc.service...
May 13 07:28:12.697370 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 07:28:12.697609 systemd[1]: Finished initrd-parse-etc.service.
May 13 07:28:12.708483 kernel: audit: type=1130 audit(1747121292.698:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.708505 kernel: audit: type=1131 audit(1747121292.698:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.699236 systemd[1]: Reached target initrd-fs.target.
May 13 07:28:12.709497 systemd[1]: Reached target initrd.target.
May 13 07:28:12.710967 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 07:28:12.712577 systemd[1]: Starting dracut-pre-pivot.service...
May 13 07:28:12.725312 systemd[1]: Finished dracut-pre-pivot.service.
May 13 07:28:12.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.732325 systemd[1]: Starting initrd-cleanup.service...
May 13 07:28:12.733903 kernel: audit: type=1130 audit(1747121292.727:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.744822 systemd[1]: Stopped target nss-lookup.target.
May 13 07:28:12.745955 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 07:28:12.747050 systemd[1]: Stopped target timers.target.
May 13 07:28:12.748067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 07:28:12.748197 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 07:28:12.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.749971 systemd[1]: Stopped target initrd.target.
May 13 07:28:12.755815 kernel: audit: type=1131 audit(1747121292.749:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.755543 systemd[1]: Stopped target basic.target.
May 13 07:28:12.757044 systemd[1]: Stopped target ignition-complete.target.
May 13 07:28:12.758581 systemd[1]: Stopped target ignition-diskful.target.
May 13 07:28:12.760047 systemd[1]: Stopped target initrd-root-device.target.
May 13 07:28:12.761609 systemd[1]: Stopped target remote-fs.target.
May 13 07:28:12.763021 systemd[1]: Stopped target remote-fs-pre.target.
May 13 07:28:12.764607 systemd[1]: Stopped target sysinit.target.
May 13 07:28:12.766004 systemd[1]: Stopped target local-fs.target.
May 13 07:28:12.767377 systemd[1]: Stopped target local-fs-pre.target.
May 13 07:28:12.768964 systemd[1]: Stopped target swap.target.
May 13 07:28:12.770270 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 07:28:12.776568 kernel: audit: type=1131 audit(1747121292.771:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.770544 systemd[1]: Stopped dracut-pre-mount.service.
May 13 07:28:12.771937 systemd[1]: Stopped target cryptsetup.target.
May 13 07:28:12.784190 kernel: audit: type=1131 audit(1747121292.778:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.777727 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 07:28:12.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.778009 systemd[1]: Stopped dracut-initqueue.service.
May 13 07:28:12.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.779523 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 07:28:12.779760 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 07:28:12.785545 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 07:28:12.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.799337 iscsid[654]: iscsid shutting down.
May 13 07:28:12.785777 systemd[1]: Stopped ignition-files.service.
May 13 07:28:12.788578 systemd[1]: Stopping ignition-mount.service...
May 13 07:28:12.789983 systemd[1]: Stopping iscsid.service...
May 13 07:28:12.793812 systemd[1]: Stopping sysroot-boot.service...
May 13 07:28:12.794257 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 07:28:12.794377 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 07:28:12.795033 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 07:28:12.795175 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 07:28:12.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.808688 systemd[1]: iscsid.service: Deactivated successfully.
May 13 07:28:12.808782 systemd[1]: Stopped iscsid.service.
May 13 07:28:12.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.811328 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 07:28:12.811422 systemd[1]: Finished initrd-cleanup.service.
May 13 07:28:12.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.814289 ignition[827]: INFO : Ignition 2.14.0
May 13 07:28:12.814289 ignition[827]: INFO : Stage: umount
May 13 07:28:12.813887 systemd[1]: Stopping iscsiuio.service...
May 13 07:28:12.818178 ignition[827]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 07:28:12.818178 ignition[827]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 07:28:12.818178 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 07:28:12.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.817636 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 07:28:12.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.822174 ignition[827]: INFO : umount: umount passed
May 13 07:28:12.822174 ignition[827]: INFO : Ignition finished successfully
May 13 07:28:12.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.817717 systemd[1]: Stopped iscsiuio.service.
May 13 07:28:12.819665 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 07:28:12.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.819735 systemd[1]: Stopped ignition-mount.service.
May 13 07:28:12.820618 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 07:28:12.820657 systemd[1]: Stopped ignition-disks.service.
May 13 07:28:12.821671 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 07:28:12.821706 systemd[1]: Stopped ignition-kargs.service.
May 13 07:28:12.822576 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 07:28:12.822611 systemd[1]: Stopped ignition-fetch.service.
May 13 07:28:12.823527 systemd[1]: Stopped target network.target.
May 13 07:28:12.824551 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 07:28:12.824590 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 07:28:12.825441 systemd[1]: Stopped target paths.target.
May 13 07:28:12.826317 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 07:28:12.830423 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 07:28:12.831501 systemd[1]: Stopped target slices.target.
May 13 07:28:12.836543 systemd[1]: Stopped target sockets.target.
May 13 07:28:12.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.837420 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 07:28:12.837453 systemd[1]: Closed iscsid.socket.
May 13 07:28:12.838445 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 07:28:12.838474 systemd[1]: Closed iscsiuio.socket.
May 13 07:28:12.839330 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 07:28:12.839366 systemd[1]: Stopped ignition-setup.service.
May 13 07:28:12.840741 systemd[1]: Stopping systemd-networkd.service...
May 13 07:28:12.841702 systemd[1]: Stopping systemd-resolved.service...
May 13 07:28:12.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.844425 systemd-networkd[644]: eth0: DHCPv6 lease lost
May 13 07:28:12.845219 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 07:28:12.847000 audit: BPF prog-id=9 op=UNLOAD
May 13 07:28:12.845305 systemd[1]: Stopped systemd-networkd.service.
May 13 07:28:12.847275 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 07:28:12.847312 systemd[1]: Closed systemd-networkd.socket.
May 13 07:28:12.849030 systemd[1]: Stopping network-cleanup.service...
May 13 07:28:12.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.851003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 07:28:12.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.851049 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 07:28:12.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.851943 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 07:28:12.851979 systemd[1]: Stopped systemd-sysctl.service.
May 13 07:28:12.853460 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 07:28:12.853501 systemd[1]: Stopped systemd-modules-load.service.
May 13 07:28:12.854333 systemd[1]: Stopping systemd-udevd.service...
May 13 07:28:12.856220 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 07:28:12.858832 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 07:28:12.858927 systemd[1]: Stopped systemd-resolved.service.
May 13 07:28:12.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.860509 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 07:28:12.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.860619 systemd[1]: Stopped systemd-udevd.service.
May 13 07:28:12.861000 audit: BPF prog-id=6 op=UNLOAD
May 13 07:28:12.861966 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 07:28:12.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.862005 systemd[1]: Closed systemd-udevd-control.socket.
May 13 07:28:12.862497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 07:28:12.862526 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 07:28:12.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.863250 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 07:28:12.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.863287 systemd[1]: Stopped dracut-pre-udev.service.
May 13 07:28:12.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.866738 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 07:28:12.866781 systemd[1]: Stopped dracut-cmdline.service.
May 13 07:28:12.867315 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 07:28:12.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.867353 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 07:28:12.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.868456 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 07:28:12.869000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 07:28:12.869047 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 13 07:28:12.875651 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 07:28:12.875697 systemd[1]: Stopped kmod-static-nodes.service.
May 13 07:28:12.876357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 07:28:12.876423 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 07:28:12.878082 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 07:28:12.878148 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 07:28:12.878759 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 07:28:12.878831 systemd[1]: Stopped network-cleanup.service.
May 13 07:28:12.879538 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 07:28:12.879604 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 07:28:12.908034 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 07:28:12.908167 systemd[1]: Stopped sysroot-boot.service.
May 13 07:28:12.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.909265 systemd[1]: Reached target initrd-switch-root.target.
May 13 07:28:12.910087 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 07:28:12.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:12.910128 systemd[1]: Stopped initrd-setup-root.service.
May 13 07:28:12.911610 systemd[1]: Starting initrd-switch-root.service...
May 13 07:28:12.929587 systemd[1]: Switching root.
May 13 07:28:12.947311 systemd-journald[185]: Journal stopped
May 13 07:28:17.245456 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 13 07:28:17.245515 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 07:28:17.245531 kernel: SELinux: Class anon_inode not defined in policy.
May 13 07:28:17.245543 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 07:28:17.245556 kernel: SELinux: policy capability network_peer_controls=1
May 13 07:28:17.245568 kernel: SELinux: policy capability open_perms=1
May 13 07:28:17.245580 kernel: SELinux: policy capability extended_socket_class=1
May 13 07:28:17.245595 kernel: SELinux: policy capability always_check_network=0
May 13 07:28:17.245606 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 07:28:17.245622 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 07:28:17.245633 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 07:28:17.245647 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 07:28:17.245662 systemd[1]: Successfully loaded SELinux policy in 96.543ms.
May 13 07:28:17.245677 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.243ms.
May 13 07:28:17.245692 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 07:28:17.245705 systemd[1]: Detected virtualization kvm.
May 13 07:28:17.245717 systemd[1]: Detected architecture x86-64.
May 13 07:28:17.245730 systemd[1]: Detected first boot.
May 13 07:28:17.245744 systemd[1]: Hostname set to .
May 13 07:28:17.245757 systemd[1]: Initializing machine ID from VM UUID.
May 13 07:28:17.245769 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 07:28:17.245781 systemd[1]: Populated /etc with preset unit settings.
May 13 07:28:17.245794 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 07:28:17.245808 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 07:28:17.245821 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 07:28:17.245839 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 07:28:17.245852 systemd[1]: Stopped initrd-switch-root.service.
May 13 07:28:17.245864 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 07:28:17.245877 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 07:28:17.245889 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 07:28:17.245902 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 13 07:28:17.245914 systemd[1]: Created slice system-getty.slice.
May 13 07:28:17.245928 systemd[1]: Created slice system-modprobe.slice.
May 13 07:28:17.245940 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 07:28:17.245953 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 07:28:17.245965 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 07:28:17.245978 systemd[1]: Created slice user.slice.
May 13 07:28:17.245990 systemd[1]: Started systemd-ask-password-console.path.
May 13 07:28:17.246002 systemd[1]: Started systemd-ask-password-wall.path.
May 13 07:28:17.246014 systemd[1]: Set up automount boot.automount.
May 13 07:28:17.246029 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 07:28:17.246041 systemd[1]: Stopped target initrd-switch-root.target.
May 13 07:28:17.246053 systemd[1]: Stopped target initrd-fs.target.
May 13 07:28:17.246065 systemd[1]: Stopped target initrd-root-fs.target.
May 13 07:28:17.246077 systemd[1]: Reached target integritysetup.target.
May 13 07:28:17.246089 systemd[1]: Reached target remote-cryptsetup.target.
May 13 07:28:17.246102 systemd[1]: Reached target remote-fs.target.
May 13 07:28:17.246116 systemd[1]: Reached target slices.target.
May 13 07:28:17.246128 systemd[1]: Reached target swap.target.
May 13 07:28:17.246140 systemd[1]: Reached target torcx.target.
May 13 07:28:17.246153 systemd[1]: Reached target veritysetup.target.
May 13 07:28:17.246167 systemd[1]: Listening on systemd-coredump.socket.
May 13 07:28:17.246179 systemd[1]: Listening on systemd-initctl.socket.
May 13 07:28:17.246191 systemd[1]: Listening on systemd-networkd.socket.
May 13 07:28:17.246203 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 07:28:17.246216 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 07:28:17.246227 systemd[1]: Listening on systemd-userdbd.socket.
May 13 07:28:17.246241 systemd[1]: Mounting dev-hugepages.mount...
May 13 07:28:17.246253 systemd[1]: Mounting dev-mqueue.mount...
May 13 07:28:17.246266 systemd[1]: Mounting media.mount...
May 13 07:28:17.246279 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 07:28:17.246291 systemd[1]: Mounting sys-kernel-debug.mount...
May 13 07:28:17.246303 systemd[1]: Mounting sys-kernel-tracing.mount...
May 13 07:28:17.246315 systemd[1]: Mounting tmp.mount...
May 13 07:28:17.246328 systemd[1]: Starting flatcar-tmpfiles.service...
May 13 07:28:17.246340 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 07:28:17.246354 systemd[1]: Starting kmod-static-nodes.service...
May 13 07:28:17.246367 systemd[1]: Starting modprobe@configfs.service...
May 13 07:28:17.246393 systemd[1]: Starting modprobe@dm_mod.service...
May 13 07:28:17.246407 systemd[1]: Starting modprobe@drm.service...
May 13 07:28:17.246419 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 07:28:17.246431 systemd[1]: Starting modprobe@fuse.service...
May 13 07:28:17.246443 systemd[1]: Starting modprobe@loop.service...
May 13 07:28:17.246456 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 07:28:17.246468 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 07:28:17.246483 systemd[1]: Stopped systemd-fsck-root.service.
May 13 07:28:17.246495 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 07:28:17.246507 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 07:28:17.246523 systemd[1]: Stopped systemd-journald.service.
May 13 07:28:17.246535 systemd[1]: Starting systemd-journald.service...
May 13 07:28:17.246547 kernel: fuse: init (API version 7.34)
May 13 07:28:17.246559 systemd[1]: Starting systemd-modules-load.service...
May 13 07:28:17.246571 systemd[1]: Starting systemd-network-generator.service...
May 13 07:28:17.246583 kernel: loop: module loaded
May 13 07:28:17.246597 systemd[1]: Starting systemd-remount-fs.service...
May 13 07:28:17.246609 systemd[1]: Starting systemd-udev-trigger.service...
May 13 07:28:17.246622 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 07:28:17.246635 systemd[1]: Stopped verity-setup.service.
May 13 07:28:17.246652 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 07:28:17.246665 systemd[1]: Mounted dev-hugepages.mount.
May 13 07:28:17.246676 systemd[1]: Mounted dev-mqueue.mount.
May 13 07:28:17.246690 systemd[1]: Mounted media.mount.
May 13 07:28:17.246702 systemd[1]: Mounted sys-kernel-debug.mount.
May 13 07:28:17.246717 systemd[1]: Mounted sys-kernel-tracing.mount.
May 13 07:28:17.246732 systemd-journald[948]: Journal started
May 13 07:28:17.246776 systemd-journald[948]: Runtime Journal (/run/log/journal/d2aa4c522b734243aebc0f44b03529f3) is 8.0M, max 78.4M, 70.4M free.
May 13 07:28:13.334000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 07:28:13.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 07:28:13.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 07:28:13.471000 audit: BPF prog-id=10 op=LOAD
May 13 07:28:13.471000 audit: BPF prog-id=10 op=UNLOAD
May 13 07:28:13.471000 audit: BPF prog-id=11 op=LOAD
May 13 07:28:13.471000 audit: BPF prog-id=11 op=UNLOAD
May 13 07:28:13.660000 audit[859]: AVC avc: denied { associate } for pid=859 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 13 07:28:13.660000 audit[859]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=842 pid=859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 07:28:13.660000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 07:28:13.662000 audit[859]: AVC avc: denied { associate } for pid=859 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 13 07:28:13.662000 audit[859]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=842 pid=859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 07:28:13.662000 audit: CWD cwd="/"
May 13 07:28:13.662000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 07:28:13.662000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 07:28:13.662000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 07:28:16.982000 audit: BPF prog-id=12 op=LOAD
May 13 07:28:16.982000 audit: BPF prog-id=3 op=UNLOAD
May 13 07:28:16.982000 audit: BPF prog-id=13 op=LOAD
May 13 07:28:16.982000 audit: BPF prog-id=14 op=LOAD
May 13 07:28:16.982000 audit: BPF prog-id=4 op=UNLOAD
May 13 07:28:16.982000 audit: BPF prog-id=5 op=UNLOAD
May 13 07:28:16.983000 audit: BPF prog-id=15 op=LOAD
May 13 07:28:16.983000 audit: BPF prog-id=12 op=UNLOAD
May 13 07:28:16.983000 audit: BPF prog-id=16 op=LOAD
May 13 07:28:16.983000 audit: BPF prog-id=17 op=LOAD
May 13 07:28:16.983000 audit: BPF prog-id=13 op=UNLOAD
May 13 07:28:16.983000 audit: BPF prog-id=14 op=UNLOAD
May 13 07:28:16.984000 audit: BPF prog-id=18 op=LOAD
May 13 07:28:16.984000 audit: BPF prog-id=15 op=UNLOAD
May 13 07:28:16.984000 audit: BPF prog-id=19 op=LOAD
May 13 07:28:16.984000 audit: BPF prog-id=20 op=LOAD
May 13 07:28:16.984000 audit: BPF prog-id=16 op=UNLOAD
May 13 07:28:16.984000 audit: BPF prog-id=17 op=UNLOAD
May 13 07:28:16.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:16.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:16.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:16.999000 audit: BPF prog-id=18 op=UNLOAD
May 13 07:28:17.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.202000 audit: BPF prog-id=21 op=LOAD
May 13 07:28:17.202000 audit: BPF prog-id=22 op=LOAD
May 13 07:28:17.202000 audit: BPF prog-id=23 op=LOAD
May 13 07:28:17.202000 audit: BPF prog-id=19 op=UNLOAD
May 13 07:28:17.202000 audit: BPF prog-id=20 op=UNLOAD
May 13 07:28:17.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.242000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 13 07:28:17.242000 audit[948]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff92781f70 a2=4000 a3=7fff9278200c items=0 ppid=1 pid=948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 07:28:17.242000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 13 07:28:16.980722 systemd[1]: Queued start job for default target multi-user.target.
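The audit records above share a fixed shape: an optional `audit[pid]:` tag, a record type (SERVICE_START, BPF, AVC, SYSCALL, ...), then space-separated key=value fields. A rough, illustrative Python split of such a line into type and fields (a sketch only, not a full auditd-format parser; quoted values and msg='...' blocks are handled naively):

    import re

    # Matches e.g. "audit: BPF prog-id=24 op=LOAD" or "audit[1]: SERVICE_START ...".
    AUDIT_RE = re.compile(r"audit(?:\[\d+\])?: (?P<type>[A-Z_]+) (?P<rest>.*)")

    def parse_audit(line):
        m = AUDIT_RE.search(line)
        if not m:
            return None
        # Naive key=value split; good enough for BPF/CONFIG_CHANGE-style records.
        fields = dict(t.split("=", 1) for t in m.group("rest").split() if "=" in t)
        return m.group("type"), fields

    # parse_audit("May 13 07:28:17.905000 audit: BPF prog-id=24 op=LOAD")
    # -> ("BPF", {"prog-id": "24", "op": "LOAD"})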
May 13 07:28:13.654913 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 07:28:16.980734 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 13 07:28:13.656260 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 13 07:28:16.985197 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 07:28:13.656281 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 13 07:28:13.656313 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 13 07:28:17.248629 systemd[1]: Started systemd-journald.service.
May 13 07:28:13.656324 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 13 07:28:13.656355 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 13 07:28:13.656370 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 13 07:28:13.656570 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 13 07:28:13.656611 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 13 07:28:17.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:13.656626 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 13 07:28:13.659488 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 13 07:28:17.249496 systemd[1]: Mounted tmp.mount.
May 13 07:28:13.659527 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 13 07:28:13.659547 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 13 07:28:13.659564 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 13 07:28:17.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:13.659583 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 13 07:28:17.250206 systemd[1]: Finished flatcar-tmpfiles.service.
May 13 07:28:13.659599 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 13 07:28:16.581739 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:16Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 07:28:16.582538 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:16Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 07:28:16.582660 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:16Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 07:28:17.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:16.582847 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:16Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 07:28:17.251656 systemd[1]: Finished kmod-static-nodes.service.
May 13 07:28:16.582907 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:16Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 13 07:28:17.252544 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 07:28:16.582975 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:28:16Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 13 07:28:17.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.252948 systemd[1]: Finished modprobe@configfs.service.
May 13 07:28:17.253674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 07:28:17.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.253969 systemd[1]: Finished modprobe@dm_mod.service.
May 13 07:28:17.254745 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 07:28:17.255056 systemd[1]: Finished modprobe@drm.service.
May 13 07:28:17.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.255701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 07:28:17.255943 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 07:28:17.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.256762 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 07:28:17.257014 systemd[1]: Finished modprobe@fuse.service.
May 13 07:28:17.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 07:28:17.257709 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 07:28:17.257963 systemd[1]: Finished modprobe@loop.service. May 13 07:28:17.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.258867 systemd[1]: Finished systemd-modules-load.service. May 13 07:28:17.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.259584 systemd[1]: Finished systemd-network-generator.service. May 13 07:28:17.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.260521 systemd[1]: Finished systemd-remount-fs.service. May 13 07:28:17.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.261469 systemd[1]: Reached target network-pre.target. May 13 07:28:17.263111 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 07:28:17.267117 systemd[1]: Mounting sys-kernel-config.mount... May 13 07:28:17.267742 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 07:28:17.269613 systemd[1]: Starting systemd-hwdb-update.service... May 13 07:28:17.271735 systemd[1]: Starting systemd-journal-flush.service... May 13 07:28:17.272887 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:28:17.274211 systemd[1]: Starting systemd-random-seed.service... May 13 07:28:17.275303 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 07:28:17.279416 systemd[1]: Starting systemd-sysctl.service... May 13 07:28:17.281152 systemd[1]: Starting systemd-sysusers.service... May 13 07:28:17.283627 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 07:28:17.284436 systemd[1]: Mounted sys-kernel-config.mount. May 13 07:28:17.288395 systemd-journald[948]: Time spent on flushing to /var/log/journal/d2aa4c522b734243aebc0f44b03529f3 is 24.727ms for 1117 entries. May 13 07:28:17.288395 systemd-journald[948]: System Journal (/var/log/journal/d2aa4c522b734243aebc0f44b03529f3) is 8.0M, max 584.8M, 576.8M free. May 13 07:28:17.357551 systemd-journald[948]: Received client request to flush runtime journal. May 13 07:28:17.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:28:17.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.297218 systemd[1]: Finished systemd-random-seed.service. May 13 07:28:17.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.297864 systemd[1]: Reached target first-boot-complete.target. May 13 07:28:17.316517 systemd[1]: Finished systemd-sysctl.service. May 13 07:28:17.342899 systemd[1]: Finished systemd-udev-trigger.service. May 13 07:28:17.344527 systemd[1]: Starting systemd-udev-settle.service... May 13 07:28:17.358677 systemd[1]: Finished systemd-journal-flush.service. May 13 07:28:17.364366 udevadm[968]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 07:28:17.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.364858 systemd[1]: Finished systemd-sysusers.service. May 13 07:28:17.366298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 07:28:17.406518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 07:28:17.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.901172 systemd[1]: Finished systemd-hwdb-update.service. May 13 07:28:17.917719 kernel: kauditd_printk_skb: 108 callbacks suppressed May 13 07:28:17.917847 kernel: audit: type=1130 audit(1747121297.901:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:17.905000 audit: BPF prog-id=24 op=LOAD May 13 07:28:17.918941 systemd[1]: Starting systemd-udevd.service... May 13 07:28:17.921996 kernel: audit: type=1334 audit(1747121297.905:148): prog-id=24 op=LOAD May 13 07:28:17.922130 kernel: audit: type=1334 audit(1747121297.917:149): prog-id=25 op=LOAD May 13 07:28:17.922184 kernel: audit: type=1334 audit(1747121297.917:150): prog-id=7 op=UNLOAD May 13 07:28:17.922233 kernel: audit: type=1334 audit(1747121297.917:151): prog-id=8 op=UNLOAD May 13 07:28:17.917000 audit: BPF prog-id=25 op=LOAD May 13 07:28:17.917000 audit: BPF prog-id=7 op=UNLOAD May 13 07:28:17.917000 audit: BPF prog-id=8 op=UNLOAD May 13 07:28:17.973642 systemd-udevd[971]: Using default interface naming scheme 'v252'. May 13 07:28:18.036323 systemd[1]: Started systemd-udevd.service. May 13 07:28:18.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:28:18.050455 kernel: audit: type=1130 audit(1747121298.037:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.058193 systemd[1]: Starting systemd-networkd.service... May 13 07:28:18.055000 audit: BPF prog-id=26 op=LOAD May 13 07:28:18.064507 kernel: audit: type=1334 audit(1747121298.055:153): prog-id=26 op=LOAD May 13 07:28:18.077000 audit: BPF prog-id=27 op=LOAD May 13 07:28:18.082437 kernel: audit: type=1334 audit(1747121298.077:154): prog-id=27 op=LOAD May 13 07:28:18.083555 systemd[1]: Starting systemd-userdbd.service... May 13 07:28:18.081000 audit: BPF prog-id=28 op=LOAD May 13 07:28:18.081000 audit: BPF prog-id=29 op=LOAD May 13 07:28:18.092503 kernel: audit: type=1334 audit(1747121298.081:155): prog-id=28 op=LOAD May 13 07:28:18.092561 kernel: audit: type=1334 audit(1747121298.081:156): prog-id=29 op=LOAD May 13 07:28:18.112834 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 13 07:28:18.141509 systemd[1]: Started systemd-userdbd.service. May 13 07:28:18.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.187865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 07:28:18.238680 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 07:28:18.236000 audit[981]: AVC avc: denied { confidentiality } for pid=981 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 07:28:18.249414 kernel: ACPI: button: Power Button [PWRF] May 13 07:28:18.236000 audit[981]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ef4debccd0 a1=338ac a2=7fa53547dbc5 a3=5 items=110 ppid=971 pid=981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:28:18.236000 audit: CWD cwd="/" May 13 07:28:18.236000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=1 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=2 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=3 name=(null) inode=13782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=4 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=5 name=(null) inode=13783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=6 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=7 name=(null) inode=13784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=8 name=(null) inode=13784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=9 name=(null) inode=13785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=10 name=(null) inode=13784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=11 name=(null) inode=13786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=12 name=(null) inode=13784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=13 name=(null) inode=13787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=14 name=(null) inode=13784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=15 name=(null) inode=13788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=16 name=(null) inode=13784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=17 name=(null) inode=13789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=18 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=19 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=20 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=21 name=(null) inode=13791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=22 
name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=23 name=(null) inode=13792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=24 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=25 name=(null) inode=13793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=26 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=27 name=(null) inode=13794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=28 name=(null) inode=13790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=29 name=(null) inode=13795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=30 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=31 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=32 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=33 name=(null) inode=13797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=34 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=35 name=(null) inode=13798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=36 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=37 name=(null) inode=13799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=38 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=39 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=40 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=41 name=(null) inode=13801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=42 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=43 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=44 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=45 name=(null) inode=13803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=46 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=47 name=(null) inode=13804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=48 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=49 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=50 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=51 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=52 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=53 name=(null) inode=13807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=55 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=56 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=57 name=(null) inode=13809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=58 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=59 name=(null) inode=13810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=60 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=61 name=(null) inode=13811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=62 name=(null) inode=13811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=63 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=64 name=(null) inode=13811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=65 name=(null) inode=13813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=66 name=(null) inode=13811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=67 name=(null) inode=13814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=68 name=(null) inode=13811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=69 name=(null) inode=13815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=70 name=(null) inode=13811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=71 
name=(null) inode=13816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=72 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=73 name=(null) inode=13817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=74 name=(null) inode=13817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=75 name=(null) inode=13818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=76 name=(null) inode=13817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=77 name=(null) inode=13819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=78 name=(null) inode=13817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=79 name=(null) inode=13820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=80 name=(null) inode=13817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=81 name=(null) inode=13821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=82 name=(null) inode=13817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=83 name=(null) inode=13822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=84 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=85 name=(null) inode=13823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=86 name=(null) inode=13823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=87 name=(null) inode=13824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=88 name=(null) inode=13823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=89 name=(null) inode=13825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=90 name=(null) inode=13823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=91 name=(null) inode=13826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=92 name=(null) inode=13823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=93 name=(null) inode=13827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=94 name=(null) inode=13823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=95 name=(null) inode=13828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=96 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=97 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=98 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=99 name=(null) inode=13830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=100 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=101 name=(null) inode=13831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=102 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=103 name=(null) inode=13832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=104 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=105 name=(null) inode=13833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=106 name=(null) inode=13829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=107 name=(null) inode=13834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PATH item=109 name=(null) inode=13835 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:28:18.236000 audit: PROCTITLE proctitle="(udev-worker)" May 13 07:28:18.269403 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 07:28:18.294400 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 07:28:18.298408 kernel: mousedev: PS/2 mouse device common for all mice May 13 07:28:18.568185 systemd[1]: Finished systemd-udev-settle.service. May 13 07:28:18.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.571943 systemd[1]: Starting lvm2-activation-early.service... May 13 07:28:18.578536 systemd-networkd[987]: lo: Link UP May 13 07:28:18.578559 systemd-networkd[987]: lo: Gained carrier May 13 07:28:18.581047 systemd-networkd[987]: Enumeration completed May 13 07:28:18.581264 systemd[1]: Started systemd-networkd.service. May 13 07:28:18.581299 systemd-networkd[987]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 07:28:18.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.586899 systemd-networkd[987]: eth0: Link UP May 13 07:28:18.586918 systemd-networkd[987]: eth0: Gained carrier May 13 07:28:18.602628 systemd-networkd[987]: eth0: DHCPv4 address 172.24.4.239/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 07:28:18.619175 lvm[1005]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 07:28:18.657297 systemd[1]: Finished lvm2-activation-early.service. May 13 07:28:18.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.658776 systemd[1]: Reached target cryptsetup.target. May 13 07:28:18.662047 systemd[1]: Starting lvm2-activation.service... 
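The eth0 configuration above comes from Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which enables DHCP on any link not matched by a more specific file; the DHCPv4 lease 172.24.4.239/24 via gateway 172.24.4.1 follows from that. A minimal .network file of the same shape (a sketch; the shipped file may carry additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes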
May 13 07:28:18.670900 lvm[1006]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 07:28:18.710176 systemd[1]: Finished lvm2-activation.service. May 13 07:28:18.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.711573 systemd[1]: Reached target local-fs-pre.target. May 13 07:28:18.712786 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 07:28:18.712849 systemd[1]: Reached target local-fs.target. May 13 07:28:18.713993 systemd[1]: Reached target machines.target. May 13 07:28:18.717594 systemd[1]: Starting ldconfig.service... May 13 07:28:18.720142 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:28:18.720234 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:28:18.722337 systemd[1]: Starting systemd-boot-update.service... May 13 07:28:18.726064 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 07:28:18.733940 systemd[1]: Starting systemd-machine-id-commit.service... May 13 07:28:18.742533 systemd[1]: Starting systemd-sysext.service... May 13 07:28:18.749944 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1008 (bootctl) May 13 07:28:18.752824 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 07:28:18.801801 systemd[1]: Unmounting usr-share-oem.mount... May 13 07:28:18.840353 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 07:28:18.840755 systemd[1]: Unmounted usr-share-oem.mount. May 13 07:28:18.863126 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 07:28:18.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:18.880461 kernel: loop0: detected capacity change from 0 to 218376 May 13 07:28:19.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.058532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 07:28:19.059838 systemd[1]: Finished systemd-machine-id-commit.service. May 13 07:28:19.120170 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 07:28:19.155627 kernel: loop1: detected capacity change from 0 to 218376 May 13 07:28:19.200116 (sd-sysext)[1022]: Using extensions 'kubernetes'. May 13 07:28:19.203201 (sd-sysext)[1022]: Merged extensions into '/usr'. May 13 07:28:19.249103 systemd-fsck[1019]: fsck.fat 4.2 (2021-01-31) May 13 07:28:19.249103 systemd-fsck[1019]: /dev/vda1: 790 files, 120692/258078 clusters May 13 07:28:19.250004 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:28:19.252116 systemd[1]: Mounting usr-share-oem.mount... May 13 07:28:19.253082 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
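The (sd-sysext) lines above are systemd-sysext at work: the 'kubernetes' extension image is attached (the loop0/loop1 capacity changes are its backing loop devices) and overlaid onto /usr. On a running system the merged state can be inspected or redone with the systemd-sysext tool:

    systemd-sysext status     # list hierarchies and which extension images are merged
    systemd-sysext refresh    # re-merge after adding/removing images (e.g. under /var/lib/extensions)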
May 13 07:28:19.254495 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:28:19.257624 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:28:19.259051 systemd[1]: Starting modprobe@loop.service... May 13 07:28:19.261341 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:28:19.261509 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:28:19.261627 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:28:19.262616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:28:19.262738 systemd[1]: Finished modprobe@efi_pstore.service. May 13 07:28:19.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.263830 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:28:19.263941 systemd[1]: Finished modprobe@loop.service. May 13 07:28:19.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.265169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:28:19.265281 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:28:19.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.270037 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 07:28:19.270834 systemd[1]: Mounted usr-share-oem.mount. May 13 07:28:19.273659 systemd[1]: Finished systemd-sysext.service. May 13 07:28:19.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.276669 systemd[1]: Mounting boot.mount... May 13 07:28:19.278222 systemd[1]: Starting ensure-sysext.service... 
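Unit names such as systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service above are not garbled: template instance names are systemd-escaped device paths, with '-' inside a path component encoded as \x2d. The mapping is reproducible with systemd-escape:

    systemd-escape -p /dev/disk/by-label/EFI-SYSTEM
    # prints: dev-disk-by\x2dlabel-EFI\x2dSYSTEM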
May 13 07:28:19.280817 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:28:19.280870 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 07:28:19.281777 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 07:28:19.288910 systemd[1]: Reloading. May 13 07:28:19.298600 systemd-tmpfiles[1030]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 07:28:19.302329 systemd-tmpfiles[1030]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 07:28:19.305708 systemd-tmpfiles[1030]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 07:28:19.365304 /usr/lib/systemd/system-generators/torcx-generator[1049]: time="2025-05-13T07:28:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 07:28:19.365334 /usr/lib/systemd/system-generators/torcx-generator[1049]: time="2025-05-13T07:28:19Z" level=info msg="torcx already run" May 13 07:28:19.473524 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 07:28:19.474019 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 07:28:19.500362 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 07:28:19.564000 audit: BPF prog-id=30 op=LOAD May 13 07:28:19.564000 audit: BPF prog-id=26 op=UNLOAD May 13 07:28:19.566000 audit: BPF prog-id=31 op=LOAD May 13 07:28:19.566000 audit: BPF prog-id=32 op=LOAD May 13 07:28:19.567000 audit: BPF prog-id=24 op=UNLOAD May 13 07:28:19.567000 audit: BPF prog-id=25 op=UNLOAD May 13 07:28:19.567000 audit: BPF prog-id=33 op=LOAD May 13 07:28:19.567000 audit: BPF prog-id=27 op=UNLOAD May 13 07:28:19.568000 audit: BPF prog-id=34 op=LOAD May 13 07:28:19.568000 audit: BPF prog-id=35 op=LOAD May 13 07:28:19.568000 audit: BPF prog-id=28 op=UNLOAD May 13 07:28:19.568000 audit: BPF prog-id=29 op=UNLOAD May 13 07:28:19.569000 audit: BPF prog-id=36 op=LOAD May 13 07:28:19.569000 audit: BPF prog-id=21 op=UNLOAD May 13 07:28:19.569000 audit: BPF prog-id=37 op=LOAD May 13 07:28:19.569000 audit: BPF prog-id=38 op=LOAD May 13 07:28:19.569000 audit: BPF prog-id=22 op=UNLOAD May 13 07:28:19.570000 audit: BPF prog-id=23 op=UNLOAD May 13 07:28:19.577791 systemd[1]: Mounted boot.mount. May 13 07:28:19.594044 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:28:19.595255 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:28:19.596857 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:28:19.598832 systemd[1]: Starting modprobe@loop.service... May 13 07:28:19.600489 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
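The locksmithd.service warnings above flag legacy cgroup-v1 directives that systemd still translates but intends to remove; the fix it asks for is switching to the unified-hierarchy equivalents. A hypothetical drop-in (file name and values are illustrative, not taken from this system):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (hypothetical)
    [Service]
    CPUShares=            # empty assignment clears the legacy setting
    CPUWeight=100         # illustrative replacement value
    MemoryLimit=
    MemoryMax=infinity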
May 13 07:28:19.600613 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:28:19.601402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:28:19.601525 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:28:19.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.603297 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:28:19.603423 systemd[1]: Finished modprobe@loop.service. May 13 07:28:19.604431 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 07:28:19.606292 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:28:19.607783 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:28:19.610785 systemd[1]: Starting modprobe@loop.service... May 13 07:28:19.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.612509 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:28:19.612630 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:28:19.613314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:28:19.613482 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:28:19.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.615610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:28:19.615723 systemd[1]: Finished modprobe@efi_pstore.service. 
May 13 07:28:19.616837 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:28:19.620295 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:28:19.620713 systemd[1]: Finished modprobe@loop.service. May 13 07:28:19.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.621905 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:28:19.623184 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:28:19.626468 systemd[1]: Starting modprobe@drm.service... May 13 07:28:19.628137 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:28:19.630728 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:28:19.630868 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:28:19.632330 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 07:28:19.634148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:28:19.634266 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:28:19.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.635462 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 07:28:19.635568 systemd[1]: Finished modprobe@drm.service. May 13 07:28:19.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.636731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:28:19.636857 systemd[1]: Finished modprobe@efi_pstore.service. May 13 07:28:19.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.638069 systemd[1]: Finished systemd-boot-update.service. 
May 13 07:28:19.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.639260 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:28:19.640236 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 07:28:19.640516 systemd[1]: Finished ensure-sysext.service. May 13 07:28:19.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.690443 ldconfig[1007]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 07:28:19.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.700499 systemd[1]: Finished ldconfig.service. May 13 07:28:19.722085 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 07:28:19.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.723804 systemd[1]: Starting audit-rules.service... May 13 07:28:19.725229 systemd[1]: Starting clean-ca-certificates.service... May 13 07:28:19.727705 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 07:28:19.731000 audit: BPF prog-id=39 op=LOAD May 13 07:28:19.733227 systemd[1]: Starting systemd-resolved.service... May 13 07:28:19.734000 audit: BPF prog-id=40 op=LOAD May 13 07:28:19.735199 systemd[1]: Starting systemd-timesyncd.service... May 13 07:28:19.738506 systemd[1]: Starting systemd-update-utmp.service... May 13 07:28:19.748187 systemd[1]: Finished clean-ca-certificates.service. May 13 07:28:19.748837 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 07:28:19.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.751000 audit[1116]: SYSTEM_BOOT pid=1116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 07:28:19.753569 systemd[1]: Finished systemd-update-utmp.service. May 13 07:28:19.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.778473 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 07:28:19.780166 systemd[1]: Starting systemd-update-done.service... 
May 13 07:28:19.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.787436 systemd[1]: Finished systemd-update-done.service. May 13 07:28:19.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:28:19.805255 augenrules[1127]: No rules May 13 07:28:19.804000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 07:28:19.804000 audit[1127]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffbb8c5290 a2=420 a3=0 items=0 ppid=1106 pid=1127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:28:19.804000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 07:28:19.805923 systemd[1]: Finished audit-rules.service. May 13 07:28:19.813478 systemd[1]: Started systemd-timesyncd.service. May 13 07:28:19.814025 systemd[1]: Reached target time-set.target. May 13 07:28:19.825055 systemd-resolved[1112]: Positive Trust Anchors: May 13 07:28:19.825310 systemd-resolved[1112]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 07:28:19.825417 systemd-resolved[1112]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 07:28:19.832812 systemd-resolved[1112]: Using system hostname 'ci-3510-3-7-n-1ba5f14697.novalocal'. May 13 07:28:19.834235 systemd[1]: Started systemd-resolved.service. May 13 07:28:19.834745 systemd[1]: Reached target network.target. May 13 07:28:19.835176 systemd[1]: Reached target nss-lookup.target. May 13 07:28:19.835640 systemd[1]: Reached target sysinit.target. May 13 07:28:19.836198 systemd[1]: Started motdgen.path. May 13 07:28:19.836696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 07:28:19.837355 systemd[1]: Started logrotate.timer. May 13 07:28:19.837925 systemd[1]: Started mdadm.timer. May 13 07:28:19.838353 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 07:28:19.838819 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 07:28:19.838848 systemd[1]: Reached target paths.target. May 13 07:28:19.839272 systemd[1]: Reached target timers.target. May 13 07:28:19.839992 systemd[1]: Listening on dbus.socket. May 13 07:28:19.841464 systemd[1]: Starting docker.socket... May 13 07:28:19.844828 systemd[1]: Listening on sshd.socket. May 13 07:28:19.845360 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
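The "Positive Trust Anchors" block above is systemd-resolved loading its built-in DNSSEC root trust anchor (the '. IN DS 20326 8 2 ...' record is the root-zone KSK), while the negative anchors exempt private-use names like 10.in-addr.arpa and home.arpa from DNSSEC validation. Once resolved is running, its state can be checked with resolvectl, e.g.:

    resolvectl status                                    # per-link DNS servers and DNSSEC setting
    resolvectl query ci-3510-3-7-n-1ba5f14697.novalocal  # hostname taken from the log above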
May 13 07:28:19.845826 systemd[1]: Listening on docker.socket. May 13 07:28:19.846320 systemd[1]: Reached target sockets.target. May 13 07:28:19.846778 systemd[1]: Reached target basic.target. May 13 07:28:19.847226 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:28:19.847261 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 07:28:19.847283 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 07:28:19.848165 systemd[1]: Starting containerd.service... May 13 07:28:19.850261 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 13 07:28:19.851655 systemd[1]: Starting dbus.service... May 13 07:28:19.854005 systemd[1]: Starting enable-oem-cloudinit.service... May 13 07:28:19.858235 systemd[1]: Starting extend-filesystems.service... May 13 07:28:19.859341 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 07:28:19.862518 systemd[1]: Starting motdgen.service... May 13 07:28:19.865010 jq[1140]: false May 13 07:28:19.865762 systemd[1]: Starting prepare-helm.service... May 13 07:28:19.867581 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 07:28:19.870510 systemd[1]: Starting sshd-keygen.service... May 13 07:28:19.876563 systemd[1]: Starting systemd-logind.service... May 13 07:28:19.877068 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:28:19.877140 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 07:28:19.877590 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 07:28:19.878182 systemd[1]: Starting update-engine.service... May 13 07:28:19.879625 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 07:28:19.880451 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:28:19.882829 jq[1150]: true May 13 07:28:19.884885 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 07:28:19.885062 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 07:28:19.886003 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 07:28:19.886150 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 07:28:19.894942 systemd[1]: Created slice system-sshd.slice. 
May 13 07:28:19.900258 jq[1153]: true May 13 07:28:19.911466 tar[1152]: linux-amd64/LICENSE May 13 07:28:19.911701 tar[1152]: linux-amd64/helm May 13 07:28:19.941856 extend-filesystems[1141]: Found loop1 May 13 07:28:19.941856 extend-filesystems[1141]: Found vda May 13 07:28:19.941856 extend-filesystems[1141]: Found vda1 May 13 07:28:19.941856 extend-filesystems[1141]: Found vda2 May 13 07:28:19.941856 extend-filesystems[1141]: Found vda3 May 13 07:28:19.941856 extend-filesystems[1141]: Found usr May 13 07:28:19.941856 extend-filesystems[1141]: Found vda4 May 13 07:28:19.941856 extend-filesystems[1141]: Found vda6 May 13 07:28:19.941856 extend-filesystems[1141]: Found vda7 May 13 07:28:19.941856 extend-filesystems[1141]: Found vda9 May 13 07:28:19.941856 extend-filesystems[1141]: Checking size of /dev/vda9 May 13 07:28:19.949375 systemd[1]: motdgen.service: Deactivated successfully. May 13 07:28:19.949577 systemd[1]: Finished motdgen.service. May 13 07:28:19.953938 dbus-daemon[1137]: [system] SELinux support is enabled May 13 07:28:19.954069 systemd[1]: Started dbus.service. May 13 07:28:19.956518 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 07:28:19.956544 systemd[1]: Reached target system-config.target. May 13 07:28:19.957076 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 07:28:19.957097 systemd[1]: Reached target user-config.target. May 13 07:28:19.972360 systemd-timesyncd[1113]: Contacted time server 155.248.196.28:123 (0.flatcar.pool.ntp.org). May 13 07:28:19.972440 systemd-timesyncd[1113]: Initial clock synchronization to Tue 2025-05-13 07:28:20.117194 UTC. May 13 07:28:19.978344 extend-filesystems[1141]: Resized partition /dev/vda9 May 13 07:28:19.992410 extend-filesystems[1192]: resize2fs 1.46.5 (30-Dec-2021) May 13 07:28:20.000689 env[1155]: time="2025-05-13T07:28:20.000647186Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 07:28:20.013560 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 13 07:28:20.020442 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 13 07:28:20.074265 update_engine[1149]: I0513 07:28:20.022933 1149 main.cc:92] Flatcar Update Engine starting May 13 07:28:20.074265 update_engine[1149]: I0513 07:28:20.038125 1149 update_check_scheduler.cc:74] Next update check in 11m58s May 13 07:28:20.074544 env[1155]: time="2025-05-13T07:28:20.042498736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 07:28:20.074544 env[1155]: time="2025-05-13T07:28:20.072867491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 07:28:20.074544 env[1155]: time="2025-05-13T07:28:20.074367838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 07:28:20.074544 env[1155]: time="2025-05-13T07:28:20.074407436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 07:28:20.032796 systemd[1]: Started update-engine.service. 
May 13 07:28:20.074754 env[1155]: time="2025-05-13T07:28:20.074576527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 07:28:20.074754 env[1155]: time="2025-05-13T07:28:20.074597561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 07:28:20.074754 env[1155]: time="2025-05-13T07:28:20.074611658Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 07:28:20.074754 env[1155]: time="2025-05-13T07:28:20.074622889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 07:28:20.074754 env[1155]: time="2025-05-13T07:28:20.074698840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 07:28:20.035079 systemd[1]: Started locksmithd.service. May 13 07:28:20.076705 env[1155]: time="2025-05-13T07:28:20.074914630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 07:28:20.076705 env[1155]: time="2025-05-13T07:28:20.075027221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 07:28:20.076705 env[1155]: time="2025-05-13T07:28:20.075044909Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 07:28:20.076705 env[1155]: time="2025-05-13T07:28:20.075094269Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 07:28:20.076705 env[1155]: time="2025-05-13T07:28:20.075108192Z" level=info msg="metadata content store policy set" policy=shared May 13 07:28:20.077592 extend-filesystems[1192]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 07:28:20.077592 extend-filesystems[1192]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 07:28:20.077592 extend-filesystems[1192]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 13 07:28:20.075828 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 07:28:20.081445 bash[1188]: Updated "/home/core/.ssh/authorized_keys" May 13 07:28:20.081558 extend-filesystems[1141]: Resized filesystem in /dev/vda9 May 13 07:28:20.075993 systemd[1]: Finished extend-filesystems.service. May 13 07:28:20.080359 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 07:28:20.083233 systemd-logind[1146]: Watching system buttons on /dev/input/event1 (Power Button) May 13 07:28:20.083559 systemd-logind[1146]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 07:28:20.085570 systemd-logind[1146]: New seat seat0. May 13 07:28:20.087689 env[1155]: time="2025-05-13T07:28:20.087630990Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 07:28:20.087689 env[1155]: time="2025-05-13T07:28:20.087662733Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 13 07:28:20.087689 env[1155]: time="2025-05-13T07:28:20.087678574Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 07:28:20.087788 env[1155]: time="2025-05-13T07:28:20.087707339Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087788 env[1155]: time="2025-05-13T07:28:20.087723875Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087788 env[1155]: time="2025-05-13T07:28:20.087739583Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087788 env[1155]: time="2025-05-13T07:28:20.087753945Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087788 env[1155]: time="2025-05-13T07:28:20.087768970Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087788 env[1155]: time="2025-05-13T07:28:20.087783374Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087923 env[1155]: time="2025-05-13T07:28:20.087797532Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087923 env[1155]: time="2025-05-13T07:28:20.087811506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 07:28:20.087923 env[1155]: time="2025-05-13T07:28:20.087825818Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 07:28:20.087923 env[1155]: time="2025-05-13T07:28:20.087910634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 07:28:20.088012 env[1155]: time="2025-05-13T07:28:20.087993114Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088311173Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088343100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088357982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088419266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088436434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088450337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088462954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088476439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088489935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088503216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088516068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088540 env[1155]: time="2025-05-13T07:28:20.088531134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088651183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088669655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088683732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088697636Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088713314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088726676Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088746352Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 07:28:20.088821 env[1155]: time="2025-05-13T07:28:20.088791254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 07:28:20.089490 env[1155]: time="2025-05-13T07:28:20.089006483Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 07:28:20.089490 env[1155]: time="2025-05-13T07:28:20.089140731Z" level=info msg="Connect containerd service" May 13 07:28:20.089490 env[1155]: time="2025-05-13T07:28:20.089171311Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 07:28:20.092549 env[1155]: time="2025-05-13T07:28:20.089743758Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 07:28:20.092549 env[1155]: time="2025-05-13T07:28:20.089932822Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 07:28:20.092549 env[1155]: time="2025-05-13T07:28:20.089972339Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 07:28:20.092549 env[1155]: time="2025-05-13T07:28:20.090017231Z" level=info msg="containerd successfully booted in 0.098028s" May 13 07:28:20.090069 systemd[1]: Started containerd.service. 
May 13 07:28:20.093167 env[1155]: time="2025-05-13T07:28:20.092904525Z" level=info msg="Start subscribing containerd event" May 13 07:28:20.093167 env[1155]: time="2025-05-13T07:28:20.092957466Z" level=info msg="Start recovering state" May 13 07:28:20.093167 env[1155]: time="2025-05-13T07:28:20.093150172Z" level=info msg="Start event monitor" May 13 07:28:20.093258 env[1155]: time="2025-05-13T07:28:20.093172531Z" level=info msg="Start snapshots syncer" May 13 07:28:20.093258 env[1155]: time="2025-05-13T07:28:20.093184026Z" level=info msg="Start cni network conf syncer for default" May 13 07:28:20.093306 env[1155]: time="2025-05-13T07:28:20.093269281Z" level=info msg="Start streaming server" May 13 07:28:20.095584 systemd[1]: Started systemd-logind.service. May 13 07:28:20.573526 systemd-networkd[987]: eth0: Gained IPv6LL May 13 07:28:20.578268 tar[1152]: linux-amd64/README.md May 13 07:28:20.576167 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 07:28:20.577179 systemd[1]: Reached target network-online.target. May 13 07:28:20.579631 systemd[1]: Starting kubelet.service... May 13 07:28:20.585121 systemd[1]: Finished prepare-helm.service. May 13 07:28:20.660281 locksmithd[1196]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 07:28:21.374092 sshd_keygen[1176]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 07:28:21.395043 systemd[1]: Finished sshd-keygen.service. May 13 07:28:21.397059 systemd[1]: Starting issuegen.service... May 13 07:28:21.398678 systemd[1]: Started sshd@0-172.24.4.239:22-172.24.4.1:36118.service. May 13 07:28:21.407737 systemd[1]: issuegen.service: Deactivated successfully. May 13 07:28:21.407893 systemd[1]: Finished issuegen.service. May 13 07:28:21.414276 systemd[1]: Starting systemd-user-sessions.service... May 13 07:28:21.422765 systemd[1]: Finished systemd-user-sessions.service. May 13 07:28:21.424702 systemd[1]: Started getty@tty1.service. May 13 07:28:21.426466 systemd[1]: Started serial-getty@ttyS0.service. May 13 07:28:21.427094 systemd[1]: Reached target getty.target. May 13 07:28:22.250729 systemd[1]: Started kubelet.service. May 13 07:28:22.674359 sshd[1217]: Accepted publickey for core from 172.24.4.1 port 36118 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:22.677940 sshd[1217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:22.706317 systemd[1]: Created slice user-500.slice. May 13 07:28:22.710595 systemd[1]: Starting user-runtime-dir@500.service... May 13 07:28:22.720134 systemd-logind[1146]: New session 1 of user core. May 13 07:28:22.729587 systemd[1]: Finished user-runtime-dir@500.service. May 13 07:28:22.731729 systemd[1]: Starting user@500.service... May 13 07:28:22.736978 (systemd)[1233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:22.821973 systemd[1233]: Queued start job for default target default.target. May 13 07:28:22.822700 systemd[1233]: Reached target paths.target. May 13 07:28:22.822807 systemd[1233]: Reached target sockets.target. May 13 07:28:22.822900 systemd[1233]: Reached target timers.target. May 13 07:28:22.822996 systemd[1233]: Reached target basic.target. May 13 07:28:22.823106 systemd[1233]: Reached target default.target. May 13 07:28:22.823236 systemd[1233]: Startup finished in 79ms. May 13 07:28:22.823978 systemd[1]: Started user@500.service. May 13 07:28:22.827601 systemd[1]: Started session-1.scope. 
May 13 07:28:23.318129 systemd[1]: Started sshd@1-172.24.4.239:22-172.24.4.1:36132.service. May 13 07:28:23.396584 kubelet[1226]: E0513 07:28:23.396499 1226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:28:23.399929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:28:23.400264 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:28:23.400792 systemd[1]: kubelet.service: Consumed 1.721s CPU time. May 13 07:28:24.403896 sshd[1243]: Accepted publickey for core from 172.24.4.1 port 36132 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:24.407194 sshd[1243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:24.417710 systemd-logind[1146]: New session 2 of user core. May 13 07:28:24.418569 systemd[1]: Started session-2.scope. May 13 07:28:25.159440 sshd[1243]: pam_unix(sshd:session): session closed for user core May 13 07:28:25.165306 systemd[1]: Started sshd@2-172.24.4.239:22-172.24.4.1:34592.service. May 13 07:28:25.169161 systemd[1]: sshd@1-172.24.4.239:22-172.24.4.1:36132.service: Deactivated successfully. May 13 07:28:25.170808 systemd[1]: session-2.scope: Deactivated successfully. May 13 07:28:25.174177 systemd-logind[1146]: Session 2 logged out. Waiting for processes to exit. May 13 07:28:25.176920 systemd-logind[1146]: Removed session 2. May 13 07:28:26.402724 sshd[1248]: Accepted publickey for core from 172.24.4.1 port 34592 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:26.405273 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:26.414512 systemd-logind[1146]: New session 3 of user core. May 13 07:28:26.416436 systemd[1]: Started session-3.scope. May 13 07:28:26.969929 coreos-metadata[1136]: May 13 07:28:26.969 WARN failed to locate config-drive, using the metadata service API instead May 13 07:28:27.063280 coreos-metadata[1136]: May 13 07:28:27.063 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 07:28:27.151658 sshd[1248]: pam_unix(sshd:session): session closed for user core May 13 07:28:27.156604 systemd[1]: sshd@2-172.24.4.239:22-172.24.4.1:34592.service: Deactivated successfully. May 13 07:28:27.158152 systemd[1]: session-3.scope: Deactivated successfully. May 13 07:28:27.159486 systemd-logind[1146]: Session 3 logged out. Waiting for processes to exit. May 13 07:28:27.161600 systemd-logind[1146]: Removed session 3. May 13 07:28:27.363748 coreos-metadata[1136]: May 13 07:28:27.362 INFO Fetch successful May 13 07:28:27.364074 coreos-metadata[1136]: May 13 07:28:27.363 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 07:28:27.380061 coreos-metadata[1136]: May 13 07:28:27.379 INFO Fetch successful May 13 07:28:27.385570 unknown[1136]: wrote ssh authorized keys file for user: core May 13 07:28:27.415235 update-ssh-keys[1256]: Updated "/home/core/.ssh/authorized_keys" May 13 07:28:27.416721 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 13 07:28:27.417616 systemd[1]: Reached target multi-user.target. May 13 07:28:27.420460 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
May 13 07:28:27.437116 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 07:28:27.437494 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 07:28:27.438518 systemd[1]: Startup finished in 969ms (kernel) + 10.523s (initrd) + 14.229s (userspace) = 25.723s. May 13 07:28:33.631997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 07:28:33.632515 systemd[1]: Stopped kubelet.service. May 13 07:28:33.632596 systemd[1]: kubelet.service: Consumed 1.721s CPU time. May 13 07:28:33.635339 systemd[1]: Starting kubelet.service... May 13 07:28:33.925843 systemd[1]: Started kubelet.service. May 13 07:28:34.015239 kubelet[1262]: E0513 07:28:34.015130 1262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:28:34.021876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:28:34.022156 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:28:37.203112 systemd[1]: Started sshd@3-172.24.4.239:22-172.24.4.1:49766.service. May 13 07:28:38.746254 sshd[1268]: Accepted publickey for core from 172.24.4.1 port 49766 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:38.749157 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:38.759507 systemd-logind[1146]: New session 4 of user core. May 13 07:28:38.760328 systemd[1]: Started session-4.scope. May 13 07:28:39.276379 sshd[1268]: pam_unix(sshd:session): session closed for user core May 13 07:28:39.280959 systemd[1]: Started sshd@4-172.24.4.239:22-172.24.4.1:49772.service. May 13 07:28:39.288552 systemd[1]: sshd@3-172.24.4.239:22-172.24.4.1:49766.service: Deactivated successfully. May 13 07:28:39.290914 systemd[1]: session-4.scope: Deactivated successfully. May 13 07:28:39.296731 systemd-logind[1146]: Session 4 logged out. Waiting for processes to exit. May 13 07:28:39.298551 systemd-logind[1146]: Removed session 4. May 13 07:28:40.644819 sshd[1273]: Accepted publickey for core from 172.24.4.1 port 49772 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:40.647368 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:40.657582 systemd-logind[1146]: New session 5 of user core. May 13 07:28:40.658225 systemd[1]: Started session-5.scope. May 13 07:28:41.273262 sshd[1273]: pam_unix(sshd:session): session closed for user core May 13 07:28:41.279863 systemd[1]: sshd@4-172.24.4.239:22-172.24.4.1:49772.service: Deactivated successfully. May 13 07:28:41.281144 systemd[1]: session-5.scope: Deactivated successfully. May 13 07:28:41.283035 systemd-logind[1146]: Session 5 logged out. Waiting for processes to exit. May 13 07:28:41.285810 systemd[1]: Started sshd@5-172.24.4.239:22-172.24.4.1:49788.service. May 13 07:28:41.288569 systemd-logind[1146]: Removed session 5. May 13 07:28:42.801694 sshd[1280]: Accepted publickey for core from 172.24.4.1 port 49788 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:42.804327 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:42.815133 systemd-logind[1146]: New session 6 of user core. 
May 13 07:28:42.815288 systemd[1]: Started session-6.scope. May 13 07:28:43.595019 sshd[1280]: pam_unix(sshd:session): session closed for user core May 13 07:28:43.600924 systemd[1]: Started sshd@6-172.24.4.239:22-172.24.4.1:41270.service. May 13 07:28:43.602156 systemd[1]: sshd@5-172.24.4.239:22-172.24.4.1:49788.service: Deactivated successfully. May 13 07:28:43.606627 systemd[1]: session-6.scope: Deactivated successfully. May 13 07:28:43.608878 systemd-logind[1146]: Session 6 logged out. Waiting for processes to exit. May 13 07:28:43.612218 systemd-logind[1146]: Removed session 6. May 13 07:28:44.131989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 07:28:44.132683 systemd[1]: Stopped kubelet.service. May 13 07:28:44.135866 systemd[1]: Starting kubelet.service... May 13 07:28:44.391010 systemd[1]: Started kubelet.service. May 13 07:28:44.605900 kubelet[1292]: E0513 07:28:44.605789 1292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:28:44.607639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:28:44.607758 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:28:45.214811 sshd[1285]: Accepted publickey for core from 172.24.4.1 port 41270 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:28:45.217340 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:28:45.227614 systemd-logind[1146]: New session 7 of user core. May 13 07:28:45.228268 systemd[1]: Started session-7.scope. May 13 07:28:45.811668 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 07:28:45.812190 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 07:28:45.863999 systemd[1]: Starting docker.service... 
May 13 07:28:45.926594 env[1308]: time="2025-05-13T07:28:45.926508524Z" level=info msg="Starting up" May 13 07:28:45.930295 env[1308]: time="2025-05-13T07:28:45.930237993Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 07:28:45.933475 env[1308]: time="2025-05-13T07:28:45.933433928Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 07:28:45.933729 env[1308]: time="2025-05-13T07:28:45.933683426Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 07:28:45.933916 env[1308]: time="2025-05-13T07:28:45.933867038Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 07:28:45.950248 env[1308]: time="2025-05-13T07:28:45.950205458Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 07:28:45.950248 env[1308]: time="2025-05-13T07:28:45.950228666Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 07:28:45.950248 env[1308]: time="2025-05-13T07:28:45.950245900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 07:28:45.950248 env[1308]: time="2025-05-13T07:28:45.950256607Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 07:28:46.020778 env[1308]: time="2025-05-13T07:28:46.020658959Z" level=info msg="Loading containers: start." May 13 07:28:46.222452 kernel: Initializing XFRM netlink socket May 13 07:28:46.268240 env[1308]: time="2025-05-13T07:28:46.268182212Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 07:28:46.367741 systemd-networkd[987]: docker0: Link UP May 13 07:28:46.387931 env[1308]: time="2025-05-13T07:28:46.387877289Z" level=info msg="Loading containers: done." May 13 07:28:46.407011 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3778848476-merged.mount: Deactivated successfully. May 13 07:28:46.412234 env[1308]: time="2025-05-13T07:28:46.412170696Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 07:28:46.412470 env[1308]: time="2025-05-13T07:28:46.412347657Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 07:28:46.412470 env[1308]: time="2025-05-13T07:28:46.412451941Z" level=info msg="Daemon has completed initialization" May 13 07:28:46.446933 systemd[1]: Started docker.service. May 13 07:28:46.464260 env[1308]: time="2025-05-13T07:28:46.464174126Z" level=info msg="API listen on /run/docker.sock" May 13 07:28:48.370520 env[1155]: time="2025-05-13T07:28:48.370447866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 07:28:49.422800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867375007.mount: Deactivated successfully. 
May 13 07:28:51.776284 env[1155]: time="2025-05-13T07:28:51.776241811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:51.781443 env[1155]: time="2025-05-13T07:28:51.781331092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:51.787881 env[1155]: time="2025-05-13T07:28:51.787825003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:51.790997 env[1155]: time="2025-05-13T07:28:51.790943941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:51.792003 env[1155]: time="2025-05-13T07:28:51.791970783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 07:28:51.793343 env[1155]: time="2025-05-13T07:28:51.793282040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 07:28:54.257148 env[1155]: time="2025-05-13T07:28:54.257065719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:54.260593 env[1155]: time="2025-05-13T07:28:54.260535997Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:54.263831 env[1155]: time="2025-05-13T07:28:54.263779496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:54.267013 env[1155]: time="2025-05-13T07:28:54.266955072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:54.267905 env[1155]: time="2025-05-13T07:28:54.267856132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 07:28:54.268471 env[1155]: time="2025-05-13T07:28:54.268413799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 07:28:54.632077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 07:28:54.632578 systemd[1]: Stopped kubelet.service. May 13 07:28:54.635468 systemd[1]: Starting kubelet.service... May 13 07:28:54.943346 systemd[1]: Started kubelet.service. 
May 13 07:28:55.150027 kubelet[1435]: E0513 07:28:55.149950 1435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:28:55.153562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:28:55.153857 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:28:57.020217 env[1155]: time="2025-05-13T07:28:57.020128702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:57.023321 env[1155]: time="2025-05-13T07:28:57.023267977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:57.026656 env[1155]: time="2025-05-13T07:28:57.026597578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:57.030809 env[1155]: time="2025-05-13T07:28:57.030751567Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:57.032860 env[1155]: time="2025-05-13T07:28:57.032809362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 07:28:57.033653 env[1155]: time="2025-05-13T07:28:57.033561240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 07:28:58.507648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518757571.mount: Deactivated successfully. 
May 13 07:28:59.817216 env[1155]: time="2025-05-13T07:28:59.817094332Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:59.854003 env[1155]: time="2025-05-13T07:28:59.853906127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:59.877981 env[1155]: time="2025-05-13T07:28:59.877914740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:59.894796 env[1155]: time="2025-05-13T07:28:59.894701595Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:28:59.895661 env[1155]: time="2025-05-13T07:28:59.895540868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 07:28:59.898101 env[1155]: time="2025-05-13T07:28:59.898011958Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 07:29:00.572285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686941373.mount: Deactivated successfully. May 13 07:29:02.180738 env[1155]: time="2025-05-13T07:29:02.180640481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.183751 env[1155]: time="2025-05-13T07:29:02.183698105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.194905 env[1155]: time="2025-05-13T07:29:02.194693920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.197476 env[1155]: time="2025-05-13T07:29:02.197444191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.199526 env[1155]: time="2025-05-13T07:29:02.199457700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 07:29:02.200047 env[1155]: time="2025-05-13T07:29:02.200024728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 07:29:02.742745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272740923.mount: Deactivated successfully. 
May 13 07:29:02.756820 env[1155]: time="2025-05-13T07:29:02.756711140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.760742 env[1155]: time="2025-05-13T07:29:02.760654448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.765573 env[1155]: time="2025-05-13T07:29:02.765479091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.771583 env[1155]: time="2025-05-13T07:29:02.771519818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 07:29:02.771848 env[1155]: time="2025-05-13T07:29:02.771544177Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:02.773101 env[1155]: time="2025-05-13T07:29:02.772997842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 07:29:03.400033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260897878.mount: Deactivated successfully. May 13 07:29:05.381770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 07:29:05.382152 systemd[1]: Stopped kubelet.service. May 13 07:29:05.384550 systemd[1]: Starting kubelet.service... May 13 07:29:05.473869 systemd[1]: Started kubelet.service. May 13 07:29:05.734565 update_engine[1149]: I0513 07:29:05.734426 1149 update_attempter.cc:509] Updating boot flags... May 13 07:29:05.768278 kubelet[1445]: E0513 07:29:05.768226 1445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:29:05.771200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:29:05.771333 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 07:29:07.681243 env[1155]: time="2025-05-13T07:29:07.681135057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:07.732161 env[1155]: time="2025-05-13T07:29:07.732093969Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:07.768159 env[1155]: time="2025-05-13T07:29:07.768095468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:07.796706 env[1155]: time="2025-05-13T07:29:07.796586937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:07.799374 env[1155]: time="2025-05-13T07:29:07.799314120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 07:29:12.156709 systemd[1]: Stopped kubelet.service. May 13 07:29:12.159509 systemd[1]: Starting kubelet.service... May 13 07:29:12.200374 systemd[1]: Reloading. May 13 07:29:12.308576 /usr/lib/systemd/system-generators/torcx-generator[1509]: time="2025-05-13T07:29:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 07:29:12.308608 /usr/lib/systemd/system-generators/torcx-generator[1509]: time="2025-05-13T07:29:12Z" level=info msg="torcx already run" May 13 07:29:12.406428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 07:29:12.406448 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 07:29:12.431886 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 07:29:12.536993 systemd[1]: Stopping kubelet.service... May 13 07:29:12.537924 systemd[1]: kubelet.service: Deactivated successfully. May 13 07:29:12.538112 systemd[1]: Stopped kubelet.service. May 13 07:29:12.540405 systemd[1]: Starting kubelet.service... May 13 07:29:12.810035 systemd[1]: Started kubelet.service. May 13 07:29:12.896644 kubelet[1563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 07:29:12.896644 kubelet[1563]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 07:29:12.896644 kubelet[1563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 07:29:12.897363 kubelet[1563]: I0513 07:29:12.896697 1563 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 07:29:13.395273 kubelet[1563]: I0513 07:29:13.395221 1563 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 07:29:13.395598 kubelet[1563]: I0513 07:29:13.395571 1563 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 07:29:13.396320 kubelet[1563]: I0513 07:29:13.396287 1563 server.go:954] "Client rotation is on, will bootstrap in background" May 13 07:29:13.459432 kubelet[1563]: E0513 07:29:13.459311 1563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:13.467295 kubelet[1563]: I0513 07:29:13.467228 1563 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 07:29:13.492787 kubelet[1563]: E0513 07:29:13.492621 1563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 07:29:13.493017 kubelet[1563]: I0513 07:29:13.492818 1563 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 07:29:13.499775 kubelet[1563]: I0513 07:29:13.499735 1563 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 07:29:13.500579 kubelet[1563]: I0513 07:29:13.500518 1563 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 07:29:13.501101 kubelet[1563]: I0513 07:29:13.500715 1563 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-1ba5f14697.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 07:29:13.501491 kubelet[1563]: I0513 07:29:13.501463 1563 topology_manager.go:138] "Creating topology manager with none policy" May 13 07:29:13.501716 kubelet[1563]: I0513 07:29:13.501693 1563 container_manager_linux.go:304] "Creating device plugin manager" May 13 07:29:13.502076 kubelet[1563]: I0513 07:29:13.502047 1563 state_mem.go:36] "Initialized new in-memory state store" May 13 07:29:13.512448 kubelet[1563]: I0513 07:29:13.512378 1563 kubelet.go:446] "Attempting to sync node with API server" May 13 07:29:13.512770 kubelet[1563]: I0513 07:29:13.512742 1563 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 07:29:13.512958 kubelet[1563]: I0513 07:29:13.512934 1563 kubelet.go:352] "Adding apiserver pod source" May 13 07:29:13.513137 kubelet[1563]: I0513 07:29:13.513098 1563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 07:29:13.548203 kubelet[1563]: W0513 07:29:13.548088 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-1ba5f14697.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:13.548487 kubelet[1563]: E0513 07:29:13.548218 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-1ba5f14697.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" 
May 13 07:29:13.548487 kubelet[1563]: I0513 07:29:13.548450 1563 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 07:29:13.549527 kubelet[1563]: I0513 07:29:13.549466 1563 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 07:29:13.549670 kubelet[1563]: W0513 07:29:13.549571 1563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 07:29:13.558098 kubelet[1563]: W0513 07:29:13.558015 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:13.558380 kubelet[1563]: E0513 07:29:13.558295 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:13.560898 kubelet[1563]: I0513 07:29:13.560814 1563 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 07:29:13.561029 kubelet[1563]: I0513 07:29:13.560918 1563 server.go:1287] "Started kubelet" May 13 07:29:13.561276 kubelet[1563]: I0513 07:29:13.561221 1563 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 07:29:13.563622 kubelet[1563]: I0513 07:29:13.563588 1563 server.go:490] "Adding debug handlers to kubelet server" May 13 07:29:13.574231 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 13 07:29:13.574508 kubelet[1563]: E0513 07:29:13.569453 1563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.239:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.239:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-n-1ba5f14697.novalocal.183f059d4a7ea49b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-n-1ba5f14697.novalocal,UID:ci-3510-3-7-n-1ba5f14697.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-n-1ba5f14697.novalocal,},FirstTimestamp:2025-05-13 07:29:13.560859803 +0000 UTC m=+0.741634095,LastTimestamp:2025-05-13 07:29:13.560859803 +0000 UTC m=+0.741634095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-n-1ba5f14697.novalocal,}" May 13 07:29:13.574508 kubelet[1563]: I0513 07:29:13.572430 1563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 07:29:13.574508 kubelet[1563]: I0513 07:29:13.572839 1563 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 07:29:13.576250 kubelet[1563]: I0513 07:29:13.575551 1563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 07:29:13.576809 kubelet[1563]: I0513 07:29:13.576770 1563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 07:29:13.581865 kubelet[1563]: E0513 07:29:13.581756 1563 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" May 13 07:29:13.582100 kubelet[1563]: I0513 07:29:13.582071 1563 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 07:29:13.582702 kubelet[1563]: I0513 07:29:13.582667 1563 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 07:29:13.582974 kubelet[1563]: I0513 07:29:13.582950 1563 reconciler.go:26] "Reconciler: start to sync state" May 13 07:29:13.584616 kubelet[1563]: E0513 07:29:13.584552 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-1ba5f14697.novalocal?timeout=10s\": dial tcp 172.24.4.239:6443: connect: connection refused" interval="200ms" May 13 07:29:13.584983 kubelet[1563]: W0513 07:29:13.584913 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:13.585241 kubelet[1563]: E0513 07:29:13.585198 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:13.587261 kubelet[1563]: I0513 07:29:13.585752 1563 factory.go:221] Registration of the systemd container factory successfully May 13 07:29:13.587475 kubelet[1563]: I0513 07:29:13.587352 1563 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 07:29:13.590323 kubelet[1563]: I0513 07:29:13.590276 1563 factory.go:221] Registration of the containerd container factory successfully May 13 07:29:13.623752 kubelet[1563]: I0513 07:29:13.623694 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 07:29:13.626103 kubelet[1563]: I0513 07:29:13.626065 1563 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 07:29:13.626103 kubelet[1563]: I0513 07:29:13.626081 1563 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 07:29:13.626103 kubelet[1563]: I0513 07:29:13.626098 1563 state_mem.go:36] "Initialized new in-memory state store" May 13 07:29:13.626693 kubelet[1563]: I0513 07:29:13.626650 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 07:29:13.626693 kubelet[1563]: I0513 07:29:13.626674 1563 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 07:29:13.626826 kubelet[1563]: I0513 07:29:13.626701 1563 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 07:29:13.626826 kubelet[1563]: I0513 07:29:13.626711 1563 kubelet.go:2388] "Starting kubelet main sync loop" May 13 07:29:13.626826 kubelet[1563]: E0513 07:29:13.626766 1563 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 07:29:13.631498 kubelet[1563]: I0513 07:29:13.631482 1563 policy_none.go:49] "None policy: Start" May 13 07:29:13.631597 kubelet[1563]: W0513 07:29:13.631557 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:13.631639 kubelet[1563]: E0513 07:29:13.631620 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:13.631698 kubelet[1563]: I0513 07:29:13.631686 1563 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 07:29:13.631797 kubelet[1563]: I0513 07:29:13.631788 1563 state_mem.go:35] "Initializing new in-memory state store" May 13 07:29:13.639520 systemd[1]: Created slice kubepods.slice. May 13 07:29:13.644477 systemd[1]: Created slice kubepods-besteffort.slice. May 13 07:29:13.654002 systemd[1]: Created slice kubepods-burstable.slice. 
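The three slices just created (kubepods, kubepods-besteffort, kubepods-burstable) are the QoS parents under which every pod cgroup is nested. The per-pod slice names appearing below embed the pod UID with its dashes escaped to underscores, as systemd slice labels cannot contain them; a small sketch of that naming rule, simplified relative to the kubelet's full cgroup-name escaping, using UIDs taken from this log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name for a pod. qos is "besteffort",
// "burstable", or "" (Guaranteed pods sit directly under kubepods.slice).
func podSlice(qos, podUID string) string {
	// systemd escapes "-" inside a label, so the kubelet rewrites the
	// UID's dashes to underscores before embedding it.
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Matches "kubepods-besteffort-pod4b59dbe0_1935_4a42_89b3_20879f4d6cdb.slice" below.
	fmt.Println(podSlice("besteffort", "4b59dbe0-1935-4a42-89b3-20879f4d6cdb"))
	// Static-pod UIDs are config hashes with no dashes, so they pass through unchanged.
	fmt.Println(podSlice("burstable", "78ea5717b577e133517eac851ecfab66"))
}
```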
May 13 07:29:13.657889 kubelet[1563]: I0513 07:29:13.657839 1563 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 07:29:13.658189 kubelet[1563]: I0513 07:29:13.658168 1563 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 07:29:13.658251 kubelet[1563]: I0513 07:29:13.658195 1563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 07:29:13.659349 kubelet[1563]: I0513 07:29:13.659196 1563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 07:29:13.661482 kubelet[1563]: E0513 07:29:13.661431 1563 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 07:29:13.661617 kubelet[1563]: E0513 07:29:13.661605 1563 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" May 13 07:29:13.750099 systemd[1]: Created slice kubepods-burstable-pod78ea5717b577e133517eac851ecfab66.slice. May 13 07:29:13.762155 kubelet[1563]: I0513 07:29:13.761950 1563 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.764787 kubelet[1563]: E0513 07:29:13.764740 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.766264 kubelet[1563]: E0513 07:29:13.765239 1563 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.239:6443/api/v1/nodes\": dial tcp 172.24.4.239:6443: connect: connection refused" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.772870 systemd[1]: Created slice kubepods-burstable-poda481d603e7a9d1ad1934ab42c05cd396.slice. 
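The "Failed to ensure lease exists, will retry" lines in this stretch back off with a doubling interval: 200ms, then 400ms, 800ms, and 1.6s further down. A stand-alone sketch of that pattern; the cap and jitter of the real client-go/kubelet backoff are simplified away:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op up to attempts times, doubling the wait between
// tries, mirroring the interval progression visible in the log above.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	interval := initial
	for i := 1; i <= attempts; i++ {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; next try in %v\n", i, err, interval)
		time.Sleep(interval)
		interval *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused") // mimics the log
		}
		return nil
	})
}
```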
May 13 07:29:13.783636 kubelet[1563]: E0513 07:29:13.783135 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.784619 kubelet[1563]: I0513 07:29:13.784552 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a481d603e7a9d1ad1934ab42c05cd396-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"a481d603e7a9d1ad1934ab42c05cd396\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.784766 kubelet[1563]: I0513 07:29:13.784665 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a481d603e7a9d1ad1934ab42c05cd396-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"a481d603e7a9d1ad1934ab42c05cd396\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.787721 kubelet[1563]: E0513 07:29:13.785904 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-1ba5f14697.novalocal?timeout=10s\": dial tcp 172.24.4.239:6443: connect: connection refused" interval="400ms" May 13 07:29:13.790170 systemd[1]: Created slice kubepods-burstable-pod4f1a971a5ca58891908c664948007c3e.slice. May 13 07:29:13.795455 kubelet[1563]: E0513 07:29:13.795298 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.885641 kubelet[1563]: I0513 07:29:13.885574 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a481d603e7a9d1ad1934ab42c05cd396-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"a481d603e7a9d1ad1934ab42c05cd396\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.886010 kubelet[1563]: I0513 07:29:13.885973 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.886276 kubelet[1563]: I0513 07:29:13.886228 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.886571 kubelet[1563]: I0513 07:29:13.886533 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78ea5717b577e133517eac851ecfab66-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: 
\"78ea5717b577e133517eac851ecfab66\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.886882 kubelet[1563]: I0513 07:29:13.886832 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.887124 kubelet[1563]: I0513 07:29:13.887086 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.887354 kubelet[1563]: I0513 07:29:13.887315 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.969943 kubelet[1563]: I0513 07:29:13.969803 1563 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:13.971452 kubelet[1563]: E0513 07:29:13.971346 1563 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.239:6443/api/v1/nodes\": dial tcp 172.24.4.239:6443: connect: connection refused" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:14.067596 env[1155]: time="2025-05-13T07:29:14.067460503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal,Uid:78ea5717b577e133517eac851ecfab66,Namespace:kube-system,Attempt:0,}" May 13 07:29:14.085890 env[1155]: time="2025-05-13T07:29:14.085636762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal,Uid:a481d603e7a9d1ad1934ab42c05cd396,Namespace:kube-system,Attempt:0,}" May 13 07:29:14.097892 env[1155]: time="2025-05-13T07:29:14.097540227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal,Uid:4f1a971a5ca58891908c664948007c3e,Namespace:kube-system,Attempt:0,}" May 13 07:29:14.187885 kubelet[1563]: E0513 07:29:14.187785 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-1ba5f14697.novalocal?timeout=10s\": dial tcp 172.24.4.239:6443: connect: connection refused" interval="800ms" May 13 07:29:14.375787 kubelet[1563]: I0513 07:29:14.375280 1563 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:14.376289 kubelet[1563]: E0513 07:29:14.376208 1563 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.239:6443/api/v1/nodes\": dial tcp 172.24.4.239:6443: connect: connection refused" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:14.623735 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742985349.mount: Deactivated successfully. May 13 07:29:14.648831 env[1155]: time="2025-05-13T07:29:14.648704865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.655154 env[1155]: time="2025-05-13T07:29:14.655036602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.658222 env[1155]: time="2025-05-13T07:29:14.658161024Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.663695 env[1155]: time="2025-05-13T07:29:14.663633829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.666604 env[1155]: time="2025-05-13T07:29:14.666543069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.671961 env[1155]: time="2025-05-13T07:29:14.671906709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.683152 env[1155]: time="2025-05-13T07:29:14.683085304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.689090 env[1155]: time="2025-05-13T07:29:14.689036916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.695296 env[1155]: time="2025-05-13T07:29:14.695207197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.698476 env[1155]: time="2025-05-13T07:29:14.698372680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.702238 env[1155]: time="2025-05-13T07:29:14.702096327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.704477 env[1155]: time="2025-05-13T07:29:14.704430200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:14.744857 kubelet[1563]: W0513 07:29:14.744687 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-1ba5f14697.novalocal&limit=500&resourceVersion=0": dial tcp 
172.24.4.239:6443: connect: connection refused May 13 07:29:14.744857 kubelet[1563]: E0513 07:29:14.744796 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-1ba5f14697.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:14.751492 env[1155]: time="2025-05-13T07:29:14.751420218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:14.751492 env[1155]: time="2025-05-13T07:29:14.751490466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:14.751724 env[1155]: time="2025-05-13T07:29:14.751519212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:14.751724 env[1155]: time="2025-05-13T07:29:14.751641742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e0fa72c9fd8087bf33425c9c86bf98cc41048d36dd0dc595c57dfc7dea4f375 pid=1614 runtime=io.containerd.runc.v2 May 13 07:29:14.753721 env[1155]: time="2025-05-13T07:29:14.753571914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:14.753721 env[1155]: time="2025-05-13T07:29:14.753601482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:14.753721 env[1155]: time="2025-05-13T07:29:14.753613826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:14.753962 env[1155]: time="2025-05-13T07:29:14.753733861Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e31973a78c33212bfba64ead6fb607d3dde090b13e49643b8b5ff4a7e09d1ee pid=1601 runtime=io.containerd.runc.v2 May 13 07:29:14.766208 env[1155]: time="2025-05-13T07:29:14.765917475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:14.766208 env[1155]: time="2025-05-13T07:29:14.766044374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:14.766208 env[1155]: time="2025-05-13T07:29:14.766076947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:14.766440 env[1155]: time="2025-05-13T07:29:14.766296789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa978b67b400cda743562053f1005e215a1dd6dcebb62930441cbf057028379e pid=1640 runtime=io.containerd.runc.v2 May 13 07:29:14.771681 systemd[1]: Started cri-containerd-5e0fa72c9fd8087bf33425c9c86bf98cc41048d36dd0dc595c57dfc7dea4f375.scope. 
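Each sandbox ID handed back by containerd reappears as a transient systemd scope named cri-containerd-<id>.scope, which is exactly the pairing visible in the "starting signal loop" and "Started cri-containerd-..." lines. A sketch of that naming, with the cgroupfs nesting under the QoS and pod slices assumed from the systemd cgroup driver's usual layout:

```go
package main

import "fmt"

// scopeUnit returns the transient unit name the shim asks systemd to create
// for a given sandbox or container ID.
func scopeUnit(id string) string {
	return "cri-containerd-" + id + ".scope"
}

// cgroupPath sketches where that scope lands on cgroupfs; the nesting is an
// assumption based on the systemd driver's conventions, not read from the log.
func cgroupPath(qosSlice, podSlice, id string) string {
	return fmt.Sprintf("/sys/fs/cgroup/kubepods.slice/%s/%s/%s",
		qosSlice, podSlice, scopeUnit(id))
}

func main() {
	// kube-scheduler sandbox ID and pod UID taken from the log above.
	fmt.Println(cgroupPath(
		"kubepods-burstable.slice",
		"kubepods-burstable-pod78ea5717b577e133517eac851ecfab66.slice",
		"5e0fa72c9fd8087bf33425c9c86bf98cc41048d36dd0dc595c57dfc7dea4f375"))
}
```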
May 13 07:29:14.790352 kubelet[1563]: W0513 07:29:14.790223 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:14.790352 kubelet[1563]: E0513 07:29:14.790304 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:14.794691 systemd[1]: Started cri-containerd-fa978b67b400cda743562053f1005e215a1dd6dcebb62930441cbf057028379e.scope. May 13 07:29:14.799785 kubelet[1563]: W0513 07:29:14.797566 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:14.799785 kubelet[1563]: E0513 07:29:14.797613 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:14.805027 systemd[1]: Started cri-containerd-4e31973a78c33212bfba64ead6fb607d3dde090b13e49643b8b5ff4a7e09d1ee.scope. May 13 07:29:14.826899 env[1155]: time="2025-05-13T07:29:14.826847940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal,Uid:78ea5717b577e133517eac851ecfab66,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e0fa72c9fd8087bf33425c9c86bf98cc41048d36dd0dc595c57dfc7dea4f375\"" May 13 07:29:14.831229 env[1155]: time="2025-05-13T07:29:14.831186120Z" level=info msg="CreateContainer within sandbox \"5e0fa72c9fd8087bf33425c9c86bf98cc41048d36dd0dc595c57dfc7dea4f375\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 07:29:14.857522 env[1155]: time="2025-05-13T07:29:14.857477258Z" level=info msg="CreateContainer within sandbox \"5e0fa72c9fd8087bf33425c9c86bf98cc41048d36dd0dc595c57dfc7dea4f375\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51aaabced1cf721986c789e75d2102a589acd58d2732deefcb1800231eef0f99\"" May 13 07:29:14.859609 env[1155]: time="2025-05-13T07:29:14.859580991Z" level=info msg="StartContainer for \"51aaabced1cf721986c789e75d2102a589acd58d2732deefcb1800231eef0f99\"" May 13 07:29:14.881833 systemd[1]: Started cri-containerd-51aaabced1cf721986c789e75d2102a589acd58d2732deefcb1800231eef0f99.scope. 
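All of the reflector failures in this stretch reduce to one condition: nothing is listening on 172.24.4.239:6443 yet, because the kube-apiserver static pod is being created by this same kubelet. A stdlib sketch that reproduces the check when debugging such a bootstrap loop:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the reflector errors above; adjust per cluster.
	addr := "172.24.4.239:6443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Until the apiserver container is up, this prints
		// "connect: connection refused", matching the log.
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```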
May 13 07:29:14.886332 env[1155]: time="2025-05-13T07:29:14.886294947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal,Uid:4f1a971a5ca58891908c664948007c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e31973a78c33212bfba64ead6fb607d3dde090b13e49643b8b5ff4a7e09d1ee\"" May 13 07:29:14.890025 env[1155]: time="2025-05-13T07:29:14.889979758Z" level=info msg="CreateContainer within sandbox \"4e31973a78c33212bfba64ead6fb607d3dde090b13e49643b8b5ff4a7e09d1ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 07:29:14.911848 env[1155]: time="2025-05-13T07:29:14.911751991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal,Uid:a481d603e7a9d1ad1934ab42c05cd396,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa978b67b400cda743562053f1005e215a1dd6dcebb62930441cbf057028379e\"" May 13 07:29:14.919609 env[1155]: time="2025-05-13T07:29:14.919567406Z" level=info msg="CreateContainer within sandbox \"fa978b67b400cda743562053f1005e215a1dd6dcebb62930441cbf057028379e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 07:29:14.920999 env[1155]: time="2025-05-13T07:29:14.920953171Z" level=info msg="CreateContainer within sandbox \"4e31973a78c33212bfba64ead6fb607d3dde090b13e49643b8b5ff4a7e09d1ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6937ceb5d6a56093fcfb8ffd8a33ca6e0245b10681b190f5be4fdbcc995fa0fd\"" May 13 07:29:14.921581 env[1155]: time="2025-05-13T07:29:14.921557666Z" level=info msg="StartContainer for \"6937ceb5d6a56093fcfb8ffd8a33ca6e0245b10681b190f5be4fdbcc995fa0fd\"" May 13 07:29:14.941525 systemd[1]: Started cri-containerd-6937ceb5d6a56093fcfb8ffd8a33ca6e0245b10681b190f5be4fdbcc995fa0fd.scope. May 13 07:29:14.946913 env[1155]: time="2025-05-13T07:29:14.946873052Z" level=info msg="CreateContainer within sandbox \"fa978b67b400cda743562053f1005e215a1dd6dcebb62930441cbf057028379e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9cf3610bcd25962c9607a4aada6c57a55ba1841e216805c36213b7ab1f091403\"" May 13 07:29:14.947533 env[1155]: time="2025-05-13T07:29:14.947512093Z" level=info msg="StartContainer for \"9cf3610bcd25962c9607a4aada6c57a55ba1841e216805c36213b7ab1f091403\"" May 13 07:29:14.989434 kubelet[1563]: E0513 07:29:14.989323 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-1ba5f14697.novalocal?timeout=10s\": dial tcp 172.24.4.239:6443: connect: connection refused" interval="1.6s" May 13 07:29:15.003891 systemd[1]: Started cri-containerd-9cf3610bcd25962c9607a4aada6c57a55ba1841e216805c36213b7ab1f091403.scope. 
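The sandbox and container lifecycle in the surrounding lines is the standard CRI sequence: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. A compressed sketch of driving it directly against containerd's socket with the generated client from k8s.io/cri-api; the metadata values are copied from the log, while the image tag and the many omitted config fields are placeholders (real kubelet requests carry far more):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata copied from the kube-scheduler RunPodSandbox line.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal",
			Uid:       "78ea5717b577e133517eac851ecfab66",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			// Placeholder image; the static pod manifest decides the real one.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sandbox %s / container %s started\n", sb.PodSandboxId, ctr.ContainerId)
}
```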
May 13 07:29:15.034852 kubelet[1563]: W0513 07:29:15.034800 1563 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.239:6443: connect: connection refused May 13 07:29:15.035031 kubelet[1563]: E0513 07:29:15.035012 1563 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.239:6443: connect: connection refused" logger="UnhandledError" May 13 07:29:15.107882 env[1155]: time="2025-05-13T07:29:15.107798087Z" level=info msg="StartContainer for \"51aaabced1cf721986c789e75d2102a589acd58d2732deefcb1800231eef0f99\" returns successfully" May 13 07:29:15.130684 env[1155]: time="2025-05-13T07:29:15.124408317Z" level=info msg="StartContainer for \"9cf3610bcd25962c9607a4aada6c57a55ba1841e216805c36213b7ab1f091403\" returns successfully" May 13 07:29:15.130684 env[1155]: time="2025-05-13T07:29:15.129534664Z" level=info msg="StartContainer for \"6937ceb5d6a56093fcfb8ffd8a33ca6e0245b10681b190f5be4fdbcc995fa0fd\" returns successfully" May 13 07:29:15.178265 kubelet[1563]: I0513 07:29:15.177886 1563 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:15.178265 kubelet[1563]: E0513 07:29:15.178170 1563 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.239:6443/api/v1/nodes\": dial tcp 172.24.4.239:6443: connect: connection refused" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:15.638130 kubelet[1563]: E0513 07:29:15.638107 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:15.641427 kubelet[1563]: E0513 07:29:15.641405 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:15.643447 kubelet[1563]: E0513 07:29:15.643432 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:16.653204 kubelet[1563]: E0513 07:29:16.653173 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:16.656223 kubelet[1563]: E0513 07:29:16.656209 1563 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:16.780035 kubelet[1563]: I0513 07:29:16.780007 1563 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.248982 kubelet[1563]: E0513 07:29:17.248953 1563 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.338972 
kubelet[1563]: I0513 07:29:17.338947 1563 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.339144 kubelet[1563]: E0513 07:29:17.339129 1563 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-3510-3-7-n-1ba5f14697.novalocal\": node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" May 13 07:29:17.346248 kubelet[1563]: E0513 07:29:17.346225 1563 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" May 13 07:29:17.446761 kubelet[1563]: E0513 07:29:17.446727 1563 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" May 13 07:29:17.551795 kubelet[1563]: I0513 07:29:17.551653 1563 apiserver.go:52] "Watching apiserver" May 13 07:29:17.583691 kubelet[1563]: I0513 07:29:17.583652 1563 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 07:29:17.583926 kubelet[1563]: I0513 07:29:17.583896 1563 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.595821 kubelet[1563]: E0513 07:29:17.595686 1563 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.595821 kubelet[1563]: I0513 07:29:17.595790 1563 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.599911 kubelet[1563]: E0513 07:29:17.599865 1563 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.600097 kubelet[1563]: I0513 07:29:17.600074 1563 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:17.605932 kubelet[1563]: E0513 07:29:17.605862 1563 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:19.926509 systemd[1]: Reloading. May 13 07:29:20.080513 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2025-05-13T07:29:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 07:29:20.080545 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2025-05-13T07:29:20Z" level=info msg="torcx already run" May 13 07:29:20.205159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 07:29:20.205516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
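The mirror-pod rejections above persist until something creates the system-node-critical PriorityClass; after the kubelet restart below the same mirror-pod creations go through, which suggests the bootstrapper created it in between. A client-go sketch for polling that precondition; the kubeconfig path here is an assumption for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		// While this returns NotFound, the kubelet keeps logging
		// "no PriorityClass with name system-node-critical was found".
		log.Fatalf("priority class missing: %v", err)
	}
	fmt.Printf("system-node-critical exists with value %d\n", pc.Value)
}
```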
May 13 07:29:20.228220 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 07:29:20.357585 systemd[1]: Stopping kubelet.service... May 13 07:29:20.366851 systemd[1]: kubelet.service: Deactivated successfully. May 13 07:29:20.367032 systemd[1]: Stopped kubelet.service. May 13 07:29:20.367080 systemd[1]: kubelet.service: Consumed 1.292s CPU time. May 13 07:29:20.369293 systemd[1]: Starting kubelet.service... May 13 07:29:20.539740 systemd[1]: Started kubelet.service. May 13 07:29:20.618770 kubelet[1904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 07:29:20.619093 kubelet[1904]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 07:29:20.619152 kubelet[1904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 07:29:20.619433 kubelet[1904]: I0513 07:29:20.619407 1904 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 07:29:20.633413 kubelet[1904]: I0513 07:29:20.633370 1904 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 07:29:20.633547 kubelet[1904]: I0513 07:29:20.633535 1904 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 07:29:20.633899 kubelet[1904]: I0513 07:29:20.633886 1904 server.go:954] "Client rotation is on, will bootstrap in background" May 13 07:29:20.635264 kubelet[1904]: I0513 07:29:20.635249 1904 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 07:29:20.641350 kubelet[1904]: I0513 07:29:20.641330 1904 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 07:29:20.647723 kubelet[1904]: E0513 07:29:20.647698 1904 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 07:29:20.647824 kubelet[1904]: I0513 07:29:20.647811 1904 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 07:29:20.651302 kubelet[1904]: I0513 07:29:20.651282 1904 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 07:29:20.651971 kubelet[1904]: I0513 07:29:20.651944 1904 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 07:29:20.652329 kubelet[1904]: I0513 07:29:20.652039 1904 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-1ba5f14697.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 07:29:20.652492 kubelet[1904]: I0513 07:29:20.652480 1904 topology_manager.go:138] "Creating topology manager with none policy" May 13 07:29:20.652556 kubelet[1904]: I0513 07:29:20.652548 1904 container_manager_linux.go:304] "Creating device plugin manager" May 13 07:29:20.652650 kubelet[1904]: I0513 07:29:20.652640 1904 state_mem.go:36] "Initialized new in-memory state store" May 13 07:29:20.652829 kubelet[1904]: I0513 07:29:20.652818 1904 kubelet.go:446] "Attempting to sync node with API server" May 13 07:29:20.652899 kubelet[1904]: I0513 07:29:20.652889 1904 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 07:29:20.652969 kubelet[1904]: I0513 07:29:20.652961 1904 kubelet.go:352] "Adding apiserver pod source" May 13 07:29:20.653033 kubelet[1904]: I0513 07:29:20.653024 1904 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 07:29:20.659225 kubelet[1904]: I0513 07:29:20.659183 1904 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 07:29:20.660805 kubelet[1904]: I0513 07:29:20.660768 1904 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 07:29:20.663197 kubelet[1904]: I0513 07:29:20.663167 1904 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 07:29:20.663256 kubelet[1904]: I0513 07:29:20.663237 1904 server.go:1287] "Started kubelet" May 13 07:29:20.666289 kubelet[1904]: I0513 07:29:20.666268 1904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 07:29:20.671047 kubelet[1904]: I0513 07:29:20.671026 1904 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 07:29:20.671810 kubelet[1904]: I0513 07:29:20.671753 1904 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 07:29:20.672607 kubelet[1904]: I0513 07:29:20.672595 1904 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 07:29:20.672863 kubelet[1904]: E0513 07:29:20.672846 1904 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-1ba5f14697.novalocal\" not found" May 13 07:29:20.673970 kubelet[1904]: I0513 07:29:20.673932 1904 server.go:490] "Adding debug handlers to kubelet server" May 13 07:29:20.674592 kubelet[1904]: I0513 07:29:20.674577 1904 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 07:29:20.674761 kubelet[1904]: I0513 07:29:20.674750 1904 reconciler.go:26] "Reconciler: start to sync state" May 13 07:29:20.676180 kubelet[1904]: I0513 07:29:20.676160 1904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 07:29:20.677075 kubelet[1904]: I0513 07:29:20.677062 1904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 07:29:20.677160 kubelet[1904]: I0513 07:29:20.677150 1904 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 07:29:20.677228 kubelet[1904]: I0513 07:29:20.677219 1904 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 07:29:20.677290 kubelet[1904]: I0513 07:29:20.677282 1904 kubelet.go:2388] "Starting kubelet main sync loop" May 13 07:29:20.677459 kubelet[1904]: E0513 07:29:20.677371 1904 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 07:29:20.678627 kubelet[1904]: I0513 07:29:20.678586 1904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 07:29:20.678857 kubelet[1904]: I0513 07:29:20.678843 1904 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 07:29:20.679800 kubelet[1904]: I0513 07:29:20.679785 1904 factory.go:221] Registration of the systemd container factory successfully May 13 07:29:20.679950 kubelet[1904]: I0513 07:29:20.679934 1904 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 07:29:20.689253 kubelet[1904]: I0513 07:29:20.689234 1904 factory.go:221] Registration of the containerd container factory successfully May 13 07:29:20.743196 kubelet[1904]: I0513 07:29:20.743175 1904 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 07:29:20.743343 kubelet[1904]: I0513 07:29:20.743331 1904 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 07:29:20.743443 kubelet[1904]: I0513 07:29:20.743434 1904 state_mem.go:36] "Initialized new in-memory state store" May 13 07:29:20.743643 kubelet[1904]: I0513 07:29:20.743629 1904 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 07:29:20.743724 kubelet[1904]: I0513 07:29:20.743700 1904 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 07:29:20.743783 kubelet[1904]: I0513 07:29:20.743775 1904 policy_none.go:49] "None policy: Start" May 13 
07:29:20.743843 kubelet[1904]: I0513 07:29:20.743835 1904 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 07:29:20.743904 kubelet[1904]: I0513 07:29:20.743896 1904 state_mem.go:35] "Initializing new in-memory state store" May 13 07:29:20.744064 kubelet[1904]: I0513 07:29:20.744053 1904 state_mem.go:75] "Updated machine memory state" May 13 07:29:20.747518 kubelet[1904]: I0513 07:29:20.747501 1904 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 07:29:20.747808 kubelet[1904]: I0513 07:29:20.747716 1904 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 07:29:20.747918 kubelet[1904]: I0513 07:29:20.747874 1904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 07:29:20.750975 kubelet[1904]: I0513 07:29:20.750696 1904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 07:29:20.753105 kubelet[1904]: E0513 07:29:20.752575 1904 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 07:29:20.778663 kubelet[1904]: I0513 07:29:20.778624 1904 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.782411 kubelet[1904]: I0513 07:29:20.779089 1904 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.782411 kubelet[1904]: I0513 07:29:20.779548 1904 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.785805 kubelet[1904]: W0513 07:29:20.785785 1904 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 07:29:20.786112 kubelet[1904]: W0513 07:29:20.786086 1904 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 07:29:20.790501 kubelet[1904]: W0513 07:29:20.790477 1904 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 07:29:20.854916 kubelet[1904]: I0513 07:29:20.854802 1904 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.869623 kubelet[1904]: I0513 07:29:20.869533 1904 kubelet_node_status.go:125] "Node was previously registered" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.869850 kubelet[1904]: I0513 07:29:20.869673 1904 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.876278 kubelet[1904]: I0513 07:29:20.876233 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.876641 kubelet[1904]: I0513 07:29:20.876578 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.876903 kubelet[1904]: I0513 07:29:20.876839 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.877145 kubelet[1904]: I0513 07:29:20.877087 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78ea5717b577e133517eac851ecfab66-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"78ea5717b577e133517eac851ecfab66\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.877418 kubelet[1904]: I0513 07:29:20.877330 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a481d603e7a9d1ad1934ab42c05cd396-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"a481d603e7a9d1ad1934ab42c05cd396\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.877742 kubelet[1904]: I0513 07:29:20.877618 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a481d603e7a9d1ad1934ab42c05cd396-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"a481d603e7a9d1ad1934ab42c05cd396\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.878001 kubelet[1904]: I0513 07:29:20.877940 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.878250 kubelet[1904]: I0513 07:29:20.878190 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1a971a5ca58891908c664948007c3e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"4f1a971a5ca58891908c664948007c3e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.878515 kubelet[1904]: I0513 07:29:20.878480 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a481d603e7a9d1ad1934ab42c05cd396-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal\" (UID: \"a481d603e7a9d1ad1934ab42c05cd396\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" May 13 07:29:20.923486 sudo[1935]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz 
-C /opt/bin May 13 07:29:20.924669 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 07:29:21.641337 sudo[1935]: pam_unix(sudo:session): session closed for user root May 13 07:29:21.664595 kubelet[1904]: I0513 07:29:21.664557 1904 apiserver.go:52] "Watching apiserver" May 13 07:29:21.774592 kubelet[1904]: I0513 07:29:21.774525 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-n-1ba5f14697.novalocal" podStartSLOduration=1.774506593 podStartE2EDuration="1.774506593s" podCreationTimestamp="2025-05-13 07:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:29:21.760129035 +0000 UTC m=+1.207899326" watchObservedRunningTime="2025-05-13 07:29:21.774506593 +0000 UTC m=+1.222276874" May 13 07:29:21.775190 kubelet[1904]: I0513 07:29:21.775068 1904 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 07:29:21.788300 kubelet[1904]: I0513 07:29:21.788249 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-1ba5f14697.novalocal" podStartSLOduration=1.7882302129999998 podStartE2EDuration="1.788230213s" podCreationTimestamp="2025-05-13 07:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:29:21.775922398 +0000 UTC m=+1.223692679" watchObservedRunningTime="2025-05-13 07:29:21.788230213 +0000 UTC m=+1.236000494" May 13 07:29:21.801307 kubelet[1904]: I0513 07:29:21.801257 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-n-1ba5f14697.novalocal" podStartSLOduration=1.801241381 podStartE2EDuration="1.801241381s" podCreationTimestamp="2025-05-13 07:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:29:21.788509113 +0000 UTC m=+1.236279404" watchObservedRunningTime="2025-05-13 07:29:21.801241381 +0000 UTC m=+1.249011652" May 13 07:29:24.059947 sudo[1298]: pam_unix(sudo:session): session closed for user root May 13 07:29:24.308252 sshd[1285]: pam_unix(sshd:session): session closed for user core May 13 07:29:24.315086 systemd-logind[1146]: Session 7 logged out. Waiting for processes to exit. May 13 07:29:24.315270 systemd[1]: sshd@6-172.24.4.239:22-172.24.4.1:41270.service: Deactivated successfully. May 13 07:29:24.316813 systemd[1]: session-7.scope: Deactivated successfully. May 13 07:29:24.317083 systemd[1]: session-7.scope: Consumed 7.474s CPU time. May 13 07:29:24.319648 systemd-logind[1146]: Removed session 7. May 13 07:29:26.115550 kubelet[1904]: I0513 07:29:26.115522 1904 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 07:29:26.116470 env[1155]: time="2025-05-13T07:29:26.116374813Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 07:29:26.116844 kubelet[1904]: I0513 07:29:26.116824 1904 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 07:29:27.005449 systemd[1]: Created slice kubepods-besteffort-pod4b59dbe0_1935_4a42_89b3_20879f4d6cdb.slice. 
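The runtime-config update above assigns this node the per-node pod range 192.168.0.0/24, which containerd will hand to the CNI plugin once a network config is dropped in (hence "wait for other system components to drop the config"). A stdlib sketch of what that range provides:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDR taken from the "Updating Pod CIDR" line above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// Prints: pod range 192.168.0.0/24: 256 addresses for this node's pods
	fmt.Printf("pod range %s: %d addresses for this node's pods\n",
		ipnet, 1<<uint(bits-ones))
}
```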
May 13 07:29:27.015819 kubelet[1904]: I0513 07:29:27.015751 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b59dbe0-1935-4a42-89b3-20879f4d6cdb-lib-modules\") pod \"kube-proxy-bkscl\" (UID: \"4b59dbe0-1935-4a42-89b3-20879f4d6cdb\") " pod="kube-system/kube-proxy-bkscl" May 13 07:29:27.016154 kubelet[1904]: I0513 07:29:27.016097 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4b59dbe0-1935-4a42-89b3-20879f4d6cdb-kube-proxy\") pod \"kube-proxy-bkscl\" (UID: \"4b59dbe0-1935-4a42-89b3-20879f4d6cdb\") " pod="kube-system/kube-proxy-bkscl" May 13 07:29:27.016376 kubelet[1904]: I0513 07:29:27.016332 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b59dbe0-1935-4a42-89b3-20879f4d6cdb-xtables-lock\") pod \"kube-proxy-bkscl\" (UID: \"4b59dbe0-1935-4a42-89b3-20879f4d6cdb\") " pod="kube-system/kube-proxy-bkscl" May 13 07:29:27.016625 kubelet[1904]: I0513 07:29:27.016559 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvvg\" (UniqueName: \"kubernetes.io/projected/4b59dbe0-1935-4a42-89b3-20879f4d6cdb-kube-api-access-7hvvg\") pod \"kube-proxy-bkscl\" (UID: \"4b59dbe0-1935-4a42-89b3-20879f4d6cdb\") " pod="kube-system/kube-proxy-bkscl" May 13 07:29:27.033876 systemd[1]: Created slice kubepods-burstable-pod93afe149_6ef0_456d_88a5_c61458570676.slice. May 13 07:29:27.117287 kubelet[1904]: I0513 07:29:27.117219 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93afe149-6ef0-456d-88a5-c61458570676-clustermesh-secrets\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117690 kubelet[1904]: I0513 07:29:27.117319 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-etc-cni-netd\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117690 kubelet[1904]: I0513 07:29:27.117367 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-lib-modules\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117690 kubelet[1904]: I0513 07:29:27.117473 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-cgroup\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117690 kubelet[1904]: I0513 07:29:27.117547 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-kernel\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117690 
kubelet[1904]: I0513 07:29:27.117588 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cni-path\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117690 kubelet[1904]: I0513 07:29:27.117634 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93afe149-6ef0-456d-88a5-c61458570676-cilium-config-path\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117969 kubelet[1904]: I0513 07:29:27.117674 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-hubble-tls\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117969 kubelet[1904]: I0513 07:29:27.117719 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9ws\" (UniqueName: \"kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-kube-api-access-rr9ws\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117969 kubelet[1904]: I0513 07:29:27.117769 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-bpf-maps\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117969 kubelet[1904]: I0513 07:29:27.117834 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-xtables-lock\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117969 kubelet[1904]: I0513 07:29:27.117875 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-net\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.117969 kubelet[1904]: I0513 07:29:27.117920 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-hostproc\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.118205 kubelet[1904]: I0513 07:29:27.117995 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-run\") pod \"cilium-2tv7w\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " pod="kube-system/cilium-2tv7w" May 13 07:29:27.132823 systemd[1]: Created slice kubepods-besteffort-pod2514124f_a339_4011_9375_a8eeca905934.slice. 
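Every reconciler line above identifies a volume by a unique name of the form <plugin>/<podUID>-<volumeName>, e.g. kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-bpf-maps. A sketch of composing and splitting that name; the separator handling is simplified relative to the kubelet's volume-plugin code, and the split assumes a fixed-length 36-character UUID (static-pod hash UIDs would need different handling):

```go
package main

import (
	"fmt"
	"strings"
)

func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	name := uniqueVolumeName("kubernetes.io/host-path",
		"93afe149-6ef0-456d-88a5-c61458570676", "bpf-maps")
	fmt.Println(name)

	// Recover the volume name: everything after the 36-char UUID and the
	// joining dash in the final path segment.
	suffix := name[strings.LastIndex(name, "/")+1:]
	fmt.Println("volume:", suffix[37:]) // prints "bpf-maps"
}
```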
May 13 07:29:27.147071 kubelet[1904]: I0513 07:29:27.147024 1904 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 13 07:29:27.218616 kubelet[1904]: I0513 07:29:27.218582 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvkn\" (UniqueName: \"kubernetes.io/projected/2514124f-a339-4011-9375-a8eeca905934-kube-api-access-lgvkn\") pod \"cilium-operator-6c4d7847fc-gdvp4\" (UID: \"2514124f-a339-4011-9375-a8eeca905934\") " pod="kube-system/cilium-operator-6c4d7847fc-gdvp4" May 13 07:29:27.218917 kubelet[1904]: I0513 07:29:27.218901 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2514124f-a339-4011-9375-a8eeca905934-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gdvp4\" (UID: \"2514124f-a339-4011-9375-a8eeca905934\") " pod="kube-system/cilium-operator-6c4d7847fc-gdvp4" May 13 07:29:27.321529 env[1155]: time="2025-05-13T07:29:27.317520097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkscl,Uid:4b59dbe0-1935-4a42-89b3-20879f4d6cdb,Namespace:kube-system,Attempt:0,}" May 13 07:29:27.338264 env[1155]: time="2025-05-13T07:29:27.338174876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2tv7w,Uid:93afe149-6ef0-456d-88a5-c61458570676,Namespace:kube-system,Attempt:0,}" May 13 07:29:27.375431 env[1155]: time="2025-05-13T07:29:27.375198755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:27.375897 env[1155]: time="2025-05-13T07:29:27.375742303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:27.376160 env[1155]: time="2025-05-13T07:29:27.375839921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:27.380712 env[1155]: time="2025-05-13T07:29:27.377054832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1312f65c054c6f4dc2a5a09930c9e13cd5eae72ce6c60548da8a4b633d8b5307 pid=1985 runtime=io.containerd.runc.v2 May 13 07:29:27.401230 env[1155]: time="2025-05-13T07:29:27.400876245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:27.401230 env[1155]: time="2025-05-13T07:29:27.400972551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:27.401230 env[1155]: time="2025-05-13T07:29:27.401006105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:27.401472 env[1155]: time="2025-05-13T07:29:27.401281697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef pid=2005 runtime=io.containerd.runc.v2 May 13 07:29:27.410332 systemd[1]: Started cri-containerd-1312f65c054c6f4dc2a5a09930c9e13cd5eae72ce6c60548da8a4b633d8b5307.scope. 
May 13 07:29:27.420558 systemd[1]: Started cri-containerd-e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef.scope. May 13 07:29:27.439705 env[1155]: time="2025-05-13T07:29:27.439653814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gdvp4,Uid:2514124f-a339-4011-9375-a8eeca905934,Namespace:kube-system,Attempt:0,}" May 13 07:29:27.463146 env[1155]: time="2025-05-13T07:29:27.463098742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkscl,Uid:4b59dbe0-1935-4a42-89b3-20879f4d6cdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1312f65c054c6f4dc2a5a09930c9e13cd5eae72ce6c60548da8a4b633d8b5307\"" May 13 07:29:27.466752 env[1155]: time="2025-05-13T07:29:27.466367152Z" level=info msg="CreateContainer within sandbox \"1312f65c054c6f4dc2a5a09930c9e13cd5eae72ce6c60548da8a4b633d8b5307\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 07:29:27.479572 env[1155]: time="2025-05-13T07:29:27.479531674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2tv7w,Uid:93afe149-6ef0-456d-88a5-c61458570676,Namespace:kube-system,Attempt:0,} returns sandbox id \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\"" May 13 07:29:27.481712 env[1155]: time="2025-05-13T07:29:27.481669666Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 07:29:27.655374 env[1155]: time="2025-05-13T07:29:27.649605253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:27.655374 env[1155]: time="2025-05-13T07:29:27.649678553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:27.655374 env[1155]: time="2025-05-13T07:29:27.649710656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:27.655374 env[1155]: time="2025-05-13T07:29:27.649939316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29 pid=2066 runtime=io.containerd.runc.v2 May 13 07:29:27.657286 env[1155]: time="2025-05-13T07:29:27.657213536Z" level=info msg="CreateContainer within sandbox \"1312f65c054c6f4dc2a5a09930c9e13cd5eae72ce6c60548da8a4b633d8b5307\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfa163661b6a6aa26de80ddae93b6d97d7e33671930fcaa753919cb81e4a1926\"" May 13 07:29:27.664915 env[1155]: time="2025-05-13T07:29:27.664851618Z" level=info msg="StartContainer for \"bfa163661b6a6aa26de80ddae93b6d97d7e33671930fcaa753919cb81e4a1926\"" May 13 07:29:27.702334 systemd[1]: Started cri-containerd-6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29.scope. May 13 07:29:27.715959 systemd[1]: Started cri-containerd-bfa163661b6a6aa26de80ddae93b6d97d7e33671930fcaa753919cb81e4a1926.scope. 
May 13 07:29:27.762594 env[1155]: time="2025-05-13T07:29:27.762548157Z" level=info msg="StartContainer for \"bfa163661b6a6aa26de80ddae93b6d97d7e33671930fcaa753919cb81e4a1926\" returns successfully" May 13 07:29:27.787935 env[1155]: time="2025-05-13T07:29:27.787881124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gdvp4,Uid:2514124f-a339-4011-9375-a8eeca905934,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\"" May 13 07:29:28.779519 kubelet[1904]: I0513 07:29:28.779312 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkscl" podStartSLOduration=2.779239214 podStartE2EDuration="2.779239214s" podCreationTimestamp="2025-05-13 07:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:29:28.775195066 +0000 UTC m=+8.222965407" watchObservedRunningTime="2025-05-13 07:29:28.779239214 +0000 UTC m=+8.227009545" May 13 07:29:37.626414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107687656.mount: Deactivated successfully. May 13 07:29:42.302849 env[1155]: time="2025-05-13T07:29:42.302764702Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:42.306524 env[1155]: time="2025-05-13T07:29:42.306452393Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:42.310748 env[1155]: time="2025-05-13T07:29:42.310679667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:42.312628 env[1155]: time="2025-05-13T07:29:42.312562557Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 07:29:42.318915 env[1155]: time="2025-05-13T07:29:42.318234824Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 07:29:42.323500 env[1155]: time="2025-05-13T07:29:42.323377668Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 07:29:42.354067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968310459.mount: Deactivated successfully. 
May 13 07:29:42.364642 env[1155]: time="2025-05-13T07:29:42.364544799Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\"" May 13 07:29:42.367898 env[1155]: time="2025-05-13T07:29:42.366318190Z" level=info msg="StartContainer for \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\"" May 13 07:29:42.405728 systemd[1]: Started cri-containerd-2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7.scope. May 13 07:29:42.448318 env[1155]: time="2025-05-13T07:29:42.448167784Z" level=info msg="StartContainer for \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\" returns successfully" May 13 07:29:42.450415 systemd[1]: cri-containerd-2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7.scope: Deactivated successfully. May 13 07:29:42.979376 env[1155]: time="2025-05-13T07:29:42.979227408Z" level=info msg="shim disconnected" id=2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7 May 13 07:29:42.979376 env[1155]: time="2025-05-13T07:29:42.979351525Z" level=warning msg="cleaning up after shim disconnected" id=2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7 namespace=k8s.io May 13 07:29:42.979376 env[1155]: time="2025-05-13T07:29:42.979376983Z" level=info msg="cleaning up dead shim" May 13 07:29:42.999208 env[1155]: time="2025-05-13T07:29:42.999120839Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:29:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2311 runtime=io.containerd.runc.v2\n" May 13 07:29:43.345885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7-rootfs.mount: Deactivated successfully. May 13 07:29:43.817085 env[1155]: time="2025-05-13T07:29:43.817010852Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 07:29:43.861089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898137396.mount: Deactivated successfully. May 13 07:29:43.885366 env[1155]: time="2025-05-13T07:29:43.885325875Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\"" May 13 07:29:43.886105 env[1155]: time="2025-05-13T07:29:43.886084646Z" level=info msg="StartContainer for \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\"" May 13 07:29:43.909152 systemd[1]: Started cri-containerd-3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751.scope. May 13 07:29:43.950334 env[1155]: time="2025-05-13T07:29:43.950297688Z" level=info msg="StartContainer for \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\" returns successfully" May 13 07:29:43.974949 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 07:29:43.975308 systemd[1]: Stopped systemd-sysctl.service. May 13 07:29:43.975512 systemd[1]: Stopping systemd-sysctl.service... May 13 07:29:43.977256 systemd[1]: Starting systemd-sysctl.service... 
May 13 07:29:43.980906 systemd[1]: cri-containerd-3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751.scope: Deactivated successfully. May 13 07:29:43.986425 systemd[1]: Finished systemd-sysctl.service. May 13 07:29:44.009003 env[1155]: time="2025-05-13T07:29:44.008963231Z" level=info msg="shim disconnected" id=3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751 May 13 07:29:44.009201 env[1155]: time="2025-05-13T07:29:44.009182540Z" level=warning msg="cleaning up after shim disconnected" id=3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751 namespace=k8s.io May 13 07:29:44.009292 env[1155]: time="2025-05-13T07:29:44.009277522Z" level=info msg="cleaning up dead shim" May 13 07:29:44.019071 env[1155]: time="2025-05-13T07:29:44.019036392Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:29:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2376 runtime=io.containerd.runc.v2\n" May 13 07:29:44.348472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751-rootfs.mount: Deactivated successfully. May 13 07:29:44.818717 env[1155]: time="2025-05-13T07:29:44.818617448Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 07:29:44.881795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165023266.mount: Deactivated successfully. May 13 07:29:44.895319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645034081.mount: Deactivated successfully. May 13 07:29:44.912644 env[1155]: time="2025-05-13T07:29:44.912569179Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\"" May 13 07:29:44.914432 env[1155]: time="2025-05-13T07:29:44.913108680Z" level=info msg="StartContainer for \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\"" May 13 07:29:44.932809 systemd[1]: Started cri-containerd-500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da.scope. May 13 07:29:44.967176 systemd[1]: cri-containerd-500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da.scope: Deactivated successfully. 
May 13 07:29:44.971102 env[1155]: time="2025-05-13T07:29:44.970913691Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93afe149_6ef0_456d_88a5_c61458570676.slice/cri-containerd-500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da.scope/memory.events\": no such file or directory" May 13 07:29:44.975922 env[1155]: time="2025-05-13T07:29:44.975878880Z" level=info msg="StartContainer for \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\" returns successfully" May 13 07:29:45.072135 env[1155]: time="2025-05-13T07:29:45.071998968Z" level=info msg="shim disconnected" id=500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da May 13 07:29:45.072135 env[1155]: time="2025-05-13T07:29:45.072090383Z" level=warning msg="cleaning up after shim disconnected" id=500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da namespace=k8s.io May 13 07:29:45.072977 env[1155]: time="2025-05-13T07:29:45.072114950Z" level=info msg="cleaning up dead shim" May 13 07:29:45.086358 env[1155]: time="2025-05-13T07:29:45.086267871Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:29:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n" May 13 07:29:45.818287 env[1155]: time="2025-05-13T07:29:45.818001250Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 07:29:45.867508 env[1155]: time="2025-05-13T07:29:45.867447892Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\"" May 13 07:29:45.868618 env[1155]: time="2025-05-13T07:29:45.868595334Z" level=info msg="StartContainer for \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\"" May 13 07:29:45.925289 systemd[1]: Started cri-containerd-5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd.scope. May 13 07:29:45.952881 systemd[1]: cri-containerd-5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd.scope: Deactivated successfully. 
May 13 07:29:45.962672 env[1155]: time="2025-05-13T07:29:45.962634995Z" level=info msg="StartContainer for \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\" returns successfully" May 13 07:29:45.964422 env[1155]: time="2025-05-13T07:29:45.956211943Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93afe149_6ef0_456d_88a5_c61458570676.slice/cri-containerd-5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd.scope/memory.events\": no such file or directory" May 13 07:29:46.157193 env[1155]: time="2025-05-13T07:29:46.157066569Z" level=info msg="shim disconnected" id=5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd May 13 07:29:46.157193 env[1155]: time="2025-05-13T07:29:46.157154216Z" level=warning msg="cleaning up after shim disconnected" id=5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd namespace=k8s.io May 13 07:29:46.157193 env[1155]: time="2025-05-13T07:29:46.157180096Z" level=info msg="cleaning up dead shim" May 13 07:29:46.180757 env[1155]: time="2025-05-13T07:29:46.180676797Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:29:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2494 runtime=io.containerd.runc.v2\n" May 13 07:29:46.346235 systemd[1]: run-containerd-runc-k8s.io-5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd-runc.g3woT9.mount: Deactivated successfully. May 13 07:29:46.346332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd-rootfs.mount: Deactivated successfully. May 13 07:29:46.480143 env[1155]: time="2025-05-13T07:29:46.479796468Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:46.482799 env[1155]: time="2025-05-13T07:29:46.482748817Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:46.488668 env[1155]: time="2025-05-13T07:29:46.488642004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:29:46.489736 env[1155]: time="2025-05-13T07:29:46.489320299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 07:29:46.496815 env[1155]: time="2025-05-13T07:29:46.496769098Z" level=info msg="CreateContainer within sandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 07:29:46.516311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861575726.mount: Deactivated successfully. May 13 07:29:46.523643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955317608.mount: Deactivated successfully. 
May 13 07:29:46.532740 env[1155]: time="2025-05-13T07:29:46.532702223Z" level=info msg="CreateContainer within sandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\"" May 13 07:29:46.534260 env[1155]: time="2025-05-13T07:29:46.534213460Z" level=info msg="StartContainer for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\"" May 13 07:29:46.556250 systemd[1]: Started cri-containerd-9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378.scope. May 13 07:29:46.593227 env[1155]: time="2025-05-13T07:29:46.593190814Z" level=info msg="StartContainer for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" returns successfully" May 13 07:29:46.822985 env[1155]: time="2025-05-13T07:29:46.822587947Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 07:29:46.843727 env[1155]: time="2025-05-13T07:29:46.843663082Z" level=info msg="CreateContainer within sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\"" May 13 07:29:46.844377 env[1155]: time="2025-05-13T07:29:46.844346468Z" level=info msg="StartContainer for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\"" May 13 07:29:46.866500 systemd[1]: Started cri-containerd-f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd.scope. May 13 07:29:46.957678 env[1155]: time="2025-05-13T07:29:46.957636023Z" level=info msg="StartContainer for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" returns successfully" May 13 07:29:46.978328 kubelet[1904]: I0513 07:29:46.978179 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gdvp4" podStartSLOduration=1.277766224 podStartE2EDuration="19.978148433s" podCreationTimestamp="2025-05-13 07:29:27 +0000 UTC" firstStartedPulling="2025-05-13 07:29:27.791238945 +0000 UTC m=+7.239009226" lastFinishedPulling="2025-05-13 07:29:46.491621104 +0000 UTC m=+25.939391435" observedRunningTime="2025-05-13 07:29:46.977789088 +0000 UTC m=+26.425559369" watchObservedRunningTime="2025-05-13 07:29:46.978148433 +0000 UTC m=+26.425918715" May 13 07:29:47.103731 kubelet[1904]: I0513 07:29:47.102655 1904 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 07:29:47.367826 systemd[1]: Created slice kubepods-burstable-podbf093a7a_a8a6_4d33_9d2c_ba1d28506c8a.slice. May 13 07:29:47.377992 systemd[1]: Created slice kubepods-burstable-podb1be76ed_07bc_413b_97ec_75a469affd39.slice. 
May 13 07:29:47.496121 kubelet[1904]: I0513 07:29:47.496074 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5wb\" (UniqueName: \"kubernetes.io/projected/b1be76ed-07bc-413b-97ec-75a469affd39-kube-api-access-hr5wb\") pod \"coredns-668d6bf9bc-h4hq2\" (UID: \"b1be76ed-07bc-413b-97ec-75a469affd39\") " pod="kube-system/coredns-668d6bf9bc-h4hq2" May 13 07:29:47.496332 kubelet[1904]: I0513 07:29:47.496315 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf093a7a-a8a6-4d33-9d2c-ba1d28506c8a-config-volume\") pod \"coredns-668d6bf9bc-m67k6\" (UID: \"bf093a7a-a8a6-4d33-9d2c-ba1d28506c8a\") " pod="kube-system/coredns-668d6bf9bc-m67k6" May 13 07:29:47.496475 kubelet[1904]: I0513 07:29:47.496459 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58p6\" (UniqueName: \"kubernetes.io/projected/bf093a7a-a8a6-4d33-9d2c-ba1d28506c8a-kube-api-access-z58p6\") pod \"coredns-668d6bf9bc-m67k6\" (UID: \"bf093a7a-a8a6-4d33-9d2c-ba1d28506c8a\") " pod="kube-system/coredns-668d6bf9bc-m67k6" May 13 07:29:47.496590 kubelet[1904]: I0513 07:29:47.496575 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1be76ed-07bc-413b-97ec-75a469affd39-config-volume\") pod \"coredns-668d6bf9bc-h4hq2\" (UID: \"b1be76ed-07bc-413b-97ec-75a469affd39\") " pod="kube-system/coredns-668d6bf9bc-h4hq2" May 13 07:29:47.671716 env[1155]: time="2025-05-13T07:29:47.671348280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m67k6,Uid:bf093a7a-a8a6-4d33-9d2c-ba1d28506c8a,Namespace:kube-system,Attempt:0,}" May 13 07:29:47.682092 env[1155]: time="2025-05-13T07:29:47.681845666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h4hq2,Uid:b1be76ed-07bc-413b-97ec-75a469affd39,Namespace:kube-system,Attempt:0,}" May 13 07:29:47.850820 kubelet[1904]: I0513 07:29:47.850758 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2tv7w" podStartSLOduration=7.016544349 podStartE2EDuration="21.85074201s" podCreationTimestamp="2025-05-13 07:29:26 +0000 UTC" firstStartedPulling="2025-05-13 07:29:27.481067094 +0000 UTC m=+6.928837365" lastFinishedPulling="2025-05-13 07:29:42.315264705 +0000 UTC m=+21.763035026" observedRunningTime="2025-05-13 07:29:47.849163987 +0000 UTC m=+27.296934278" watchObservedRunningTime="2025-05-13 07:29:47.85074201 +0000 UTC m=+27.298512292" May 13 07:29:50.303575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 07:29:50.305172 systemd-networkd[987]: cilium_host: Link UP May 13 07:29:50.307971 systemd-networkd[987]: cilium_net: Link UP May 13 07:29:50.308027 systemd-networkd[987]: cilium_net: Gained carrier May 13 07:29:50.309355 systemd-networkd[987]: cilium_host: Gained carrier May 13 07:29:50.423013 systemd-networkd[987]: cilium_vxlan: Link UP May 13 07:29:50.423060 systemd-networkd[987]: cilium_vxlan: Gained carrier May 13 07:29:50.740413 kernel: NET: Registered PF_ALG protocol family May 13 07:29:50.869634 systemd-networkd[987]: cilium_net: Gained IPv6LL May 13 07:29:50.869919 systemd-networkd[987]: cilium_host: Gained IPv6LL May 13 07:29:51.692302 systemd-networkd[987]: lxc_health: Link UP May 13 07:29:51.692794 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: 
link becomes ready May 13 07:29:51.692722 systemd-networkd[987]: lxc_health: Gained carrier May 13 07:29:52.259422 systemd-networkd[987]: lxcd34ca8f8f218: Link UP May 13 07:29:52.268176 systemd-networkd[987]: lxcc7174412c9c8: Link UP May 13 07:29:52.273294 kernel: eth0: renamed from tmp44d6b May 13 07:29:52.294405 kernel: eth0: renamed from tmp38146 May 13 07:29:52.277693 systemd-networkd[987]: cilium_vxlan: Gained IPv6LL May 13 07:29:52.302099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd34ca8f8f218: link becomes ready May 13 07:29:52.302197 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc7174412c9c8: link becomes ready May 13 07:29:52.305975 systemd-networkd[987]: lxcd34ca8f8f218: Gained carrier May 13 07:29:52.306369 systemd-networkd[987]: lxcc7174412c9c8: Gained carrier May 13 07:29:53.367586 systemd-networkd[987]: lxc_health: Gained IPv6LL May 13 07:29:54.041300 systemd-networkd[987]: lxcc7174412c9c8: Gained IPv6LL May 13 07:29:54.325810 systemd-networkd[987]: lxcd34ca8f8f218: Gained IPv6LL May 13 07:29:56.673163 env[1155]: time="2025-05-13T07:29:56.672956738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:56.673163 env[1155]: time="2025-05-13T07:29:56.673003707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:56.673163 env[1155]: time="2025-05-13T07:29:56.673023035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:56.675840 env[1155]: time="2025-05-13T07:29:56.675779890Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3814631055887a29518edc14808a4c53c9f42bd30b149fce31909169b75a8015 pid=3079 runtime=io.containerd.runc.v2 May 13 07:29:56.700339 systemd[1]: Started cri-containerd-3814631055887a29518edc14808a4c53c9f42bd30b149fce31909169b75a8015.scope. May 13 07:29:56.705408 systemd[1]: run-containerd-runc-k8s.io-3814631055887a29518edc14808a4c53c9f42bd30b149fce31909169b75a8015-runc.kLl3TC.mount: Deactivated successfully. May 13 07:29:56.724466 env[1155]: time="2025-05-13T07:29:56.723931831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:29:56.724466 env[1155]: time="2025-05-13T07:29:56.723963401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:29:56.724466 env[1155]: time="2025-05-13T07:29:56.723976165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:29:56.724466 env[1155]: time="2025-05-13T07:29:56.724091725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44d6bb8201f3c939adcb2cede84e76b9bb0ee3f9dfb28332aa91082beb605ba8 pid=3104 runtime=io.containerd.runc.v2 May 13 07:29:56.729801 kubelet[1904]: I0513 07:29:56.729761 1904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 07:29:56.749260 systemd[1]: Started cri-containerd-44d6bb8201f3c939adcb2cede84e76b9bb0ee3f9dfb28332aa91082beb605ba8.scope. 
May 13 07:29:56.837634 env[1155]: time="2025-05-13T07:29:56.837593074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h4hq2,Uid:b1be76ed-07bc-413b-97ec-75a469affd39,Namespace:kube-system,Attempt:0,} returns sandbox id \"44d6bb8201f3c939adcb2cede84e76b9bb0ee3f9dfb28332aa91082beb605ba8\"" May 13 07:29:56.841619 env[1155]: time="2025-05-13T07:29:56.841577261Z" level=info msg="CreateContainer within sandbox \"44d6bb8201f3c939adcb2cede84e76b9bb0ee3f9dfb28332aa91082beb605ba8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 07:29:56.862098 env[1155]: time="2025-05-13T07:29:56.862048065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m67k6,Uid:bf093a7a-a8a6-4d33-9d2c-ba1d28506c8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3814631055887a29518edc14808a4c53c9f42bd30b149fce31909169b75a8015\"" May 13 07:29:56.870804 env[1155]: time="2025-05-13T07:29:56.870766321Z" level=info msg="CreateContainer within sandbox \"3814631055887a29518edc14808a4c53c9f42bd30b149fce31909169b75a8015\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 07:29:56.872402 env[1155]: time="2025-05-13T07:29:56.872354206Z" level=info msg="CreateContainer within sandbox \"44d6bb8201f3c939adcb2cede84e76b9bb0ee3f9dfb28332aa91082beb605ba8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"389fb3a9cde80fbc6a876d7bb3601fae2e0e27911c46fa6a53d2c8917fdb47ff\"" May 13 07:29:56.872961 env[1155]: time="2025-05-13T07:29:56.872930461Z" level=info msg="StartContainer for \"389fb3a9cde80fbc6a876d7bb3601fae2e0e27911c46fa6a53d2c8917fdb47ff\"" May 13 07:29:56.894812 env[1155]: time="2025-05-13T07:29:56.894771217Z" level=info msg="CreateContainer within sandbox \"3814631055887a29518edc14808a4c53c9f42bd30b149fce31909169b75a8015\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59fa6346b1dcbae73fd1d6f7cfaaaa50f7355d8e42dd4c7e28dfff199ee1067b\"" May 13 07:29:56.897069 env[1155]: time="2025-05-13T07:29:56.897035828Z" level=info msg="StartContainer for \"59fa6346b1dcbae73fd1d6f7cfaaaa50f7355d8e42dd4c7e28dfff199ee1067b\"" May 13 07:29:56.903028 systemd[1]: Started cri-containerd-389fb3a9cde80fbc6a876d7bb3601fae2e0e27911c46fa6a53d2c8917fdb47ff.scope. May 13 07:29:56.928428 systemd[1]: Started cri-containerd-59fa6346b1dcbae73fd1d6f7cfaaaa50f7355d8e42dd4c7e28dfff199ee1067b.scope. 
May 13 07:29:56.970619 env[1155]: time="2025-05-13T07:29:56.970569850Z" level=info msg="StartContainer for \"389fb3a9cde80fbc6a876d7bb3601fae2e0e27911c46fa6a53d2c8917fdb47ff\" returns successfully" May 13 07:29:56.985223 env[1155]: time="2025-05-13T07:29:56.985181437Z" level=info msg="StartContainer for \"59fa6346b1dcbae73fd1d6f7cfaaaa50f7355d8e42dd4c7e28dfff199ee1067b\" returns successfully" May 13 07:29:57.910449 kubelet[1904]: I0513 07:29:57.910294 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h4hq2" podStartSLOduration=30.910261361 podStartE2EDuration="30.910261361s" podCreationTimestamp="2025-05-13 07:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:29:57.882703266 +0000 UTC m=+37.330473618" watchObservedRunningTime="2025-05-13 07:29:57.910261361 +0000 UTC m=+37.358031702" May 13 07:29:57.955840 kubelet[1904]: I0513 07:29:57.955788 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m67k6" podStartSLOduration=30.955752392 podStartE2EDuration="30.955752392s" podCreationTimestamp="2025-05-13 07:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:29:57.909992619 +0000 UTC m=+37.357763000" watchObservedRunningTime="2025-05-13 07:29:57.955752392 +0000 UTC m=+37.403522673" May 13 07:33:17.947556 systemd[1]: Started sshd@7-172.24.4.239:22-172.24.4.1:50320.service. May 13 07:33:19.468176 sshd[3261]: Accepted publickey for core from 172.24.4.1 port 50320 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:19.472833 sshd[3261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:19.498584 systemd-logind[1146]: New session 8 of user core. May 13 07:33:19.505796 systemd[1]: Started session-8.scope. May 13 07:33:20.207718 sshd[3261]: pam_unix(sshd:session): session closed for user core May 13 07:33:20.215928 systemd[1]: sshd@7-172.24.4.239:22-172.24.4.1:50320.service: Deactivated successfully. May 13 07:33:20.218776 systemd[1]: session-8.scope: Deactivated successfully. May 13 07:33:20.220674 systemd-logind[1146]: Session 8 logged out. Waiting for processes to exit. May 13 07:33:20.226132 systemd-logind[1146]: Removed session 8. May 13 07:33:25.229316 systemd[1]: Started sshd@8-172.24.4.239:22-172.24.4.1:40630.service. May 13 07:33:26.730883 sshd[3279]: Accepted publickey for core from 172.24.4.1 port 40630 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:26.744410 sshd[3279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:26.763092 systemd-logind[1146]: New session 9 of user core. May 13 07:33:26.764245 systemd[1]: Started session-9.scope. May 13 07:33:27.459591 sshd[3279]: pam_unix(sshd:session): session closed for user core May 13 07:33:27.464155 systemd[1]: sshd@8-172.24.4.239:22-172.24.4.1:40630.service: Deactivated successfully. May 13 07:33:27.465998 systemd[1]: session-9.scope: Deactivated successfully. May 13 07:33:27.468786 systemd-logind[1146]: Session 9 logged out. Waiting for processes to exit. May 13 07:33:27.473183 systemd-logind[1146]: Removed session 9. May 13 07:33:32.477241 systemd[1]: Started sshd@9-172.24.4.239:22-172.24.4.1:40644.service. 
May 13 07:33:33.756180 sshd[3294]: Accepted publickey for core from 172.24.4.1 port 40644 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:33.761485 sshd[3294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:33.774589 systemd-logind[1146]: New session 10 of user core. May 13 07:33:33.780235 systemd[1]: Started session-10.scope. May 13 07:33:34.499710 sshd[3294]: pam_unix(sshd:session): session closed for user core May 13 07:33:34.508491 systemd[1]: sshd@9-172.24.4.239:22-172.24.4.1:40644.service: Deactivated successfully. May 13 07:33:34.511676 systemd[1]: session-10.scope: Deactivated successfully. May 13 07:33:34.516729 systemd-logind[1146]: Session 10 logged out. Waiting for processes to exit. May 13 07:33:34.522920 systemd-logind[1146]: Removed session 10. May 13 07:33:39.520851 systemd[1]: Started sshd@10-172.24.4.239:22-172.24.4.1:48132.service. May 13 07:33:40.712459 sshd[3306]: Accepted publickey for core from 172.24.4.1 port 48132 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:40.717169 sshd[3306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:40.732308 systemd-logind[1146]: New session 11 of user core. May 13 07:33:40.736651 systemd[1]: Started session-11.scope. May 13 07:33:41.634836 sshd[3306]: pam_unix(sshd:session): session closed for user core May 13 07:33:41.648483 systemd[1]: Started sshd@11-172.24.4.239:22-172.24.4.1:48134.service. May 13 07:33:41.650936 systemd[1]: sshd@10-172.24.4.239:22-172.24.4.1:48132.service: Deactivated successfully. May 13 07:33:41.659063 systemd[1]: session-11.scope: Deactivated successfully. May 13 07:33:41.662925 systemd-logind[1146]: Session 11 logged out. Waiting for processes to exit. May 13 07:33:41.667379 systemd-logind[1146]: Removed session 11. May 13 07:33:42.827459 sshd[3317]: Accepted publickey for core from 172.24.4.1 port 48134 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:42.830966 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:42.844028 systemd-logind[1146]: New session 12 of user core. May 13 07:33:42.845802 systemd[1]: Started session-12.scope. May 13 07:33:43.818237 sshd[3317]: pam_unix(sshd:session): session closed for user core May 13 07:33:43.826089 systemd[1]: sshd@11-172.24.4.239:22-172.24.4.1:48134.service: Deactivated successfully. May 13 07:33:43.828281 systemd[1]: session-12.scope: Deactivated successfully. May 13 07:33:43.830667 systemd-logind[1146]: Session 12 logged out. Waiting for processes to exit. May 13 07:33:43.838524 systemd[1]: Started sshd@12-172.24.4.239:22-172.24.4.1:47944.service. May 13 07:33:43.843886 systemd-logind[1146]: Removed session 12. May 13 07:33:45.064967 sshd[3327]: Accepted publickey for core from 172.24.4.1 port 47944 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:45.068814 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:45.081075 systemd-logind[1146]: New session 13 of user core. May 13 07:33:45.082028 systemd[1]: Started session-13.scope. May 13 07:33:45.815688 sshd[3327]: pam_unix(sshd:session): session closed for user core May 13 07:33:45.821620 systemd-logind[1146]: Session 13 logged out. Waiting for processes to exit. May 13 07:33:45.821968 systemd[1]: sshd@12-172.24.4.239:22-172.24.4.1:47944.service: Deactivated successfully. 
May 13 07:33:45.823886 systemd[1]: session-13.scope: Deactivated successfully. May 13 07:33:45.825896 systemd-logind[1146]: Removed session 13. May 13 07:33:50.828673 systemd[1]: Started sshd@13-172.24.4.239:22-172.24.4.1:47956.service. May 13 07:33:52.065828 sshd[3339]: Accepted publickey for core from 172.24.4.1 port 47956 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:52.069332 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:52.081838 systemd-logind[1146]: New session 14 of user core. May 13 07:33:52.100865 systemd[1]: Started session-14.scope. May 13 07:33:52.852551 sshd[3339]: pam_unix(sshd:session): session closed for user core May 13 07:33:52.860927 systemd-logind[1146]: Session 14 logged out. Waiting for processes to exit. May 13 07:33:52.861834 systemd[1]: sshd@13-172.24.4.239:22-172.24.4.1:47956.service: Deactivated successfully. May 13 07:33:52.868169 systemd[1]: session-14.scope: Deactivated successfully. May 13 07:33:52.870769 systemd-logind[1146]: Removed session 14. May 13 07:33:57.864492 systemd[1]: Started sshd@14-172.24.4.239:22-172.24.4.1:39024.service. May 13 07:33:59.073267 sshd[3351]: Accepted publickey for core from 172.24.4.1 port 39024 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:59.076536 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:59.093070 systemd-logind[1146]: New session 15 of user core. May 13 07:33:59.094768 systemd[1]: Started session-15.scope. May 13 07:33:59.808527 sshd[3351]: pam_unix(sshd:session): session closed for user core May 13 07:33:59.818545 systemd[1]: Started sshd@15-172.24.4.239:22-172.24.4.1:39034.service. May 13 07:33:59.820041 systemd[1]: sshd@14-172.24.4.239:22-172.24.4.1:39024.service: Deactivated successfully. May 13 07:33:59.823835 systemd[1]: session-15.scope: Deactivated successfully. May 13 07:33:59.829044 systemd-logind[1146]: Session 15 logged out. Waiting for processes to exit. May 13 07:33:59.832073 systemd-logind[1146]: Removed session 15. May 13 07:34:01.271526 sshd[3364]: Accepted publickey for core from 172.24.4.1 port 39034 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:01.274645 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:01.286333 systemd-logind[1146]: New session 16 of user core. May 13 07:34:01.287449 systemd[1]: Started session-16.scope. May 13 07:34:02.055046 sshd[3364]: pam_unix(sshd:session): session closed for user core May 13 07:34:02.061767 systemd[1]: sshd@15-172.24.4.239:22-172.24.4.1:39034.service: Deactivated successfully. May 13 07:34:02.063446 systemd[1]: session-16.scope: Deactivated successfully. May 13 07:34:02.065653 systemd-logind[1146]: Session 16 logged out. Waiting for processes to exit. May 13 07:34:02.068940 systemd[1]: Started sshd@16-172.24.4.239:22-172.24.4.1:39048.service. May 13 07:34:02.073209 systemd-logind[1146]: Removed session 16. May 13 07:34:03.443853 sshd[3374]: Accepted publickey for core from 172.24.4.1 port 39048 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:03.447301 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:03.460516 systemd-logind[1146]: New session 17 of user core. May 13 07:34:03.460913 systemd[1]: Started session-17.scope. 
May 13 07:34:05.469276 sshd[3374]: pam_unix(sshd:session): session closed for user core May 13 07:34:05.479504 systemd[1]: Started sshd@17-172.24.4.239:22-172.24.4.1:33780.service. May 13 07:34:05.481002 systemd[1]: sshd@16-172.24.4.239:22-172.24.4.1:39048.service: Deactivated successfully. May 13 07:34:05.485705 systemd[1]: session-17.scope: Deactivated successfully. May 13 07:34:05.489915 systemd-logind[1146]: Session 17 logged out. Waiting for processes to exit. May 13 07:34:05.493106 systemd-logind[1146]: Removed session 17. May 13 07:34:06.879145 sshd[3390]: Accepted publickey for core from 172.24.4.1 port 33780 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:06.882230 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:06.895452 systemd-logind[1146]: New session 18 of user core. May 13 07:34:06.897762 systemd[1]: Started session-18.scope. May 13 07:34:07.940811 sshd[3390]: pam_unix(sshd:session): session closed for user core May 13 07:34:07.953935 systemd[1]: sshd@17-172.24.4.239:22-172.24.4.1:33780.service: Deactivated successfully. May 13 07:34:07.958964 systemd[1]: session-18.scope: Deactivated successfully. May 13 07:34:07.961608 systemd-logind[1146]: Session 18 logged out. Waiting for processes to exit. May 13 07:34:07.966097 systemd[1]: Started sshd@18-172.24.4.239:22-172.24.4.1:33794.service. May 13 07:34:07.972366 systemd-logind[1146]: Removed session 18. May 13 07:34:09.142241 sshd[3400]: Accepted publickey for core from 172.24.4.1 port 33794 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:09.145616 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:09.157474 systemd-logind[1146]: New session 19 of user core. May 13 07:34:09.158366 systemd[1]: Started session-19.scope. May 13 07:34:10.152714 sshd[3400]: pam_unix(sshd:session): session closed for user core May 13 07:34:10.158954 systemd[1]: sshd@18-172.24.4.239:22-172.24.4.1:33794.service: Deactivated successfully. May 13 07:34:10.161167 systemd[1]: session-19.scope: Deactivated successfully. May 13 07:34:10.162993 systemd-logind[1146]: Session 19 logged out. Waiting for processes to exit. May 13 07:34:10.166225 systemd-logind[1146]: Removed session 19. May 13 07:34:15.163010 systemd[1]: Started sshd@19-172.24.4.239:22-172.24.4.1:36512.service. May 13 07:34:16.413284 sshd[3414]: Accepted publickey for core from 172.24.4.1 port 36512 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:16.417100 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:16.449667 systemd-logind[1146]: New session 20 of user core. May 13 07:34:16.450649 systemd[1]: Started session-20.scope. May 13 07:34:17.142937 sshd[3414]: pam_unix(sshd:session): session closed for user core May 13 07:34:17.148854 systemd[1]: sshd@19-172.24.4.239:22-172.24.4.1:36512.service: Deactivated successfully. May 13 07:34:17.150599 systemd[1]: session-20.scope: Deactivated successfully. May 13 07:34:17.152233 systemd-logind[1146]: Session 20 logged out. Waiting for processes to exit. May 13 07:34:17.155825 systemd-logind[1146]: Removed session 20. May 13 07:34:22.155905 systemd[1]: Started sshd@20-172.24.4.239:22-172.24.4.1:36520.service. 
May 13 07:34:23.579902 sshd[3428]: Accepted publickey for core from 172.24.4.1 port 36520 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:23.583259 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:23.594998 systemd-logind[1146]: New session 21 of user core. May 13 07:34:23.596334 systemd[1]: Started session-21.scope. May 13 07:34:24.335061 sshd[3428]: pam_unix(sshd:session): session closed for user core May 13 07:34:24.340910 systemd[1]: sshd@20-172.24.4.239:22-172.24.4.1:36520.service: Deactivated successfully. May 13 07:34:24.342672 systemd[1]: session-21.scope: Deactivated successfully. May 13 07:34:24.343977 systemd-logind[1146]: Session 21 logged out. Waiting for processes to exit. May 13 07:34:24.345515 systemd-logind[1146]: Removed session 21. May 13 07:34:29.363713 systemd[1]: Started sshd@21-172.24.4.239:22-172.24.4.1:48944.service. May 13 07:34:30.396791 sshd[3442]: Accepted publickey for core from 172.24.4.1 port 48944 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:30.401147 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:30.414586 systemd-logind[1146]: New session 22 of user core. May 13 07:34:30.417257 systemd[1]: Started session-22.scope. May 13 07:34:31.142273 sshd[3442]: pam_unix(sshd:session): session closed for user core May 13 07:34:31.157722 systemd[1]: Started sshd@22-172.24.4.239:22-172.24.4.1:48958.service. May 13 07:34:31.159699 systemd[1]: sshd@21-172.24.4.239:22-172.24.4.1:48944.service: Deactivated successfully. May 13 07:34:31.169267 systemd[1]: session-22.scope: Deactivated successfully. May 13 07:34:31.175912 systemd-logind[1146]: Session 22 logged out. Waiting for processes to exit. May 13 07:34:31.180202 systemd-logind[1146]: Removed session 22. May 13 07:34:32.356233 sshd[3453]: Accepted publickey for core from 172.24.4.1 port 48958 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:32.360283 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:32.371746 systemd-logind[1146]: New session 23 of user core. May 13 07:34:32.372313 systemd[1]: Started session-23.scope. May 13 07:34:34.537903 env[1155]: time="2025-05-13T07:34:34.537561765Z" level=info msg="StopContainer for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" with timeout 30 (s)" May 13 07:34:34.541881 env[1155]: time="2025-05-13T07:34:34.541782279Z" level=info msg="Stop container \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" with signal terminated" May 13 07:34:34.596677 systemd[1]: cri-containerd-9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378.scope: Deactivated successfully. May 13 07:34:34.597052 systemd[1]: cri-containerd-9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378.scope: Consumed 1.180s CPU time. May 13 07:34:34.652445 env[1155]: time="2025-05-13T07:34:34.652216977Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 07:34:34.655515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378-rootfs.mount: Deactivated successfully. 
May 13 07:34:34.661577 env[1155]: time="2025-05-13T07:34:34.661536581Z" level=info msg="StopContainer for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" with timeout 2 (s)" May 13 07:34:34.662228 env[1155]: time="2025-05-13T07:34:34.662176639Z" level=info msg="Stop container \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" with signal terminated" May 13 07:34:34.674299 systemd-networkd[987]: lxc_health: Link DOWN May 13 07:34:34.674307 systemd-networkd[987]: lxc_health: Lost carrier May 13 07:34:34.702627 env[1155]: time="2025-05-13T07:34:34.700244886Z" level=info msg="shim disconnected" id=9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378 May 13 07:34:34.702627 env[1155]: time="2025-05-13T07:34:34.700880214Z" level=warning msg="cleaning up after shim disconnected" id=9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378 namespace=k8s.io May 13 07:34:34.702627 env[1155]: time="2025-05-13T07:34:34.700904590Z" level=info msg="cleaning up dead shim" May 13 07:34:34.710577 systemd[1]: cri-containerd-f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd.scope: Deactivated successfully. May 13 07:34:34.710852 systemd[1]: cri-containerd-f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd.scope: Consumed 10.060s CPU time. May 13 07:34:34.735583 env[1155]: time="2025-05-13T07:34:34.735520954Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3510 runtime=io.containerd.runc.v2\n" May 13 07:34:34.750261 env[1155]: time="2025-05-13T07:34:34.750200994Z" level=info msg="StopContainer for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" returns successfully" May 13 07:34:34.751581 env[1155]: time="2025-05-13T07:34:34.751534499Z" level=info msg="StopPodSandbox for \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\"" May 13 07:34:34.752078 env[1155]: time="2025-05-13T07:34:34.752033290Z" level=info msg="Container to stop \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:34.756732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29-shm.mount: Deactivated successfully. May 13 07:34:34.762513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd-rootfs.mount: Deactivated successfully. May 13 07:34:34.771155 systemd[1]: cri-containerd-6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29.scope: Deactivated successfully. May 13 07:34:34.792293 env[1155]: time="2025-05-13T07:34:34.792145524Z" level=info msg="shim disconnected" id=f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd May 13 07:34:34.792646 env[1155]: time="2025-05-13T07:34:34.792608047Z" level=warning msg="cleaning up after shim disconnected" id=f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd namespace=k8s.io May 13 07:34:34.792800 env[1155]: time="2025-05-13T07:34:34.792780983Z" level=info msg="cleaning up dead shim" May 13 07:34:34.801738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29-rootfs.mount: Deactivated successfully. 
May 13 07:34:34.827113 env[1155]: time="2025-05-13T07:34:34.827049281Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3554 runtime=io.containerd.runc.v2\n" May 13 07:34:34.836066 env[1155]: time="2025-05-13T07:34:34.835989208Z" level=info msg="shim disconnected" id=6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29 May 13 07:34:34.836509 env[1155]: time="2025-05-13T07:34:34.836483581Z" level=warning msg="cleaning up after shim disconnected" id=6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29 namespace=k8s.io May 13 07:34:34.836656 env[1155]: time="2025-05-13T07:34:34.836636970Z" level=info msg="cleaning up dead shim" May 13 07:34:34.837234 env[1155]: time="2025-05-13T07:34:34.837205594Z" level=info msg="StopContainer for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" returns successfully" May 13 07:34:34.837962 env[1155]: time="2025-05-13T07:34:34.837936042Z" level=info msg="StopPodSandbox for \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\"" May 13 07:34:34.838144 env[1155]: time="2025-05-13T07:34:34.838118586Z" level=info msg="Container to stop \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:34.838242 env[1155]: time="2025-05-13T07:34:34.838221440Z" level=info msg="Container to stop \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:34.838338 env[1155]: time="2025-05-13T07:34:34.838316209Z" level=info msg="Container to stop \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:34.838502 env[1155]: time="2025-05-13T07:34:34.838468566Z" level=info msg="Container to stop \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:34.838629 env[1155]: time="2025-05-13T07:34:34.838608220Z" level=info msg="Container to stop \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:34.851623 systemd[1]: cri-containerd-e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef.scope: Deactivated successfully. 
May 13 07:34:34.861938 env[1155]: time="2025-05-13T07:34:34.861879819Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3568 runtime=io.containerd.runc.v2\n" May 13 07:34:34.862600 env[1155]: time="2025-05-13T07:34:34.862568619Z" level=info msg="TearDown network for sandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" successfully" May 13 07:34:34.862744 env[1155]: time="2025-05-13T07:34:34.862716737Z" level=info msg="StopPodSandbox for \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" returns successfully" May 13 07:34:34.892624 kubelet[1904]: I0513 07:34:34.891859 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgvkn\" (UniqueName: \"kubernetes.io/projected/2514124f-a339-4011-9375-a8eeca905934-kube-api-access-lgvkn\") pod \"2514124f-a339-4011-9375-a8eeca905934\" (UID: \"2514124f-a339-4011-9375-a8eeca905934\") " May 13 07:34:34.892624 kubelet[1904]: I0513 07:34:34.891958 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2514124f-a339-4011-9375-a8eeca905934-cilium-config-path\") pod \"2514124f-a339-4011-9375-a8eeca905934\" (UID: \"2514124f-a339-4011-9375-a8eeca905934\") " May 13 07:34:34.897797 kubelet[1904]: I0513 07:34:34.897760 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2514124f-a339-4011-9375-a8eeca905934-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2514124f-a339-4011-9375-a8eeca905934" (UID: "2514124f-a339-4011-9375-a8eeca905934"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 07:34:34.909115 kubelet[1904]: I0513 07:34:34.908689 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2514124f-a339-4011-9375-a8eeca905934-kube-api-access-lgvkn" (OuterVolumeSpecName: "kube-api-access-lgvkn") pod "2514124f-a339-4011-9375-a8eeca905934" (UID: "2514124f-a339-4011-9375-a8eeca905934"). InnerVolumeSpecName "kube-api-access-lgvkn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:34.911276 env[1155]: time="2025-05-13T07:34:34.909725464Z" level=info msg="shim disconnected" id=e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef May 13 07:34:34.911276 env[1155]: time="2025-05-13T07:34:34.909797920Z" level=warning msg="cleaning up after shim disconnected" id=e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef namespace=k8s.io May 13 07:34:34.911276 env[1155]: time="2025-05-13T07:34:34.909814112Z" level=info msg="cleaning up dead shim" May 13 07:34:34.922644 env[1155]: time="2025-05-13T07:34:34.922594014Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3600 runtime=io.containerd.runc.v2\n" May 13 07:34:34.923228 env[1155]: time="2025-05-13T07:34:34.923191071Z" level=info msg="TearDown network for sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" successfully" May 13 07:34:34.923409 env[1155]: time="2025-05-13T07:34:34.923356794Z" level=info msg="StopPodSandbox for \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" returns successfully" May 13 07:34:34.992824 kubelet[1904]: I0513 07:34:34.992781 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-hubble-tls\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.993048 kubelet[1904]: I0513 07:34:34.993029 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-xtables-lock\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.993224 kubelet[1904]: I0513 07:34:34.993197 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-net\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.993401 kubelet[1904]: I0513 07:34:34.993362 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-hostproc\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.993664 kubelet[1904]: I0513 07:34:34.993639 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93afe149-6ef0-456d-88a5-c61458570676-clustermesh-secrets\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994267 kubelet[1904]: I0513 07:34:34.994234 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-cgroup\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994399 kubelet[1904]: I0513 07:34:34.994276 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cni-path\") pod 
\"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994399 kubelet[1904]: I0513 07:34:34.994302 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93afe149-6ef0-456d-88a5-c61458570676-cilium-config-path\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994399 kubelet[1904]: I0513 07:34:34.994326 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-run\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994399 kubelet[1904]: I0513 07:34:34.994357 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-kernel\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994606 kubelet[1904]: I0513 07:34:34.994414 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-bpf-maps\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994606 kubelet[1904]: I0513 07:34:34.994467 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr9ws\" (UniqueName: \"kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-kube-api-access-rr9ws\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994606 kubelet[1904]: I0513 07:34:34.994490 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-lib-modules\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994606 kubelet[1904]: I0513 07:34:34.994521 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-etc-cni-netd\") pod \"93afe149-6ef0-456d-88a5-c61458570676\" (UID: \"93afe149-6ef0-456d-88a5-c61458570676\") " May 13 07:34:34.994606 kubelet[1904]: I0513 07:34:34.994603 1904 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgvkn\" (UniqueName: \"kubernetes.io/projected/2514124f-a339-4011-9375-a8eeca905934-kube-api-access-lgvkn\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:34.994808 kubelet[1904]: I0513 07:34:34.994621 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2514124f-a339-4011-9375-a8eeca905934-cilium-config-path\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:34.994808 kubelet[1904]: I0513 07:34:34.993803 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.994808 kubelet[1904]: I0513 07:34:34.993822 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.994808 kubelet[1904]: I0513 07:34:34.993854 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-hostproc" (OuterVolumeSpecName: "hostproc") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.994808 kubelet[1904]: I0513 07:34:34.994652 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.995056 kubelet[1904]: I0513 07:34:34.994707 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.995056 kubelet[1904]: I0513 07:34:34.994726 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cni-path" (OuterVolumeSpecName: "cni-path") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.995459 kubelet[1904]: I0513 07:34:34.995437 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.995617 kubelet[1904]: I0513 07:34:34.995598 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.995783 kubelet[1904]: I0513 07:34:34.995762 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.996596 kubelet[1904]: I0513 07:34:34.996541 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:34.997317 kubelet[1904]: I0513 07:34:34.997289 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93afe149-6ef0-456d-88a5-c61458570676-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 07:34:35.000765 kubelet[1904]: I0513 07:34:35.000730 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93afe149-6ef0-456d-88a5-c61458570676-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 07:34:35.001645 kubelet[1904]: I0513 07:34:35.001608 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:35.003441 kubelet[1904]: I0513 07:34:35.003416 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-kube-api-access-rr9ws" (OuterVolumeSpecName: "kube-api-access-rr9ws") pod "93afe149-6ef0-456d-88a5-c61458570676" (UID: "93afe149-6ef0-456d-88a5-c61458570676"). InnerVolumeSpecName "kube-api-access-rr9ws". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:35.095235 kubelet[1904]: I0513 07:34:35.095014 1904 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-etc-cni-netd\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.095687 kubelet[1904]: I0513 07:34:35.095647 1904 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-hubble-tls\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.095932 kubelet[1904]: I0513 07:34:35.095891 1904 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-xtables-lock\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.096246 kubelet[1904]: I0513 07:34:35.096194 1904 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-net\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.096541 kubelet[1904]: I0513 07:34:35.096505 1904 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-hostproc\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.096804 kubelet[1904]: I0513 07:34:35.096739 1904 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93afe149-6ef0-456d-88a5-c61458570676-clustermesh-secrets\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.097046 kubelet[1904]: I0513 07:34:35.097007 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-cgroup\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.097536 kubelet[1904]: I0513 07:34:35.097497 1904 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cni-path\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.097787 kubelet[1904]: I0513 07:34:35.097753 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93afe149-6ef0-456d-88a5-c61458570676-cilium-config-path\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.098151 kubelet[1904]: I0513 07:34:35.098111 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-cilium-run\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.098422 kubelet[1904]: I0513 07:34:35.098349 1904 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.098716 kubelet[1904]: I0513 07:34:35.098664 1904 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-bpf-maps\") on node 
\"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.099359 kubelet[1904]: I0513 07:34:35.099317 1904 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93afe149-6ef0-456d-88a5-c61458570676-lib-modules\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.099698 kubelet[1904]: I0513 07:34:35.099658 1904 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rr9ws\" (UniqueName: \"kubernetes.io/projected/93afe149-6ef0-456d-88a5-c61458570676-kube-api-access-rr9ws\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\"" May 13 07:34:35.204947 kubelet[1904]: I0513 07:34:35.204822 1904 scope.go:117] "RemoveContainer" containerID="f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd" May 13 07:34:35.220097 systemd[1]: Removed slice kubepods-burstable-pod93afe149_6ef0_456d_88a5_c61458570676.slice. May 13 07:34:35.220363 systemd[1]: kubepods-burstable-pod93afe149_6ef0_456d_88a5_c61458570676.slice: Consumed 10.168s CPU time. May 13 07:34:35.237328 env[1155]: time="2025-05-13T07:34:35.237183395Z" level=info msg="RemoveContainer for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\"" May 13 07:34:35.250268 systemd[1]: Removed slice kubepods-besteffort-pod2514124f_a339_4011_9375_a8eeca905934.slice. May 13 07:34:35.250548 systemd[1]: kubepods-besteffort-pod2514124f_a339_4011_9375_a8eeca905934.slice: Consumed 1.214s CPU time. May 13 07:34:35.318052 env[1155]: time="2025-05-13T07:34:35.317949388Z" level=info msg="RemoveContainer for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" returns successfully" May 13 07:34:35.320112 kubelet[1904]: I0513 07:34:35.319822 1904 scope.go:117] "RemoveContainer" containerID="5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd" May 13 07:34:35.330264 env[1155]: time="2025-05-13T07:34:35.326247987Z" level=info msg="RemoveContainer for \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\"" May 13 07:34:35.343064 env[1155]: time="2025-05-13T07:34:35.342961764Z" level=info msg="RemoveContainer for \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\" returns successfully" May 13 07:34:35.343943 kubelet[1904]: I0513 07:34:35.343726 1904 scope.go:117] "RemoveContainer" containerID="500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da" May 13 07:34:35.355055 env[1155]: time="2025-05-13T07:34:35.353610486Z" level=info msg="RemoveContainer for \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\"" May 13 07:34:35.363293 env[1155]: time="2025-05-13T07:34:35.363112325Z" level=info msg="RemoveContainer for \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\" returns successfully" May 13 07:34:35.363774 kubelet[1904]: I0513 07:34:35.363707 1904 scope.go:117] "RemoveContainer" containerID="3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751" May 13 07:34:35.385865 env[1155]: time="2025-05-13T07:34:35.385806756Z" level=info msg="RemoveContainer for \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\"" May 13 07:34:35.391856 env[1155]: time="2025-05-13T07:34:35.391567526Z" level=info msg="RemoveContainer for \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\" returns successfully" May 13 07:34:35.392293 kubelet[1904]: I0513 07:34:35.392251 1904 scope.go:117] "RemoveContainer" containerID="2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7" May 13 07:34:35.393778 
env[1155]: time="2025-05-13T07:34:35.393738041Z" level=info msg="RemoveContainer for \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\"" May 13 07:34:35.398310 env[1155]: time="2025-05-13T07:34:35.398266616Z" level=info msg="RemoveContainer for \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\" returns successfully" May 13 07:34:35.398656 kubelet[1904]: I0513 07:34:35.398596 1904 scope.go:117] "RemoveContainer" containerID="f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd" May 13 07:34:35.399079 env[1155]: time="2025-05-13T07:34:35.398950887Z" level=error msg="ContainerStatus for \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\": not found" May 13 07:34:35.399306 kubelet[1904]: E0513 07:34:35.399274 1904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\": not found" containerID="f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd" May 13 07:34:35.399587 kubelet[1904]: I0513 07:34:35.399446 1904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd"} err="failed to get container status \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0293b7f29cb7a7e598dfa3519ffeaae0ab6c6817821110c544faa8a0b5ea4fd\": not found" May 13 07:34:35.399737 kubelet[1904]: I0513 07:34:35.399720 1904 scope.go:117] "RemoveContainer" containerID="5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd" May 13 07:34:35.400127 env[1155]: time="2025-05-13T07:34:35.400062906Z" level=error msg="ContainerStatus for \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\": not found" May 13 07:34:35.400284 kubelet[1904]: E0513 07:34:35.400249 1904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\": not found" containerID="5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd" May 13 07:34:35.400329 kubelet[1904]: I0513 07:34:35.400288 1904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd"} err="failed to get container status \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d966317c0b86181e8c95ad34c093ab58e04eba13f0378a69977d46285b2eebd\": not found" May 13 07:34:35.400329 kubelet[1904]: I0513 07:34:35.400319 1904 scope.go:117] "RemoveContainer" containerID="500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da" May 13 07:34:35.400598 env[1155]: time="2025-05-13T07:34:35.400500461Z" level=error msg="ContainerStatus for \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\": not found" May 13 07:34:35.400903 kubelet[1904]: E0513 07:34:35.400833 1904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\": not found" containerID="500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da" May 13 07:34:35.400989 kubelet[1904]: I0513 07:34:35.400870 1904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da"} err="failed to get container status \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\": rpc error: code = NotFound desc = an error occurred when try to find container \"500c51763ba930058cf8f683be4afa26d0e55b077474bfb512c51b9f3fe3b8da\": not found" May 13 07:34:35.401035 kubelet[1904]: I0513 07:34:35.400967 1904 scope.go:117] "RemoveContainer" containerID="3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751" May 13 07:34:35.401339 env[1155]: time="2025-05-13T07:34:35.401242631Z" level=error msg="ContainerStatus for \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\": not found" May 13 07:34:35.401525 kubelet[1904]: E0513 07:34:35.401493 1904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\": not found" containerID="3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751" May 13 07:34:35.401584 kubelet[1904]: I0513 07:34:35.401522 1904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751"} err="failed to get container status \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e08dcb62bc2defbcd923c2ff507a3f3ae28e899f9e33a24a11ef32abfcb7751\": not found" May 13 07:34:35.401584 kubelet[1904]: I0513 07:34:35.401568 1904 scope.go:117] "RemoveContainer" containerID="2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7" May 13 07:34:35.401969 env[1155]: time="2025-05-13T07:34:35.401880565Z" level=error msg="ContainerStatus for \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\": not found" May 13 07:34:35.402173 kubelet[1904]: E0513 07:34:35.402140 1904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\": not found" containerID="2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7" May 13 07:34:35.402226 kubelet[1904]: I0513 07:34:35.402170 1904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7"} err="failed to get container status 
\"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2944e6714024fb29f43c4274c5fcc1dcf7eaae647e03930d03479164bc432fc7\": not found" May 13 07:34:35.402226 kubelet[1904]: I0513 07:34:35.402199 1904 scope.go:117] "RemoveContainer" containerID="9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378" May 13 07:34:35.403315 env[1155]: time="2025-05-13T07:34:35.403280346Z" level=info msg="RemoveContainer for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\"" May 13 07:34:35.407756 env[1155]: time="2025-05-13T07:34:35.407722658Z" level=info msg="RemoveContainer for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" returns successfully" May 13 07:34:35.407936 kubelet[1904]: I0513 07:34:35.407916 1904 scope.go:117] "RemoveContainer" containerID="9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378" May 13 07:34:35.408246 env[1155]: time="2025-05-13T07:34:35.408155946Z" level=error msg="ContainerStatus for \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\": not found" May 13 07:34:35.408417 kubelet[1904]: E0513 07:34:35.408370 1904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\": not found" containerID="9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378" May 13 07:34:35.408490 kubelet[1904]: I0513 07:34:35.408424 1904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378"} err="failed to get container status \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\": rpc error: code = NotFound desc = an error occurred when try to find container \"9735b6d6933eb327ebf61c758f79824350fe8b762edf6b870166f58af30cb378\": not found" May 13 07:34:35.583629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef-rootfs.mount: Deactivated successfully. May 13 07:34:35.584341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef-shm.mount: Deactivated successfully. May 13 07:34:35.585010 systemd[1]: var-lib-kubelet-pods-2514124f\x2da339\x2d4011\x2d9375\x2da8eeca905934-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgvkn.mount: Deactivated successfully. May 13 07:34:35.585631 systemd[1]: var-lib-kubelet-pods-93afe149\x2d6ef0\x2d456d\x2d88a5\x2dc61458570676-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drr9ws.mount: Deactivated successfully. May 13 07:34:35.587319 systemd[1]: var-lib-kubelet-pods-93afe149\x2d6ef0\x2d456d\x2d88a5\x2dc61458570676-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 07:34:35.587839 systemd[1]: var-lib-kubelet-pods-93afe149\x2d6ef0\x2d456d\x2d88a5\x2dc61458570676-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 07:34:35.870971 kubelet[1904]: E0513 07:34:35.870844 1904 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 07:34:36.545094 sshd[3453]: pam_unix(sshd:session): session closed for user core May 13 07:34:36.551573 systemd[1]: Started sshd@23-172.24.4.239:22-172.24.4.1:41306.service. May 13 07:34:36.558849 systemd[1]: sshd@22-172.24.4.239:22-172.24.4.1:48958.service: Deactivated successfully. May 13 07:34:36.561077 systemd[1]: session-23.scope: Deactivated successfully. May 13 07:34:36.561888 systemd[1]: session-23.scope: Consumed 1.135s CPU time. May 13 07:34:36.567148 systemd-logind[1146]: Session 23 logged out. Waiting for processes to exit. May 13 07:34:36.571380 systemd-logind[1146]: Removed session 23. May 13 07:34:36.684753 kubelet[1904]: I0513 07:34:36.684587 1904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2514124f-a339-4011-9375-a8eeca905934" path="/var/lib/kubelet/pods/2514124f-a339-4011-9375-a8eeca905934/volumes" May 13 07:34:36.688026 kubelet[1904]: I0513 07:34:36.687983 1904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93afe149-6ef0-456d-88a5-c61458570676" path="/var/lib/kubelet/pods/93afe149-6ef0-456d-88a5-c61458570676/volumes" May 13 07:34:37.555862 kubelet[1904]: I0513 07:34:37.555699 1904 setters.go:602] "Node became not ready" node="ci-3510-3-7-n-1ba5f14697.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T07:34:37Z","lastTransitionTime":"2025-05-13T07:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 07:34:37.827063 sshd[3616]: Accepted publickey for core from 172.24.4.1 port 41306 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:37.830795 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:37.846570 systemd-logind[1146]: New session 24 of user core. May 13 07:34:37.847580 systemd[1]: Started session-24.scope. May 13 07:34:39.103055 kubelet[1904]: I0513 07:34:39.103005 1904 memory_manager.go:355] "RemoveStaleState removing state" podUID="93afe149-6ef0-456d-88a5-c61458570676" containerName="cilium-agent" May 13 07:34:39.103055 kubelet[1904]: I0513 07:34:39.103044 1904 memory_manager.go:355] "RemoveStaleState removing state" podUID="2514124f-a339-4011-9375-a8eeca905934" containerName="cilium-operator" May 13 07:34:39.110614 systemd[1]: Created slice kubepods-burstable-poddbc526cc_d901_4986_b03a_6dd16255f517.slice. 
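With the old cilium-agent gone the node has no working CNI plugin, so kubelet flips the node's Ready condition to False ("cni plugin not initialized") until the replacement pod, whose kubepods-burstable slice was just created, restores networking. The same condition is visible from the API; a sketch, with the node name taken from the log:

    $ kubectl get node ci-3510-3-7-n-1ba5f14697.novalocal \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
    container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized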
May 13 07:34:39.121212 kubelet[1904]: I0513 07:34:39.121157 1904 status_manager.go:890] "Failed to get status for pod" podUID="dbc526cc-d901-4986-b03a-6dd16255f517" pod="kube-system/cilium-zzf72" err="pods \"cilium-zzf72\" is forbidden: User \"system:node:ci-3510-3-7-n-1ba5f14697.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-n-1ba5f14697.novalocal' and this object" May 13 07:34:39.135644 kubelet[1904]: I0513 07:34:39.135581 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-kernel\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.135644 kubelet[1904]: I0513 07:34:39.135652 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlkd\" (UniqueName: \"kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-kube-api-access-nxlkd\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.135936 kubelet[1904]: I0513 07:34:39.135683 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-bpf-maps\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.135936 kubelet[1904]: I0513 07:34:39.135712 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-cgroup\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.135936 kubelet[1904]: I0513 07:34:39.135739 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-lib-modules\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.135936 kubelet[1904]: I0513 07:34:39.135810 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-hostproc\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.135936 kubelet[1904]: I0513 07:34:39.135858 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cni-path\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136153 kubelet[1904]: I0513 07:34:39.135936 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-hubble-tls\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136153 kubelet[1904]: I0513 07:34:39.135969 1904 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-net\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136153 kubelet[1904]: I0513 07:34:39.135996 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-config-path\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136153 kubelet[1904]: I0513 07:34:39.136032 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-ipsec-secrets\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136153 kubelet[1904]: I0513 07:34:39.136071 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-clustermesh-secrets\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136370 kubelet[1904]: I0513 07:34:39.136133 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-etc-cni-netd\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136370 kubelet[1904]: I0513 07:34:39.136172 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-run\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.136370 kubelet[1904]: I0513 07:34:39.136205 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-xtables-lock\") pod \"cilium-zzf72\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") " pod="kube-system/cilium-zzf72" May 13 07:34:39.336548 sshd[3616]: pam_unix(sshd:session): session closed for user core May 13 07:34:39.340110 systemd[1]: sshd@23-172.24.4.239:22-172.24.4.1:41306.service: Deactivated successfully. May 13 07:34:39.340795 systemd[1]: session-24.scope: Deactivated successfully. May 13 07:34:39.341560 systemd-logind[1146]: Session 24 logged out. Waiting for processes to exit. May 13 07:34:39.342910 systemd[1]: Started sshd@24-172.24.4.239:22-172.24.4.1:41308.service. May 13 07:34:39.345024 systemd-logind[1146]: Removed session 24. May 13 07:34:39.418843 env[1155]: time="2025-05-13T07:34:39.416894308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzf72,Uid:dbc526cc-d901-4986-b03a-6dd16255f517,Namespace:kube-system,Attempt:0,}" May 13 07:34:39.455096 env[1155]: time="2025-05-13T07:34:39.454912281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:34:39.455756 env[1155]: time="2025-05-13T07:34:39.455671073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:34:39.455997 env[1155]: time="2025-05-13T07:34:39.455937837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:34:39.456981 env[1155]: time="2025-05-13T07:34:39.456896445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa pid=3640 runtime=io.containerd.runc.v2 May 13 07:34:39.494641 systemd[1]: Started cri-containerd-e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa.scope. May 13 07:34:39.546366 env[1155]: time="2025-05-13T07:34:39.546303533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzf72,Uid:dbc526cc-d901-4986-b03a-6dd16255f517,Namespace:kube-system,Attempt:0,} returns sandbox id \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\"" May 13 07:34:39.552617 env[1155]: time="2025-05-13T07:34:39.552577934Z" level=info msg="CreateContainer within sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 07:34:39.571489 env[1155]: time="2025-05-13T07:34:39.571428557Z" level=info msg="CreateContainer within sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\"" May 13 07:34:39.573151 env[1155]: time="2025-05-13T07:34:39.573119418Z" level=info msg="StartContainer for \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\"" May 13 07:34:39.592322 systemd[1]: Started cri-containerd-a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e.scope. May 13 07:34:39.612490 systemd[1]: cri-containerd-a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e.scope: Deactivated successfully. 
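The "loading plugin …" and "starting signal loop" lines come from the containerd-shim-runc-v2 process containerd starts per sandbox; its state lives under the /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id> path logged above (pid 3640). Note that the mount-cgroup scope is deactivated roughly 20 ms after StartContainer: the init container died at birth, and the reason follows below. A sketch of inspecting the shim's task from the node, in the k8s.io namespace the log names:

    $ ctr -n k8s.io containers ls | grep e233aa07   # the sandbox's container record
    $ ctr -n k8s.io tasks ls | grep e233aa07        # its running task (the shim-managed init process)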
May 13 07:34:39.634273 env[1155]: time="2025-05-13T07:34:39.634187017Z" level=info msg="shim disconnected" id=a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e May 13 07:34:39.634273 env[1155]: time="2025-05-13T07:34:39.634259855Z" level=warning msg="cleaning up after shim disconnected" id=a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e namespace=k8s.io May 13 07:34:39.634273 env[1155]: time="2025-05-13T07:34:39.634272258Z" level=info msg="cleaning up dead shim" May 13 07:34:39.649591 env[1155]: time="2025-05-13T07:34:39.649469215Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T07:34:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 13 07:34:39.650720 env[1155]: time="2025-05-13T07:34:39.650364686Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" May 13 07:34:39.651667 env[1155]: time="2025-05-13T07:34:39.651477495Z" level=error msg="Failed to pipe stderr of container \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\"" error="reading from a closed fifo" May 13 07:34:39.651945 env[1155]: time="2025-05-13T07:34:39.651518533Z" level=error msg="Failed to pipe stdout of container \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\"" error="reading from a closed fifo" May 13 07:34:39.655155 env[1155]: time="2025-05-13T07:34:39.655048003Z" level=error msg="StartContainer for \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 13 07:34:39.655458 kubelet[1904]: E0513 07:34:39.655359 1904 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e" May 13 07:34:39.655834 kubelet[1904]: E0513 07:34:39.655693 1904 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 13 07:34:39.655834 kubelet[1904]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 13 07:34:39.655834 kubelet[1904]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 13 07:34:39.655834 kubelet[1904]: rm /hostbin/cilium-mount May 13 07:34:39.656348 kubelet[1904]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxlkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zzf72_kube-system(dbc526cc-d901-4986-b03a-6dd16255f517): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 13 07:34:39.656348 kubelet[1904]: > logger="UnhandledError" May 13 07:34:39.657658 kubelet[1904]: E0513 07:34:39.657459 1904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zzf72" podUID="dbc526cc-d901-4986-b03a-6dd16255f517" May 13 07:34:40.310703 env[1155]: time="2025-05-13T07:34:40.310509436Z" level=info msg="CreateContainer within sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 13 07:34:40.356687 env[1155]: time="2025-05-13T07:34:40.356564889Z" level=info msg="CreateContainer within sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\"" May 13 07:34:40.359502 env[1155]: time="2025-05-13T07:34:40.359325909Z" level=info msg="StartContainer for \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\"" May 13 07:34:40.407795 systemd[1]: Started cri-containerd-3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7.scope. May 13 07:34:40.423500 systemd[1]: cri-containerd-3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7.scope: Deactivated successfully. 
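The &Container{…} dump above is kubelet echoing the spec of the failing mount-cgroup init container. Its Command, laid out as the shell it would run (CGROUP_ROOT and BIN_PATH values from the logged EnvVars):

    sh -ec '
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;                  # stage the helper on the host through the cni-path mount (/opt/cni/bin)
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
        "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;                       # enter PID 1's namespaces and mount cgroup2 at /run/cilium/cgroupv2
      rm /hostbin/cilium-mount'                                        # remove the staged helper again

The script never gets to run: "write /proc/self/attr/keycreate: invalid argument" happens inside runc's init, before exec. Because the spec carries SELinuxOptions{Type:spc_t,Level:s0}, runc labels the container process, and part of that is writing the label to /proc/self/attr/keycreate so kernel keyrings created by the container inherit it; this node's kernel rejects that write with EINVAL, so every start attempt dies the same way. The same interface can be poked from a shell (a sketch; the full context string below is an assumption, only the spc_t type comes from the spec):

    $ cat /proc/self/attr/current                                        # SELinux label of the current process, if any
    $ echo -n 'system_u:system_r:spc_t:s0' > /proc/self/attr/keycreate   # the write runc attempts; fails with EINVAL on this node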
May 13 07:34:40.437718 env[1155]: time="2025-05-13T07:34:40.437658573Z" level=info msg="shim disconnected" id=3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7 May 13 07:34:40.438247 env[1155]: time="2025-05-13T07:34:40.438223439Z" level=warning msg="cleaning up after shim disconnected" id=3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7 namespace=k8s.io May 13 07:34:40.438337 env[1155]: time="2025-05-13T07:34:40.438320241Z" level=info msg="cleaning up dead shim" May 13 07:34:40.447342 env[1155]: time="2025-05-13T07:34:40.447260072Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T07:34:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 13 07:34:40.447677 env[1155]: time="2025-05-13T07:34:40.447609210Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" May 13 07:34:40.448513 env[1155]: time="2025-05-13T07:34:40.448454245Z" level=error msg="Failed to pipe stdout of container \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\"" error="reading from a closed fifo" May 13 07:34:40.448958 env[1155]: time="2025-05-13T07:34:40.448672387Z" level=error msg="Failed to pipe stderr of container \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\"" error="reading from a closed fifo" May 13 07:34:40.452094 env[1155]: time="2025-05-13T07:34:40.452053407Z" level=error msg="StartContainer for \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 13 07:34:40.453112 kubelet[1904]: E0513 07:34:40.452424 1904 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7" May 13 07:34:40.453112 kubelet[1904]: E0513 07:34:40.452631 1904 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 13 07:34:40.453112 kubelet[1904]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 13 07:34:40.453112 kubelet[1904]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 13 07:34:40.453112 kubelet[1904]: rm /hostbin/cilium-mount May 13 07:34:40.453112 kubelet[1904]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxlkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zzf72_kube-system(dbc526cc-d901-4986-b03a-6dd16255f517): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 13 07:34:40.453112 kubelet[1904]: > logger="UnhandledError" May 13 07:34:40.454237 kubelet[1904]: E0513 07:34:40.454171 1904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zzf72" podUID="dbc526cc-d901-4986-b03a-6dd16255f517" May 13 07:34:40.577584 sshd[3632]: Accepted publickey for core from 172.24.4.1 port 41308 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:34:40.583296 sshd[3632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:34:40.602122 systemd-logind[1146]: New session 25 of user core. May 13 07:34:40.604678 systemd[1]: Started session-25.scope. May 13 07:34:40.873721 kubelet[1904]: E0513 07:34:40.873084 1904 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 07:34:41.253906 systemd[1]: run-containerd-runc-k8s.io-3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7-runc.h1NNoC.mount: Deactivated successfully. May 13 07:34:41.257865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7-rootfs.mount: Deactivated successfully. 
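Attempt 1 fails identically to attempt 0, so the pod is wedged: kubelet will keep backing off and retrying StartContainer, and each retry hits the same keycreate EINVAL. The failure also surfaces as pod events; a sketch of reading them from the API (pod name and namespace from the log):

    $ kubectl -n kube-system describe pod cilium-zzf72 | sed -n '/^Events:/,$p'
    # expect Warning/Failed events carrying the same "write /proc/self/attr/keycreate: invalid argument" RunContainerError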
May 13 07:34:41.306553 kubelet[1904]: I0513 07:34:41.306500 1904 scope.go:117] "RemoveContainer" containerID="a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e" May 13 07:34:41.308168 env[1155]: time="2025-05-13T07:34:41.308087321Z" level=info msg="StopPodSandbox for \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\"" May 13 07:34:41.308550 env[1155]: time="2025-05-13T07:34:41.308477197Z" level=info msg="Container to stop \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:41.308753 env[1155]: time="2025-05-13T07:34:41.308712791Z" level=info msg="Container to stop \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:41.317180 env[1155]: time="2025-05-13T07:34:41.314506044Z" level=info msg="RemoveContainer for \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\"" May 13 07:34:41.315778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa-shm.mount: Deactivated successfully. May 13 07:34:41.327478 env[1155]: time="2025-05-13T07:34:41.327195642Z" level=info msg="RemoveContainer for \"a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e\" returns successfully" May 13 07:34:41.329662 systemd[1]: cri-containerd-e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa.scope: Deactivated successfully. May 13 07:34:41.336987 sshd[3632]: pam_unix(sshd:session): session closed for user core May 13 07:34:41.343040 systemd[1]: Started sshd@25-172.24.4.239:22-172.24.4.1:41312.service. May 13 07:34:41.345652 systemd[1]: sshd@24-172.24.4.239:22-172.24.4.1:41308.service: Deactivated successfully. May 13 07:34:41.346344 systemd[1]: session-25.scope: Deactivated successfully. May 13 07:34:41.347157 systemd-logind[1146]: Session 25 logged out. Waiting for processes to exit. May 13 07:34:41.356505 systemd-logind[1146]: Removed session 25. May 13 07:34:41.385770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa-rootfs.mount: Deactivated successfully. 
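After two failed start attempts kubelet removes the dead init container and stops the whole e233aa07… sandbox; the TearDown and UnmountVolume entries that follow are the same volume-teardown choreography seen earlier for pod 93afe149…, now applied to cilium-zzf72, after which the pod will be set up again from scratch. To follow one sandbox or pod through this journal, filtering by ID works well; a sketch (kubelet's PID 1904 is visible in every kubelet line above):

    $ journalctl --since "07:34:00" | grep e233aa07e0933bc2          # everything that touched this sandbox: containerd, kubelet, systemd mounts
    $ journalctl _PID=1904 --since "07:34:00" | grep cilium-zzf72    # kubelet's view of the pod across sandbox recreations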
May 13 07:34:41.395659 env[1155]: time="2025-05-13T07:34:41.395579334Z" level=info msg="shim disconnected" id=e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa
May 13 07:34:41.395936 env[1155]: time="2025-05-13T07:34:41.395913265Z" level=warning msg="cleaning up after shim disconnected" id=e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa namespace=k8s.io
May 13 07:34:41.396054 env[1155]: time="2025-05-13T07:34:41.396023603Z" level=info msg="cleaning up dead shim"
May 13 07:34:41.411029 env[1155]: time="2025-05-13T07:34:41.410968807Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\n"
May 13 07:34:41.411611 env[1155]: time="2025-05-13T07:34:41.411578998Z" level=info msg="TearDown network for sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" successfully"
May 13 07:34:41.411750 env[1155]: time="2025-05-13T07:34:41.411726487Z" level=info msg="StopPodSandbox for \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" returns successfully"
May 13 07:34:41.476917 kubelet[1904]: I0513 07:34:41.476858 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-hubble-tls\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.476917 kubelet[1904]: I0513 07:34:41.476915 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-run\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.476942 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-ipsec-secrets\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.476961 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-etc-cni-netd\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.476997 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-config-path\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477018 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cni-path\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477055 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-clustermesh-secrets\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477093 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxlkd\" (UniqueName: \"kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-kube-api-access-nxlkd\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477112 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-cgroup\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477131 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-net\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477152 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-kernel\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477175 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-bpf-maps\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477206 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-xtables-lock\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477238 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-lib-modules\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477256 1904 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-hostproc\") pod \"dbc526cc-d901-4986-b03a-6dd16255f517\" (UID: \"dbc526cc-d901-4986-b03a-6dd16255f517\") "
May 13 07:34:41.477365 kubelet[1904]: I0513 07:34:41.477334 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-hostproc" (OuterVolumeSpecName: "hostproc") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478235 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478273 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478316 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478337 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478355 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478475 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.479416 kubelet[1904]: I0513 07:34:41.478585 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.484971 kubelet[1904]: I0513 07:34:41.481313 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 13 07:34:41.484971 kubelet[1904]: I0513 07:34:41.481358 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.484971 kubelet[1904]: I0513 07:34:41.481410 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cni-path" (OuterVolumeSpecName: "cni-path") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 07:34:41.484298 systemd[1]: var-lib-kubelet-pods-dbc526cc\x2dd901\x2d4986\x2db03a\x2d6dd16255f517-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnxlkd.mount: Deactivated successfully.
May 13 07:34:41.487399 kubelet[1904]: I0513 07:34:41.487188 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-kube-api-access-nxlkd" (OuterVolumeSpecName: "kube-api-access-nxlkd") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "kube-api-access-nxlkd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 07:34:41.489843 systemd[1]: var-lib-kubelet-pods-dbc526cc\x2dd901\x2d4986\x2db03a\x2d6dd16255f517-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 07:34:41.495332 kubelet[1904]: I0513 07:34:41.495291 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 07:34:41.503081 kubelet[1904]: I0513 07:34:41.502916 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 13 07:34:41.503403 kubelet[1904]: I0513 07:34:41.503340 1904 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dbc526cc-d901-4986-b03a-6dd16255f517" (UID: "dbc526cc-d901-4986-b03a-6dd16255f517"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578520 1904 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-hubble-tls\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578594 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-run\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578622 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-config-path\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578654 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-ipsec-secrets\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578679 1904 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-etc-cni-netd\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578704 1904 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cni-path\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.578730 kubelet[1904]: I0513 07:34:41.578729 1904 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbc526cc-d901-4986-b03a-6dd16255f517-clustermesh-secrets\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578754 1904 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nxlkd\" (UniqueName: \"kubernetes.io/projected/dbc526cc-d901-4986-b03a-6dd16255f517-kube-api-access-nxlkd\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578779 1904 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-cilium-cgroup\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578809 1904 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-net\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578851 1904 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578877 1904 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-bpf-maps\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578901 1904 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-xtables-lock\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578923 1904 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-lib-modules\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:41.579420 kubelet[1904]: I0513 07:34:41.578989 1904 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbc526cc-d901-4986-b03a-6dd16255f517-hostproc\") on node \"ci-3510-3-7-n-1ba5f14697.novalocal\" DevicePath \"\""
May 13 07:34:42.256224 systemd[1]: var-lib-kubelet-pods-dbc526cc\x2dd901\x2d4986\x2db03a\x2d6dd16255f517-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 07:34:42.256576 systemd[1]: var-lib-kubelet-pods-dbc526cc\x2dd901\x2d4986\x2db03a\x2d6dd16255f517-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 13 07:34:42.330682 kubelet[1904]: I0513 07:34:42.330546 1904 scope.go:117] "RemoveContainer" containerID="3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7"
May 13 07:34:42.349582 systemd[1]: Removed slice kubepods-burstable-poddbc526cc_d901_4986_b03a_6dd16255f517.slice.
May 13 07:34:42.363810 env[1155]: time="2025-05-13T07:34:42.362789345Z" level=info msg="RemoveContainer for \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\""
May 13 07:34:42.396413 env[1155]: time="2025-05-13T07:34:42.396344070Z" level=info msg="RemoveContainer for \"3a15803b227a2896a2491537104dbe18d2b05aaa7866d90ab42e636256eee7a7\" returns successfully"
May 13 07:34:42.446181 sshd[3767]: Accepted publickey for core from 172.24.4.1 port 41312 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0
May 13 07:34:42.446770 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 07:34:42.453853 systemd[1]: Started session-26.scope.
May 13 07:34:42.454525 systemd-logind[1146]: New session 26 of user core.
May 13 07:34:42.470572 kubelet[1904]: I0513 07:34:42.470532 1904 memory_manager.go:355] "RemoveStaleState removing state" podUID="dbc526cc-d901-4986-b03a-6dd16255f517" containerName="mount-cgroup"
May 13 07:34:42.470858 kubelet[1904]: I0513 07:34:42.470829 1904 memory_manager.go:355] "RemoveStaleState removing state" podUID="dbc526cc-d901-4986-b03a-6dd16255f517" containerName="mount-cgroup"
May 13 07:34:42.478160 systemd[1]: Created slice kubepods-burstable-pod1ec981e6_96b8_4dc8_8e7f_b4042e74af09.slice.
May 13 07:34:42.593259 kubelet[1904]: I0513 07:34:42.593052 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-xtables-lock\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.594733 kubelet[1904]: I0513 07:34:42.594169 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-cilium-run\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.595170 kubelet[1904]: I0513 07:34:42.595105 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-hostproc\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.595701 kubelet[1904]: I0513 07:34:42.595653 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-etc-cni-netd\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.596106 kubelet[1904]: I0513 07:34:42.596046 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-hubble-tls\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.596443 kubelet[1904]: I0513 07:34:42.596361 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-host-proc-sys-kernel\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.596810 kubelet[1904]: I0513 07:34:42.596716 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66pp7\" (UniqueName: \"kubernetes.io/projected/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-kube-api-access-66pp7\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.597111 kubelet[1904]: I0513 07:34:42.597052 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-cilium-cgroup\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.597461 kubelet[1904]: I0513 07:34:42.597375 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-cni-path\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.597731 kubelet[1904]: I0513 07:34:42.597689 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-host-proc-sys-net\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.598017 kubelet[1904]: I0513 07:34:42.597976 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-bpf-maps\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.598315 kubelet[1904]: I0513 07:34:42.598229 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-lib-modules\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.598645 kubelet[1904]: I0513 07:34:42.598597 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-clustermesh-secrets\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.598916 kubelet[1904]: I0513 07:34:42.598868 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-cilium-config-path\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.599312 kubelet[1904]: I0513 07:34:42.599252 1904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ec981e6-96b8-4dc8-8e7f-b4042e74af09-cilium-ipsec-secrets\") pod \"cilium-q6bgw\" (UID: \"1ec981e6-96b8-4dc8-8e7f-b4042e74af09\") " pod="kube-system/cilium-q6bgw"
May 13 07:34:42.684798 kubelet[1904]: I0513 07:34:42.684622 1904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc526cc-d901-4986-b03a-6dd16255f517" path="/var/lib/kubelet/pods/dbc526cc-d901-4986-b03a-6dd16255f517/volumes"
May 13 07:34:42.765000 kubelet[1904]: W0513 07:34:42.764865 1904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc526cc_d901_4986_b03a_6dd16255f517.slice/cri-containerd-a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e.scope WatchSource:0}: container "a1c3a7357c409f78203503971806e98f8662c83abbd833879d86526eaeef504e" in namespace "k8s.io": not found
May 13 07:34:42.785599 env[1155]: time="2025-05-13T07:34:42.785230741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6bgw,Uid:1ec981e6-96b8-4dc8-8e7f-b4042e74af09,Namespace:kube-system,Attempt:0,}"
May 13 07:34:42.810899 env[1155]: time="2025-05-13T07:34:42.810803842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 07:34:42.811173 env[1155]: time="2025-05-13T07:34:42.810871059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 07:34:42.811173 env[1155]: time="2025-05-13T07:34:42.810886779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 07:34:42.811290 env[1155]: time="2025-05-13T07:34:42.811240187Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca pid=3811 runtime=io.containerd.runc.v2
May 13 07:34:42.823748 systemd[1]: Started cri-containerd-d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca.scope.
May 13 07:34:42.853471 env[1155]: time="2025-05-13T07:34:42.853320942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6bgw,Uid:1ec981e6-96b8-4dc8-8e7f-b4042e74af09,Namespace:kube-system,Attempt:0,} returns sandbox id \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\""
May 13 07:34:42.859179 env[1155]: time="2025-05-13T07:34:42.859133402Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 07:34:42.875894 env[1155]: time="2025-05-13T07:34:42.875832748Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286\""
May 13 07:34:42.877804 env[1155]: time="2025-05-13T07:34:42.877770775Z" level=info msg="StartContainer for \"4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286\""
May 13 07:34:42.895177 systemd[1]: Started cri-containerd-4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286.scope.
May 13 07:34:42.955210 env[1155]: time="2025-05-13T07:34:42.955147463Z" level=info msg="StartContainer for \"4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286\" returns successfully"
May 13 07:34:42.963266 systemd[1]: cri-containerd-4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286.scope: Deactivated successfully.
May 13 07:34:42.997500 env[1155]: time="2025-05-13T07:34:42.997441962Z" level=info msg="shim disconnected" id=4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286
May 13 07:34:42.997805 env[1155]: time="2025-05-13T07:34:42.997769981Z" level=warning msg="cleaning up after shim disconnected" id=4eefe208e03a35fa7e50a4f384ba4046a35f02cb87713f64e9fad5f8b61cd286 namespace=k8s.io
May 13 07:34:42.997912 env[1155]: time="2025-05-13T07:34:42.997894096Z" level=info msg="cleaning up dead shim"
May 13 07:34:43.008156 env[1155]: time="2025-05-13T07:34:43.008101630Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3899 runtime=io.containerd.runc.v2\n"
May 13 07:34:43.347692 env[1155]: time="2025-05-13T07:34:43.347594881Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 07:34:43.385767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159334521.mount: Deactivated successfully.
May 13 07:34:43.403928 env[1155]: time="2025-05-13T07:34:43.403842967Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0\""
May 13 07:34:43.405934 env[1155]: time="2025-05-13T07:34:43.405876795Z" level=info msg="StartContainer for \"fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0\""
May 13 07:34:43.438679 systemd[1]: Started cri-containerd-fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0.scope.
May 13 07:34:43.481693 env[1155]: time="2025-05-13T07:34:43.481649850Z" level=info msg="StartContainer for \"fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0\" returns successfully"
May 13 07:34:43.489243 systemd[1]: cri-containerd-fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0.scope: Deactivated successfully.
May 13 07:34:43.527996 env[1155]: time="2025-05-13T07:34:43.527912140Z" level=info msg="shim disconnected" id=fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0
May 13 07:34:43.527996 env[1155]: time="2025-05-13T07:34:43.527992702Z" level=warning msg="cleaning up after shim disconnected" id=fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0 namespace=k8s.io
May 13 07:34:43.527996 env[1155]: time="2025-05-13T07:34:43.528005447Z" level=info msg="cleaning up dead shim"
May 13 07:34:43.542068 env[1155]: time="2025-05-13T07:34:43.542004597Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 runtime=io.containerd.runc.v2\n"
May 13 07:34:44.253504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fccf57ebd9ce03528a520559e1df84fc264d9ca67409bc16aa6d3868f9489fe0-rootfs.mount: Deactivated successfully.
May 13 07:34:44.357284 env[1155]: time="2025-05-13T07:34:44.356801999Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 07:34:44.408133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231564026.mount: Deactivated successfully.
May 13 07:34:44.415032 env[1155]: time="2025-05-13T07:34:44.414907122Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3\""
May 13 07:34:44.421425 env[1155]: time="2025-05-13T07:34:44.421349592Z" level=info msg="StartContainer for \"8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3\""
May 13 07:34:44.462673 systemd[1]: Started cri-containerd-8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3.scope.
May 13 07:34:44.522016 systemd[1]: cri-containerd-8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3.scope: Deactivated successfully.
May 13 07:34:44.525751 env[1155]: time="2025-05-13T07:34:44.525656755Z" level=info msg="StartContainer for \"8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3\" returns successfully"
May 13 07:34:44.556518 env[1155]: time="2025-05-13T07:34:44.556462094Z" level=info msg="shim disconnected" id=8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3
May 13 07:34:44.556766 env[1155]: time="2025-05-13T07:34:44.556742834Z" level=warning msg="cleaning up after shim disconnected" id=8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3 namespace=k8s.io
May 13 07:34:44.556846 env[1155]: time="2025-05-13T07:34:44.556830229Z" level=info msg="cleaning up dead shim"
May 13 07:34:44.566666 env[1155]: time="2025-05-13T07:34:44.566610157Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4021 runtime=io.containerd.runc.v2\n"
May 13 07:34:45.263086 systemd[1]: run-containerd-runc-k8s.io-8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3-runc.kKIAwL.mount: Deactivated successfully.
May 13 07:34:45.265585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c866264e936a0f70b9b048e0a51aa6d37deadef8c7e1a065c4da641c90c79a3-rootfs.mount: Deactivated successfully.
May 13 07:34:45.379179 env[1155]: time="2025-05-13T07:34:45.378972405Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 07:34:45.442839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667800111.mount: Deactivated successfully.
May 13 07:34:45.449551 env[1155]: time="2025-05-13T07:34:45.449334946Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40\""
May 13 07:34:45.452948 env[1155]: time="2025-05-13T07:34:45.451545067Z" level=info msg="StartContainer for \"21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40\""
May 13 07:34:45.494844 systemd[1]: Started cri-containerd-21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40.scope.
May 13 07:34:45.568314 systemd[1]: cri-containerd-21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40.scope: Deactivated successfully.
May 13 07:34:45.571518 env[1155]: time="2025-05-13T07:34:45.570781912Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec981e6_96b8_4dc8_8e7f_b4042e74af09.slice/cri-containerd-21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40.scope/memory.events\": no such file or directory"
May 13 07:34:45.579796 env[1155]: time="2025-05-13T07:34:45.579659187Z" level=info msg="StartContainer for \"21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40\" returns successfully"
May 13 07:34:45.611607 env[1155]: time="2025-05-13T07:34:45.611520451Z" level=info msg="shim disconnected" id=21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40
May 13 07:34:45.611864 env[1155]: time="2025-05-13T07:34:45.611610693Z" level=warning msg="cleaning up after shim disconnected" id=21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40 namespace=k8s.io
May 13 07:34:45.611864 env[1155]: time="2025-05-13T07:34:45.611631702Z" level=info msg="cleaning up dead shim"
May 13 07:34:45.619810 env[1155]: time="2025-05-13T07:34:45.619732040Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4078 runtime=io.containerd.runc.v2\n"
May 13 07:34:45.876522 kubelet[1904]: E0513 07:34:45.876207 1904 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 07:34:46.257461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21fffcb590b93d476b1c65662e7fac3b9f87d00648c7fdea38aac81410835f40-rootfs.mount: Deactivated successfully.
May 13 07:34:46.384644 env[1155]: time="2025-05-13T07:34:46.384492225Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 07:34:46.457720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091970422.mount: Deactivated successfully.
May 13 07:34:46.466145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361368920.mount: Deactivated successfully.
May 13 07:34:46.476245 env[1155]: time="2025-05-13T07:34:46.476178656Z" level=info msg="CreateContainer within sandbox \"d35fd08109c75be2708f508a5f1dcbb200f709286d5eec3d799bda989b51e0ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8\""
May 13 07:34:46.477829 env[1155]: time="2025-05-13T07:34:46.477790739Z" level=info msg="StartContainer for \"1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8\""
May 13 07:34:46.505641 systemd[1]: Started cri-containerd-1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8.scope.
May 13 07:34:46.552940 env[1155]: time="2025-05-13T07:34:46.552896215Z" level=info msg="StartContainer for \"1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8\" returns successfully"
May 13 07:34:47.061427 kernel: cryptd: max_cpu_qlen set to 1000
May 13 07:34:47.118429 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 13 07:34:47.426338 kubelet[1904]: I0513 07:34:47.426020 1904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q6bgw" podStartSLOduration=5.425829676 podStartE2EDuration="5.425829676s" podCreationTimestamp="2025-05-13 07:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:34:47.424677082 +0000 UTC m=+326.872447393" watchObservedRunningTime="2025-05-13 07:34:47.425829676 +0000 UTC m=+326.873599987"
May 13 07:34:49.364188 systemd[1]: run-containerd-runc-k8s.io-1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8-runc.2TrsxH.mount: Deactivated successfully.
May 13 07:34:50.743316 systemd-networkd[987]: lxc_health: Link UP
May 13 07:34:50.769732 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 07:34:50.769514 systemd-networkd[987]: lxc_health: Gained carrier
May 13 07:34:51.554433 systemd[1]: run-containerd-runc-k8s.io-1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8-runc.RI7jib.mount: Deactivated successfully.
May 13 07:34:51.946879 systemd-networkd[987]: lxc_health: Gained IPv6LL
May 13 07:34:53.878678 systemd[1]: run-containerd-runc-k8s.io-1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8-runc.gyKSj0.mount: Deactivated successfully.
May 13 07:34:56.096632 systemd[1]: run-containerd-runc-k8s.io-1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8-runc.4WKJao.mount: Deactivated successfully.
May 13 07:34:58.376366 systemd[1]: run-containerd-runc-k8s.io-1d4842e55ed86a6ba8af5809294b6502b261cb4050168091010b7cb3dcfe96d8-runc.r2t8gQ.mount: Deactivated successfully.
May 13 07:34:58.725566 sshd[3767]: pam_unix(sshd:session): session closed for user core
May 13 07:34:58.735868 systemd[1]: sshd@25-172.24.4.239:22-172.24.4.1:41312.service: Deactivated successfully.
May 13 07:34:58.738139 systemd[1]: session-26.scope: Deactivated successfully.
May 13 07:34:58.746183 systemd-logind[1146]: Session 26 logged out. Waiting for processes to exit.
May 13 07:34:58.750666 systemd-logind[1146]: Removed session 26.
May 13 07:35:20.726464 env[1155]: time="2025-05-13T07:35:20.725684181Z" level=info msg="StopPodSandbox for \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\""
May 13 07:35:20.729209 env[1155]: time="2025-05-13T07:35:20.726840626Z" level=info msg="TearDown network for sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" successfully"
May 13 07:35:20.729209 env[1155]: time="2025-05-13T07:35:20.727140082Z" level=info msg="StopPodSandbox for \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" returns successfully"
May 13 07:35:20.730208 env[1155]: time="2025-05-13T07:35:20.730064655Z" level=info msg="RemovePodSandbox for \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\""
May 13 07:35:20.730516 env[1155]: time="2025-05-13T07:35:20.730250156Z" level=info msg="Forcibly stopping sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\""
May 13 07:35:20.730789 env[1155]: time="2025-05-13T07:35:20.730660280Z" level=info msg="TearDown network for sandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" successfully"
May 13 07:35:20.766165 env[1155]: time="2025-05-13T07:35:20.766042526Z" level=info msg="RemovePodSandbox \"e233aa07e0933bc241518380c0664aff25734899a6d795b269b2eb23fe8e89fa\" returns successfully"
May 13 07:35:20.768421 env[1155]: time="2025-05-13T07:35:20.768322302Z" level=info msg="StopPodSandbox for \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\""
May 13 07:35:20.768975 env[1155]: time="2025-05-13T07:35:20.768866259Z" level=info msg="TearDown network for sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" successfully"
May 13 07:35:20.769207 env[1155]: time="2025-05-13T07:35:20.769156527Z" level=info msg="StopPodSandbox for \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" returns successfully"
May 13 07:35:20.771043 env[1155]: time="2025-05-13T07:35:20.770985822Z" level=info msg="RemovePodSandbox for \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\""
May 13 07:35:20.771550 env[1155]: time="2025-05-13T07:35:20.771453445Z" level=info msg="Forcibly stopping sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\""
May 13 07:35:20.772110 env[1155]: time="2025-05-13T07:35:20.772055052Z" level=info msg="TearDown network for sandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" successfully"
May 13 07:35:20.779771 env[1155]: time="2025-05-13T07:35:20.779680474Z" level=info msg="RemovePodSandbox \"e23a192445450022c5bbf5f9d77926d451b82503ecde50b00309d688ff5903ef\" returns successfully"
May 13 07:35:20.781678 env[1155]: time="2025-05-13T07:35:20.781598847Z" level=info msg="StopPodSandbox for \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\""
May 13 07:35:20.782287 env[1155]: time="2025-05-13T07:35:20.782177009Z" level=info msg="TearDown network for sandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" successfully"
May 13 07:35:20.782614 env[1155]: time="2025-05-13T07:35:20.782548962Z" level=info msg="StopPodSandbox for \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" returns successfully"
May 13 07:35:20.784182 env[1155]: time="2025-05-13T07:35:20.784048524Z" level=info msg="RemovePodSandbox for \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\""
May 13 07:35:20.784376 env[1155]: time="2025-05-13T07:35:20.784186143Z" level=info msg="Forcibly stopping sandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\""
May 13 07:35:20.784564 env[1155]: time="2025-05-13T07:35:20.784493013Z" level=info msg="TearDown network for sandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" successfully"
May 13 07:35:20.792923 env[1155]: time="2025-05-13T07:35:20.792752623Z" level=info msg="RemovePodSandbox \"6a79ea9fcde44a7b9afe73f9067447262bc4a02a176da1d2c0cd00bf88b2be29\" returns successfully"