May 13 08:23:11.974815 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025
May 13 08:23:11.974872 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 08:23:11.974896 kernel: BIOS-provided physical RAM map:
May 13 08:23:11.974919 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 08:23:11.974936 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 08:23:11.974953 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 08:23:11.974973 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 13 08:23:11.974990 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 13 08:23:11.975007 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 08:23:11.975023 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 08:23:11.975039 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 13 08:23:11.975056 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 08:23:11.975075 kernel: NX (Execute Disable) protection: active
May 13 08:23:11.975091 kernel: SMBIOS 3.0.0 present.
May 13 08:23:11.975112 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 13 08:23:11.975129 kernel: Hypervisor detected: KVM
May 13 08:23:11.975147 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 08:23:11.975164 kernel: kvm-clock: cpu 0, msr 12f196001, primary cpu clock
May 13 08:23:11.975185 kernel: kvm-clock: using sched offset of 4159938641 cycles
May 13 08:23:11.975204 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 08:23:11.975223 kernel: tsc: Detected 1996.249 MHz processor
May 13 08:23:11.975242 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 08:23:11.975261 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 08:23:11.975279 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 13 08:23:11.975298 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 08:23:11.975316 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 13 08:23:11.975334 kernel: ACPI: Early table checksum verification disabled
May 13 08:23:11.975355 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 13 08:23:11.975374 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:11.975392 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:11.975410 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:11.975428 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 13 08:23:11.975446 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:11.975464 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 08:23:11.975483 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 13 08:23:11.975504 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 08:23:11.975522 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 13 08:23:11.975540 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 13 08:23:11.975558 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 13 08:23:11.975576 kernel: No NUMA configuration found
May 13 08:23:11.975601 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 13 08:23:11.975620 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
May 13 08:23:11.975642 kernel: Zone ranges:
May 13 08:23:11.975695 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 08:23:11.975714 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 08:23:11.975733 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 13 08:23:11.975752 kernel: Movable zone start for each node
May 13 08:23:11.975770 kernel: Early memory node ranges
May 13 08:23:11.975789 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 08:23:11.975808 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 13 08:23:11.975831 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 13 08:23:11.975849 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 13 08:23:11.975868 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 08:23:11.975887 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 08:23:11.975906 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 13 08:23:11.975925 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 08:23:11.975944 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 08:23:11.975963 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 08:23:11.975982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 08:23:11.976004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 08:23:11.976023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 08:23:11.976042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 08:23:11.976061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 08:23:11.976079 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 08:23:11.976098 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 08:23:11.976117 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 13 08:23:11.976135 kernel: Booting paravirtualized kernel on KVM
May 13 08:23:11.976155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 08:23:11.976178 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 13 08:23:11.976197 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 13 08:23:11.976215 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 13 08:23:11.976234 kernel: pcpu-alloc: [0] 0 1
May 13 08:23:11.976252 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0
May 13 08:23:11.976271 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 08:23:11.976290 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 13 08:23:11.976308 kernel: Policy zone: Normal
May 13 08:23:11.976331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 08:23:11.976354 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 08:23:11.976373 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 08:23:11.976392 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 08:23:11.976411 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 08:23:11.976431 kernel: Memory: 3968288K/4193772K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 225224K reserved, 0K cma-reserved)
May 13 08:23:11.976451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 08:23:11.976470 kernel: ftrace: allocating 34584 entries in 136 pages
May 13 08:23:11.976488 kernel: ftrace: allocated 136 pages with 2 groups
May 13 08:23:11.976511 kernel: rcu: Hierarchical RCU implementation.
May 13 08:23:11.976557 kernel: rcu: RCU event tracing is enabled.
May 13 08:23:11.976577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 08:23:11.976596 kernel: Rude variant of Tasks RCU enabled.
May 13 08:23:11.976616 kernel: Tracing variant of Tasks RCU enabled.
May 13 08:23:11.976635 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 08:23:11.976679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 08:23:11.976698 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 08:23:11.976717 kernel: Console: colour VGA+ 80x25
May 13 08:23:11.976740 kernel: printk: console [tty0] enabled
May 13 08:23:11.976759 kernel: printk: console [ttyS0] enabled
May 13 08:23:11.976778 kernel: ACPI: Core revision 20210730
May 13 08:23:11.976797 kernel: APIC: Switch to symmetric I/O mode setup
May 13 08:23:11.976815 kernel: x2apic enabled
May 13 08:23:11.976834 kernel: Switched APIC routing to physical x2apic.
May 13 08:23:11.976853 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 08:23:11.976872 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 08:23:11.976891 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 13 08:23:11.976914 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 08:23:11.976932 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 08:23:11.976952 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 08:23:11.976971 kernel: Spectre V2 : Mitigation: Retpolines
May 13 08:23:11.976990 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 08:23:11.977008 kernel: Speculative Store Bypass: Vulnerable
May 13 08:23:11.977027 kernel: x86/fpu: x87 FPU will use FXSAVE
May 13 08:23:11.977046 kernel: Freeing SMP alternatives memory: 32K
May 13 08:23:11.977064 kernel: pid_max: default: 32768 minimum: 301
May 13 08:23:11.977086 kernel: LSM: Security Framework initializing
May 13 08:23:11.977104 kernel: SELinux: Initializing.
May 13 08:23:11.977123 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 08:23:11.977142 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 08:23:11.977162 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 13 08:23:11.977181 kernel: Performance Events: AMD PMU driver.
May 13 08:23:11.977211 kernel: ... version: 0
May 13 08:23:11.977234 kernel: ... bit width: 48
May 13 08:23:11.977253 kernel: ... generic registers: 4
May 13 08:23:11.977273 kernel: ... value mask: 0000ffffffffffff
May 13 08:23:11.977292 kernel: ... max period: 00007fffffffffff
May 13 08:23:11.977311 kernel: ... fixed-purpose events: 0
May 13 08:23:11.977334 kernel: ... event mask: 000000000000000f
May 13 08:23:11.977354 kernel: signal: max sigframe size: 1440
May 13 08:23:11.977374 kernel: rcu: Hierarchical SRCU implementation.
May 13 08:23:11.977393 kernel: smp: Bringing up secondary CPUs ...
May 13 08:23:11.977413 kernel: x86: Booting SMP configuration:
May 13 08:23:11.977435 kernel: .... node #0, CPUs: #1
May 13 08:23:11.977456 kernel: kvm-clock: cpu 1, msr 12f196041, secondary cpu clock
May 13 08:23:11.977475 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0
May 13 08:23:11.977495 kernel: smp: Brought up 1 node, 2 CPUs
May 13 08:23:11.977514 kernel: smpboot: Max logical packages: 2
May 13 08:23:11.977534 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 13 08:23:11.977553 kernel: devtmpfs: initialized
May 13 08:23:11.977573 kernel: x86/mm: Memory block size: 128MB
May 13 08:23:11.977593 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 08:23:11.977617 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 08:23:11.977636 kernel: pinctrl core: initialized pinctrl subsystem
May 13 08:23:11.977679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 08:23:11.977700 kernel: audit: initializing netlink subsys (disabled)
May 13 08:23:11.977720 kernel: audit: type=2000 audit(1747124590.580:1): state=initialized audit_enabled=0 res=1
May 13 08:23:11.977739 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 08:23:11.977759 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 08:23:11.977778 kernel: cpuidle: using governor menu
May 13 08:23:11.977798 kernel: ACPI: bus type PCI registered
May 13 08:23:11.977821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 08:23:11.977841 kernel: dca service started, version 1.12.1
May 13 08:23:11.977861 kernel: PCI: Using configuration type 1 for base access
May 13 08:23:11.977881 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 08:23:11.977900 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 08:23:11.977920 kernel: ACPI: Added _OSI(Module Device)
May 13 08:23:11.977940 kernel: ACPI: Added _OSI(Processor Device)
May 13 08:23:11.977959 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 08:23:11.977974 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 08:23:11.977992 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 08:23:11.978006 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 08:23:11.978021 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 08:23:11.978036 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 08:23:11.978050 kernel: ACPI: Interpreter enabled
May 13 08:23:11.978065 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 08:23:11.978080 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 08:23:11.978095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 08:23:11.978110 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 08:23:11.978127 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 08:23:11.978372 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 08:23:11.978532 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 13 08:23:11.978557 kernel: acpiphp: Slot [3] registered
May 13 08:23:11.978572 kernel: acpiphp: Slot [4] registered
May 13 08:23:11.978587 kernel: acpiphp: Slot [5] registered
May 13 08:23:11.978601 kernel: acpiphp: Slot [6] registered
May 13 08:23:11.978616 kernel: acpiphp: Slot [7] registered
May 13 08:23:11.978635 kernel: acpiphp: Slot [8] registered
May 13 08:23:11.983677 kernel: acpiphp: Slot [9] registered
May 13 08:23:11.983694 kernel: acpiphp: Slot [10] registered
May 13 08:23:11.983702 kernel: acpiphp: Slot [11] registered
May 13 08:23:11.983710 kernel: acpiphp: Slot [12] registered
May 13 08:23:11.983718 kernel: acpiphp: Slot [13] registered
May 13 08:23:11.983726 kernel: acpiphp: Slot [14] registered
May 13 08:23:11.983734 kernel: acpiphp: Slot [15] registered
May 13 08:23:11.983742 kernel: acpiphp: Slot [16] registered
May 13 08:23:11.983753 kernel: acpiphp: Slot [17] registered
May 13 08:23:11.983761 kernel: acpiphp: Slot [18] registered
May 13 08:23:11.983769 kernel: acpiphp: Slot [19] registered
May 13 08:23:11.983777 kernel: acpiphp: Slot [20] registered
May 13 08:23:11.983784 kernel: acpiphp: Slot [21] registered
May 13 08:23:11.983792 kernel: acpiphp: Slot [22] registered
May 13 08:23:11.983800 kernel: acpiphp: Slot [23] registered
May 13 08:23:11.983808 kernel: acpiphp: Slot [24] registered
May 13 08:23:11.983816 kernel: acpiphp: Slot [25] registered
May 13 08:23:11.983823 kernel: acpiphp: Slot [26] registered
May 13 08:23:11.983833 kernel: acpiphp: Slot [27] registered
May 13 08:23:11.983840 kernel: acpiphp: Slot [28] registered
May 13 08:23:11.983848 kernel: acpiphp: Slot [29] registered
May 13 08:23:11.983856 kernel: acpiphp: Slot [30] registered
May 13 08:23:11.983864 kernel: acpiphp: Slot [31] registered
May 13 08:23:11.983872 kernel: PCI host bridge to bus 0000:00
May 13 08:23:11.983973 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 08:23:11.984049 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 08:23:11.984127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 08:23:11.984200 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 08:23:11.984271 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 13 08:23:11.984345 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 08:23:11.984449 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 08:23:11.984565 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 08:23:11.984691 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 13 08:23:11.984784 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 13 08:23:11.984872 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 13 08:23:11.984958 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 13 08:23:11.985046 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 13 08:23:11.985133 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 13 08:23:11.985234 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 13 08:23:11.985330 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 08:23:11.985419 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 08:23:11.985518 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 13 08:23:11.985607 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 13 08:23:11.987749 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 13 08:23:11.987844 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 13 08:23:11.987930 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 13 08:23:11.988021 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 08:23:11.988121 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 13 08:23:11.988207 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 13 08:23:11.988292 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 13 08:23:11.988375 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 13 08:23:11.988458 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 13 08:23:11.988566 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 13 08:23:11.988714 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 13 08:23:11.988806 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 13 08:23:11.988893 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 13 08:23:11.988989 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 13 08:23:11.989077 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 13 08:23:11.989165 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 13 08:23:11.989269 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 13 08:23:11.989357 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 13 08:23:11.989444 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 13 08:23:11.989531 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 13 08:23:11.989544 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 08:23:11.989554 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 08:23:11.989562 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 08:23:11.989571 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 08:23:11.989582 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 08:23:11.989591 kernel: iommu: Default domain type: Translated
May 13 08:23:11.989600 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 08:23:11.989704 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 08:23:11.989793 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 08:23:11.989881 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 08:23:11.989894 kernel: vgaarb: loaded
May 13 08:23:11.989903 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 08:23:11.989912 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 08:23:11.989923 kernel: PTP clock support registered
May 13 08:23:11.989932 kernel: PCI: Using ACPI for IRQ routing
May 13 08:23:11.989940 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 08:23:11.989950 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 08:23:11.989958 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 13 08:23:11.989967 kernel: clocksource: Switched to clocksource kvm-clock
May 13 08:23:11.989975 kernel: VFS: Disk quotas dquot_6.6.0
May 13 08:23:11.989984 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 08:23:11.989994 kernel: pnp: PnP ACPI init
May 13 08:23:11.990081 kernel: pnp 00:03: [dma 2]
May 13 08:23:11.990094 kernel: pnp: PnP ACPI: found 5 devices
May 13 08:23:11.990102 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 08:23:11.990111 kernel: NET: Registered PF_INET protocol family
May 13 08:23:11.990119 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 08:23:11.990127 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 08:23:11.990135 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 08:23:11.990143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 08:23:11.990153 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 08:23:11.990161 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 08:23:11.990170 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 08:23:11.990178 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 08:23:11.990186 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 08:23:11.990193 kernel: NET: Registered PF_XDP protocol family
May 13 08:23:11.990266 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 08:23:11.990338 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 08:23:11.990409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 08:23:11.990484 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 08:23:11.990556 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 13 08:23:11.990640 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 08:23:11.990742 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 08:23:11.990826 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
May 13 08:23:11.990838 kernel: PCI: CLS 0 bytes, default 64
May 13 08:23:11.990846 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 08:23:11.990854 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 13 08:23:11.990865 kernel: Initialise system trusted keyrings
May 13 08:23:11.990874 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 08:23:11.990882 kernel: Key type asymmetric registered
May 13 08:23:11.990890 kernel: Asymmetric key parser 'x509' registered
May 13 08:23:11.990898 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 08:23:11.990906 kernel: io scheduler mq-deadline registered
May 13 08:23:11.990914 kernel: io scheduler kyber registered
May 13 08:23:11.990921 kernel: io scheduler bfq registered
May 13 08:23:11.990929 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 08:23:11.990940 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 08:23:11.990948 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 08:23:11.990957 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 08:23:11.990965 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 08:23:11.990973 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 08:23:11.990981 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 08:23:11.990989 kernel: random: crng init done
May 13 08:23:11.990997 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 08:23:11.991005 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 08:23:11.991015 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 08:23:11.991023 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 08:23:11.991108 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 08:23:11.991185 kernel: rtc_cmos 00:04: registered as rtc0
May 13 08:23:11.991261 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T08:23:11 UTC (1747124591)
May 13 08:23:11.991335 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 08:23:11.991347 kernel: NET: Registered PF_INET6 protocol family
May 13 08:23:11.991355 kernel: Segment Routing with IPv6
May 13 08:23:11.991366 kernel: In-situ OAM (IOAM) with IPv6
May 13 08:23:11.991374 kernel: NET: Registered PF_PACKET protocol family
May 13 08:23:11.991382 kernel: Key type dns_resolver registered
May 13 08:23:11.991390 kernel: IPI shorthand broadcast: enabled
May 13 08:23:11.991398 kernel: sched_clock: Marking stable (875941788, 167802261)->(1106836401, -63092352)
May 13 08:23:11.991406 kernel: registered taskstats version 1
May 13 08:23:11.991414 kernel: Loading compiled-in X.509 certificates
May 13 08:23:11.991423 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095'
May 13 08:23:11.991431 kernel: Key type .fscrypt registered
May 13 08:23:11.991440 kernel: Key type fscrypt-provisioning registered
May 13 08:23:11.991448 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 08:23:11.991457 kernel: ima: Allocated hash algorithm: sha1
May 13 08:23:11.991465 kernel: ima: No architecture policies found
May 13 08:23:11.991473 kernel: clk: Disabling unused clocks
May 13 08:23:11.991481 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 13 08:23:11.991489 kernel: Write protecting the kernel read-only data: 28672k
May 13 08:23:11.991497 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 13 08:23:11.991507 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 13 08:23:11.991515 kernel: Run /init as init process
May 13 08:23:11.991523 kernel: with arguments:
May 13 08:23:11.991531 kernel: /init
May 13 08:23:11.991539 kernel: with environment:
May 13 08:23:11.991547 kernel: HOME=/
May 13 08:23:11.991554 kernel: TERM=linux
May 13 08:23:11.991562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 08:23:11.991574 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 08:23:11.991586 systemd[1]: Detected virtualization kvm.
May 13 08:23:11.991595 systemd[1]: Detected architecture x86-64.
May 13 08:23:11.991604 systemd[1]: Running in initrd.
May 13 08:23:11.991612 systemd[1]: No hostname configured, using default hostname.
May 13 08:23:11.991621 systemd[1]: Hostname set to .
May 13 08:23:11.991630 systemd[1]: Initializing machine ID from VM UUID.
May 13 08:23:11.991639 systemd[1]: Queued start job for default target initrd.target.
May 13 08:23:11.995912 systemd[1]: Started systemd-ask-password-console.path.
May 13 08:23:11.995945 systemd[1]: Reached target cryptsetup.target.
May 13 08:23:11.995955 systemd[1]: Reached target paths.target.
May 13 08:23:11.995965 systemd[1]: Reached target slices.target.
May 13 08:23:11.995975 systemd[1]: Reached target swap.target.
May 13 08:23:11.995984 systemd[1]: Reached target timers.target.
May 13 08:23:11.995994 systemd[1]: Listening on iscsid.socket.
May 13 08:23:11.996003 systemd[1]: Listening on iscsiuio.socket.
May 13 08:23:11.996020 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 08:23:11.996038 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 08:23:11.996049 systemd[1]: Listening on systemd-journald.socket.
May 13 08:23:11.996058 systemd[1]: Listening on systemd-networkd.socket.
May 13 08:23:11.996068 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 08:23:11.996077 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 08:23:11.996089 systemd[1]: Reached target sockets.target.
May 13 08:23:11.996098 systemd[1]: Starting kmod-static-nodes.service...
May 13 08:23:11.996108 systemd[1]: Finished network-cleanup.service.
May 13 08:23:11.996118 systemd[1]: Starting systemd-fsck-usr.service...
May 13 08:23:11.996127 systemd[1]: Starting systemd-journald.service...
May 13 08:23:11.996136 systemd[1]: Starting systemd-modules-load.service...
May 13 08:23:11.996146 systemd[1]: Starting systemd-resolved.service...
May 13 08:23:11.996156 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 08:23:11.996165 systemd[1]: Finished kmod-static-nodes.service.
May 13 08:23:11.996177 systemd[1]: Finished systemd-fsck-usr.service.
May 13 08:23:11.996192 systemd-journald[185]: Journal started
May 13 08:23:11.996274 systemd-journald[185]: Runtime Journal (/run/log/journal/55316bde79eb4a5d90e8aa7f408a794d) is 8.0M, max 78.4M, 70.4M free.
May 13 08:23:11.953123 systemd-modules-load[186]: Inserted module 'overlay'
May 13 08:23:12.017568 systemd[1]: Started systemd-journald.service.
May 13 08:23:12.017595 kernel: audit: type=1130 audit(1747124592.010:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.003526 systemd-resolved[187]: Positive Trust Anchors:
May 13 08:23:12.023003 kernel: audit: type=1130 audit(1747124592.017:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.003535 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 08:23:12.003571 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 08:23:12.033892 kernel: audit: type=1130 audit(1747124592.023:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.033910 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 08:23:12.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.006397 systemd-resolved[187]: Defaulting to hostname 'linux'.
May 13 08:23:12.043255 kernel: audit: type=1130 audit(1747124592.034:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.043290 kernel: Bridge firewalling registered
May 13 08:23:12.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.018050 systemd[1]: Started systemd-resolved.service.
May 13 08:23:12.024670 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 08:23:12.035135 systemd[1]: Reached target nss-lookup.target.
May 13 08:23:12.036598 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 08:23:12.043160 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 13 08:23:12.046906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 08:23:12.055539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 08:23:12.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.063182 kernel: audit: type=1130 audit(1747124592.056:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.065480 systemd[1]: Finished dracut-cmdline-ask.service.
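The kernel message above advises loading br_netfilter explicitly if bridged traffic still needs to pass through iptables. A minimal sketch of the standard systemd-modules-load convention; the target directory here is a demo path (normally /etc/modules-load.d), and the commented commands are what you would additionally run on a live host:

```shell
# Where to write the persistent module config; /etc/modules-load.d on a real host.
CONF_DIR="${CONF_DIR:-/tmp/demo-modules-load.d}"
mkdir -p "$CONF_DIR"

# systemd-modules-load reads one module name per line from *.conf files here,
# so this makes br_netfilter load on every boot.
echo br_netfilter > "$CONF_DIR/br_netfilter.conf"

# On a live system you would also load it immediately (requires root):
#   modprobe br_netfilter
# and, if desired, re-enable iptables filtering of bridged frames:
#   sysctl -w net.bridge.bridge-nf-call-iptables=1
```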
May 13 08:23:12.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.071669 kernel: audit: type=1130 audit(1747124592.065:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.073047 systemd[1]: Starting dracut-cmdline.service...
May 13 08:23:12.073664 kernel: SCSI subsystem initialized
May 13 08:23:12.082336 dracut-cmdline[202]: dracut-dracut-053
May 13 08:23:12.085526 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 08:23:12.094164 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 08:23:12.094190 kernel: device-mapper: uevent: version 1.0.3
May 13 08:23:12.096387 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 08:23:12.100179 systemd-modules-load[186]: Inserted module 'dm_multipath'
May 13 08:23:12.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.101567 systemd[1]: Finished systemd-modules-load.service.
May 13 08:23:12.102766 systemd[1]: Starting systemd-sysctl.service...
May 13 08:23:12.108798 kernel: audit: type=1130 audit(1747124592.101:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.113927 systemd[1]: Finished systemd-sysctl.service.
May 13 08:23:12.119866 kernel: audit: type=1130 audit(1747124592.113:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.150673 kernel: Loading iSCSI transport class v2.0-870.
May 13 08:23:12.170666 kernel: iscsi: registered transport (tcp)
May 13 08:23:12.197112 kernel: iscsi: registered transport (qla4xxx)
May 13 08:23:12.197173 kernel: QLogic iSCSI HBA Driver
May 13 08:23:12.249463 systemd[1]: Finished dracut-cmdline.service.
May 13 08:23:12.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.262684 kernel: audit: type=1130 audit(1747124592.250:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.262676 systemd[1]: Starting dracut-pre-udev.service...
May 13 08:23:12.327854 kernel: raid6: sse2x4 gen() 13142 MB/s
May 13 08:23:12.345821 kernel: raid6: sse2x4 xor() 5017 MB/s
May 13 08:23:12.363781 kernel: raid6: sse2x2 gen() 14283 MB/s
May 13 08:23:12.381822 kernel: raid6: sse2x2 xor() 8848 MB/s
May 13 08:23:12.399777 kernel: raid6: sse2x1 gen() 10917 MB/s
May 13 08:23:12.418292 kernel: raid6: sse2x1 xor() 6968 MB/s
May 13 08:23:12.418385 kernel: raid6: using algorithm sse2x2 gen() 14283 MB/s
May 13 08:23:12.418411 kernel: raid6: .... xor() 8848 MB/s, rmw enabled
May 13 08:23:12.423134 kernel: raid6: using ssse3x2 recovery algorithm
May 13 08:23:12.439798 kernel: xor: measuring software checksum speed
May 13 08:23:12.440449 kernel: prefetch64-sse : 7035 MB/sec
May 13 08:23:12.446859 kernel: generic_sse : 7894 MB/sec
May 13 08:23:12.446894 kernel: xor: using function: generic_sse (7894 MB/sec)
May 13 08:23:12.608729 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 13 08:23:12.627242 systemd[1]: Finished dracut-pre-udev.service.
May 13 08:23:12.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.628000 audit: BPF prog-id=7 op=LOAD
May 13 08:23:12.628000 audit: BPF prog-id=8 op=LOAD
May 13 08:23:12.629830 systemd[1]: Starting systemd-udevd.service...
May 13 08:23:12.659118 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 13 08:23:12.670489 systemd[1]: Started systemd-udevd.service.
May 13 08:23:12.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.676461 systemd[1]: Starting dracut-pre-trigger.service...
May 13 08:23:12.696844 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
May 13 08:23:12.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.729546 systemd[1]: Finished dracut-pre-trigger.service.
May 13 08:23:12.731354 systemd[1]: Starting systemd-udev-trigger.service...
May 13 08:23:12.773429 systemd[1]: Finished systemd-udev-trigger.service.
May 13 08:23:12.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:12.839678 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 13 08:23:12.847338 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 08:23:12.847354 kernel: GPT:17805311 != 20971519
May 13 08:23:12.847366 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 08:23:12.847377 kernel: GPT:17805311 != 20971519
May 13 08:23:12.847387 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 08:23:12.847403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 08:23:12.856674 kernel: libata version 3.00 loaded.
May 13 08:23:12.860698 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 08:23:12.861879 kernel: scsi host0: ata_piix
May 13 08:23:12.862014 kernel: scsi host1: ata_piix
May 13 08:23:12.862131 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 13 08:23:12.862152 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 13 08:23:12.873686 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440)
May 13 08:23:12.889099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 08:23:12.943476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 08:23:12.947952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 08:23:12.951766 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 08:23:12.952965 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 08:23:12.955198 systemd[1]: Starting disk-uuid.service...
May 13 08:23:12.976576 disk-uuid[468]: Primary Header is updated.
May 13 08:23:12.976576 disk-uuid[468]: Secondary Entries is updated.
May 13 08:23:12.976576 disk-uuid[468]: Secondary Header is updated.
May 13 08:23:12.986703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 08:23:12.998726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 08:23:14.081643 disk-uuid[469]: The operation has completed successfully.
May 13 08:23:14.083495 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 08:23:14.150387 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 08:23:14.152359 systemd[1]: Finished disk-uuid.service.
May 13 08:23:14.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.167690 systemd[1]: Starting verity-setup.service...
May 13 08:23:14.199709 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 13 08:23:14.306521 systemd[1]: Found device dev-mapper-usr.device.
May 13 08:23:14.309418 systemd[1]: Mounting sysusr-usr.mount...
May 13 08:23:14.314020 systemd[1]: Finished verity-setup.service.
May 13 08:23:14.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.453714 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 08:23:14.454540 systemd[1]: Mounted sysusr-usr.mount.
May 13 08:23:14.455220 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 08:23:14.456041 systemd[1]: Starting ignition-setup.service...
May 13 08:23:14.457640 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 08:23:14.486410 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 08:23:14.486458 kernel: BTRFS info (device vda6): using free space tree
May 13 08:23:14.486470 kernel: BTRFS info (device vda6): has skinny extents
May 13 08:23:14.503392 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 08:23:14.516765 systemd[1]: Finished ignition-setup.service.
May 13 08:23:14.518202 systemd[1]: Starting ignition-fetch-offline.service...
May 13 08:23:14.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.575905 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 08:23:14.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.576000 audit: BPF prog-id=9 op=LOAD
May 13 08:23:14.577897 systemd[1]: Starting systemd-networkd.service...
May 13 08:23:14.599569 systemd-networkd[640]: lo: Link UP
May 13 08:23:14.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.599580 systemd-networkd[640]: lo: Gained carrier
May 13 08:23:14.600037 systemd-networkd[640]: Enumeration completed
May 13 08:23:14.600107 systemd[1]: Started systemd-networkd.service.
May 13 08:23:14.600597 systemd-networkd[640]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 08:23:14.601749 systemd[1]: Reached target network.target.
May 13 08:23:14.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.602432 systemd-networkd[640]: eth0: Link UP
May 13 08:23:14.602435 systemd-networkd[640]: eth0: Gained carrier
May 13 08:23:14.604964 systemd[1]: Starting iscsiuio.service...
May 13 08:23:14.611298 systemd[1]: Started iscsiuio.service.
May 13 08:23:14.614539 systemd[1]: Starting iscsid.service...
May 13 08:23:14.619809 systemd-networkd[640]: eth0: DHCPv4 address 172.24.4.152/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 08:23:14.624241 iscsid[645]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 08:23:14.624241 iscsid[645]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 13 08:23:14.624241 iscsid[645]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 08:23:14.628239 iscsid[645]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 08:23:14.628239 iscsid[645]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 08:23:14.628239 iscsid[645]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 08:23:14.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.627863 systemd[1]: Started iscsid.service.
May 13 08:23:14.630628 systemd[1]: Starting dracut-initqueue.service...
May 13 08:23:14.644150 systemd[1]: Finished dracut-initqueue.service.
May 13 08:23:14.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.644887 systemd[1]: Reached target remote-fs-pre.target.
May 13 08:23:14.646465 systemd[1]: Reached target remote-cryptsetup.target.
May 13 08:23:14.648422 systemd[1]: Reached target remote-fs.target.
May 13 08:23:14.650269 systemd[1]: Starting dracut-pre-mount.service...
May 13 08:23:14.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.662043 systemd[1]: Finished dracut-pre-mount.service.
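The iscsid warning above describes the expected InitiatorName file format. A minimal sketch that writes such a file, reusing the example IQN from the log itself; the path here is a demo path (normally /etc/iscsi/initiatorname.iscsi), so this only illustrates the file's contents, not a real initiator setup:

```shell
# Demo location; on a real host this is /etc/iscsi/initiatorname.iscsi.
INITIATOR_FILE="${INITIATOR_FILE:-/tmp/demo-initiatorname.iscsi}"

# One line, as iscsid expects: InitiatorName=iqn.<year-month>.<reversed domain>[:identifier]
# (IQN below is the example from the iscsid message, not a value to reuse in production.)
echo 'InitiatorName=iqn.2001-04.com.redhat:fc6' > "$INITIATOR_FILE"

# Sanity-check that the line matches the iqn. format iscsid looks for.
grep -q '^InitiatorName=iqn\.' "$INITIATOR_FILE"
```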
May 13 08:23:14.848843 ignition[580]: Ignition 2.14.0
May 13 08:23:14.848871 ignition[580]: Stage: fetch-offline
May 13 08:23:14.849026 ignition[580]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:14.849075 ignition[580]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:14.851550 ignition[580]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:14.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:14.854736 systemd[1]: Finished ignition-fetch-offline.service.
May 13 08:23:14.851880 ignition[580]: parsed url from cmdline: ""
May 13 08:23:14.857845 systemd[1]: Starting ignition-fetch.service...
May 13 08:23:14.851891 ignition[580]: no config URL provided
May 13 08:23:14.851936 ignition[580]: reading system config file "/usr/lib/ignition/user.ign"
May 13 08:23:14.851962 ignition[580]: no config at "/usr/lib/ignition/user.ign"
May 13 08:23:14.851974 ignition[580]: failed to fetch config: resource requires networking
May 13 08:23:14.852723 ignition[580]: Ignition finished successfully
May 13 08:23:14.879571 ignition[664]: Ignition 2.14.0
May 13 08:23:14.879592 ignition[664]: Stage: fetch
May 13 08:23:14.879889 ignition[664]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:14.879932 ignition[664]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:14.882062 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:14.882265 ignition[664]: parsed url from cmdline: ""
May 13 08:23:14.882275 ignition[664]: no config URL provided
May 13 08:23:14.882289 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 13 08:23:14.882309 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 13 08:23:14.887301 ignition[664]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 13 08:23:14.887343 ignition[664]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 13 08:23:14.887586 ignition[664]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 13 08:23:15.322105 ignition[664]: GET result: OK
May 13 08:23:15.322274 ignition[664]: parsing config with SHA512: 550ed204c364770848f7c07506cc077572c157c36372176d126dd70695fede67352a25c4adcca061d0b7ed26898b4ee7b1817036ce9e68d14c489ab2c6f3b20a
May 13 08:23:15.337258 unknown[664]: fetched base config from "system"
May 13 08:23:15.337287 unknown[664]: fetched base config from "system"
May 13 08:23:15.338547 ignition[664]: fetch: fetch complete
May 13 08:23:15.337301 unknown[664]: fetched user config from "openstack"
May 13 08:23:15.338562 ignition[664]: fetch: fetch passed
May 13 08:23:15.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.341478 systemd[1]: Finished ignition-fetch.service.
May 13 08:23:15.338642 ignition[664]: Ignition finished successfully
May 13 08:23:15.346193 systemd[1]: Starting ignition-kargs.service...
May 13 08:23:15.369776 ignition[670]: Ignition 2.14.0
May 13 08:23:15.369805 ignition[670]: Stage: kargs
May 13 08:23:15.370044 ignition[670]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:15.370095 ignition[670]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:15.372358 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:15.375355 ignition[670]: kargs: kargs passed
May 13 08:23:15.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.375449 ignition[670]: Ignition finished successfully
May 13 08:23:15.377551 systemd[1]: Finished ignition-kargs.service.
May 13 08:23:15.381143 systemd[1]: Starting ignition-disks.service...
May 13 08:23:15.397841 ignition[676]: Ignition 2.14.0
May 13 08:23:15.397875 ignition[676]: Stage: disks
May 13 08:23:15.398156 ignition[676]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:15.398201 ignition[676]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:15.400461 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:15.403524 ignition[676]: disks: disks passed
May 13 08:23:15.403640 ignition[676]: Ignition finished successfully
May 13 08:23:15.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.405322 systemd[1]: Finished ignition-disks.service.
May 13 08:23:15.407512 systemd[1]: Reached target initrd-root-device.target.
May 13 08:23:15.409863 systemd[1]: Reached target local-fs-pre.target.
May 13 08:23:15.412213 systemd[1]: Reached target local-fs.target.
May 13 08:23:15.414640 systemd[1]: Reached target sysinit.target.
May 13 08:23:15.417081 systemd[1]: Reached target basic.target.
May 13 08:23:15.421232 systemd[1]: Starting systemd-fsck-root.service...
May 13 08:23:15.454034 systemd-fsck[683]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks
May 13 08:23:15.468999 systemd[1]: Finished systemd-fsck-root.service.
May 13 08:23:15.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.472830 systemd[1]: Mounting sysroot.mount...
May 13 08:23:15.495900 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 08:23:15.497283 systemd[1]: Mounted sysroot.mount.
May 13 08:23:15.499858 systemd[1]: Reached target initrd-root-fs.target.
May 13 08:23:15.505135 systemd[1]: Mounting sysroot-usr.mount...
May 13 08:23:15.507145 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 08:23:15.508865 systemd[1]: Starting flatcar-openstack-hostname.service...
May 13 08:23:15.515381 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 08:23:15.515459 systemd[1]: Reached target ignition-diskful.target.
May 13 08:23:15.522530 systemd[1]: Mounted sysroot-usr.mount.
May 13 08:23:15.532845 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 08:23:15.538344 systemd[1]: Starting initrd-setup-root.service...
May 13 08:23:15.563085 initrd-setup-root[695]: cut: /sysroot/etc/passwd: No such file or directory
May 13 08:23:15.569714 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (690)
May 13 08:23:15.577140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 08:23:15.577171 kernel: BTRFS info (device vda6): using free space tree
May 13 08:23:15.577184 kernel: BTRFS info (device vda6): has skinny extents
May 13 08:23:15.579308 initrd-setup-root[714]: cut: /sysroot/etc/group: No such file or directory
May 13 08:23:15.587385 initrd-setup-root[727]: cut: /sysroot/etc/shadow: No such file or directory
May 13 08:23:15.592495 initrd-setup-root[737]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 08:23:15.596172 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 08:23:15.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.666748 systemd[1]: Finished initrd-setup-root.service.
May 13 08:23:15.668088 systemd[1]: Starting ignition-mount.service...
May 13 08:23:15.670514 systemd[1]: Starting sysroot-boot.service...
May 13 08:23:15.685702 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 13 08:23:15.685801 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 13 08:23:15.714378 ignition[758]: INFO : Ignition 2.14.0
May 13 08:23:15.714378 ignition[758]: INFO : Stage: mount
May 13 08:23:15.715563 ignition[758]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:15.715563 ignition[758]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:15.717638 ignition[758]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:15.720853 ignition[758]: INFO : mount: mount passed
May 13 08:23:15.721361 ignition[758]: INFO : Ignition finished successfully
May 13 08:23:15.722787 systemd[1]: Finished ignition-mount.service.
May 13 08:23:15.724555 systemd[1]: Finished sysroot-boot.service.
May 13 08:23:15.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.728451 coreos-metadata[689]: May 13 08:23:15.728 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 08:23:15.745043 coreos-metadata[689]: May 13 08:23:15.744 INFO Fetch successful
May 13 08:23:15.745043 coreos-metadata[689]: May 13 08:23:15.745 INFO wrote hostname ci-3510-3-7-n-5ac23fdacd.novalocal to /sysroot/etc/hostname
May 13 08:23:15.748711 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 08:23:15.748816 systemd[1]: Finished flatcar-openstack-hostname.service.
May 13 08:23:15.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:15.750932 systemd[1]: Starting ignition-files.service...
May 13 08:23:15.758284 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 08:23:15.768683 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (766)
May 13 08:23:15.773703 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 08:23:15.773734 kernel: BTRFS info (device vda6): using free space tree
May 13 08:23:15.773745 kernel: BTRFS info (device vda6): has skinny extents
May 13 08:23:15.783095 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 08:23:15.794008 ignition[785]: INFO : Ignition 2.14.0
May 13 08:23:15.794008 ignition[785]: INFO : Stage: files
May 13 08:23:15.796681 ignition[785]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 13 08:23:15.796681 ignition[785]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 13 08:23:15.796681 ignition[785]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 08:23:15.802772 ignition[785]: DEBUG : files: compiled without relabeling support, skipping
May 13 08:23:15.802772 ignition[785]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 08:23:15.802772 ignition[785]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 08:23:15.802772 ignition[785]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 08:23:15.809698 ignition[785]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 08:23:15.809698 ignition[785]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 08:23:15.809698 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 08:23:15.809698 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 08:23:15.809698 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 08:23:15.809698 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 08:23:15.804499 unknown[785]: wrote ssh authorized keys file for user: core
May 13 08:23:15.894615 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 08:23:16.496181 systemd-networkd[640]: eth0: Gained IPv6LL
May 13 08:23:18.498081 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 08:23:18.501286 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 08:23:18.501286 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 08:23:19.247791 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 13 08:23:19.746206 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 08:23:19.747872 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 13 08:23:19.749482 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 13 08:23:19.750552 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 08:23:19.755427 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 08:23:19.755427 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 08:23:19.755427 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 08:23:19.755427 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 08:23:19.755427 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 08:23:19.779531 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 08:23:19.780564 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 08:23:19.780564 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 08:23:19.780564 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 08:23:19.780564 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 08:23:19.784860 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 08:23:20.274931 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 13 08:23:21.955815 ignition[785]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 08:23:21.957339 ignition[785]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service"
May 13 08:23:21.957339 ignition[785]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 13 08:23:21.957339 ignition[785]: INFO : files: op(e): [started] processing unit "containerd.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(e): [finished] processing unit "containerd.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(10): [started] processing unit "prepare-helm.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(10): [finished] processing unit "prepare-helm.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 13 08:23:21.959758 ignition[785]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 13 08:23:21.959758 ignition[785]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 13 08:23:21.959758 ignition[785]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 13 08:23:21.977432 ignition[785]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 08:23:21.980853 ignition[785]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 08:23:21.980853 ignition[785]: INFO : files: files passed
May 13 08:23:21.980853 ignition[785]: INFO : Ignition finished successfully
May 13 08:23:22.005915 kernel: kauditd_printk_skb: 27 callbacks suppressed
May 13 08:23:22.005968 kernel: audit: type=1130 audit(1747124601.981:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:22.006001 kernel: audit: type=1130 audit(1747124602.000:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:21.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:22.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:21.980821 systemd[1]: Finished ignition-files.service.
May 13 08:23:22.024331 kernel: audit: type=1131 audit(1747124602.000:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.024371 kernel: audit: type=1130 audit(1747124602.010:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:21.983281 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 08:23:21.994496 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 08:23:22.030857 initrd-setup-root-after-ignition[810]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 08:23:21.995345 systemd[1]: Starting ignition-quench.service... May 13 08:23:22.000106 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 08:23:22.000193 systemd[1]: Finished ignition-quench.service. May 13 08:23:22.006144 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 08:23:22.011470 systemd[1]: Reached target ignition-complete.target. May 13 08:23:22.025578 systemd[1]: Starting initrd-parse-etc.service... May 13 08:23:22.048315 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
May 13 08:23:22.071847 kernel: audit: type=1130 audit(1747124602.048:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.071886 kernel: audit: type=1131 audit(1747124602.048:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.048417 systemd[1]: Finished initrd-parse-etc.service. May 13 08:23:22.049152 systemd[1]: Reached target initrd-fs.target. May 13 08:23:22.072300 systemd[1]: Reached target initrd.target. May 13 08:23:22.073958 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 08:23:22.074808 systemd[1]: Starting dracut-pre-pivot.service... May 13 08:23:22.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.087901 systemd[1]: Finished dracut-pre-pivot.service. May 13 08:23:22.093863 kernel: audit: type=1130 audit(1747124602.087:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.093842 systemd[1]: Starting initrd-cleanup.service... May 13 08:23:22.104480 systemd[1]: Stopped target nss-lookup.target. 
May 13 08:23:22.105679 systemd[1]: Stopped target remote-cryptsetup.target. May 13 08:23:22.106877 systemd[1]: Stopped target timers.target. May 13 08:23:22.107993 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 08:23:22.108747 systemd[1]: Stopped dracut-pre-pivot.service. May 13 08:23:22.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.114673 kernel: audit: type=1131 audit(1747124602.109:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.114761 systemd[1]: Stopped target initrd.target. May 13 08:23:22.115863 systemd[1]: Stopped target basic.target. May 13 08:23:22.116967 systemd[1]: Stopped target ignition-complete.target. May 13 08:23:22.118144 systemd[1]: Stopped target ignition-diskful.target. May 13 08:23:22.119406 systemd[1]: Stopped target initrd-root-device.target. May 13 08:23:22.120056 systemd[1]: Stopped target remote-fs.target. May 13 08:23:22.121059 systemd[1]: Stopped target remote-fs-pre.target. May 13 08:23:22.122056 systemd[1]: Stopped target sysinit.target. May 13 08:23:22.123019 systemd[1]: Stopped target local-fs.target. May 13 08:23:22.123916 systemd[1]: Stopped target local-fs-pre.target. May 13 08:23:22.124884 systemd[1]: Stopped target swap.target. May 13 08:23:22.125780 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 08:23:22.131927 kernel: audit: type=1131 audit(1747124602.126:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:22.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.125944 systemd[1]: Stopped dracut-pre-mount.service. May 13 08:23:22.126873 systemd[1]: Stopped target cryptsetup.target. May 13 08:23:22.138574 kernel: audit: type=1131 audit(1747124602.132:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.132482 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 08:23:22.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.132641 systemd[1]: Stopped dracut-initqueue.service. May 13 08:23:22.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.133626 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 08:23:22.133797 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 08:23:22.149117 iscsid[645]: iscsid shutting down. May 13 08:23:22.139255 systemd[1]: ignition-files.service: Deactivated successfully. May 13 08:23:22.139398 systemd[1]: Stopped ignition-files.service. 
May 13 08:23:22.153029 ignition[823]: INFO : Ignition 2.14.0 May 13 08:23:22.153029 ignition[823]: INFO : Stage: umount May 13 08:23:22.141256 systemd[1]: Stopping ignition-mount.service... May 13 08:23:22.154607 ignition[823]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:23:22.154607 ignition[823]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:23:22.154607 ignition[823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:23:22.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.146717 systemd[1]: Stopping iscsid.service... May 13 08:23:22.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.166507 ignition[823]: INFO : umount: umount passed May 13 08:23:22.166507 ignition[823]: INFO : Ignition finished successfully May 13 08:23:22.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:22.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.150355 systemd[1]: Stopping sysroot-boot.service... May 13 08:23:22.158275 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 08:23:22.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.158453 systemd[1]: Stopped systemd-udev-trigger.service. May 13 08:23:22.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.159048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 08:23:22.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.159148 systemd[1]: Stopped dracut-pre-trigger.service. May 13 08:23:22.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.161061 systemd[1]: iscsid.service: Deactivated successfully. May 13 08:23:22.162384 systemd[1]: Stopped iscsid.service. May 13 08:23:22.164095 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 08:23:22.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.164175 systemd[1]: Stopped ignition-mount.service. 
May 13 08:23:22.166372 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 08:23:22.167013 systemd[1]: Finished initrd-cleanup.service. May 13 08:23:22.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.168887 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 08:23:22.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.168929 systemd[1]: Stopped ignition-disks.service. May 13 08:23:22.170773 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 08:23:22.170809 systemd[1]: Stopped ignition-kargs.service. May 13 08:23:22.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.171790 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 08:23:22.171825 systemd[1]: Stopped ignition-fetch.service. May 13 08:23:22.173264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 08:23:22.173301 systemd[1]: Stopped ignition-fetch-offline.service. May 13 08:23:22.173905 systemd[1]: Stopped target paths.target. May 13 08:23:22.174335 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 08:23:22.175582 systemd[1]: Stopped systemd-ask-password-console.path. May 13 08:23:22.176117 systemd[1]: Stopped target slices.target. May 13 08:23:22.177039 systemd[1]: Stopped target sockets.target. May 13 08:23:22.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:22.177762 systemd[1]: iscsid.socket: Deactivated successfully. May 13 08:23:22.177792 systemd[1]: Closed iscsid.socket. May 13 08:23:22.178202 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 08:23:22.178236 systemd[1]: Stopped ignition-setup.service. May 13 08:23:22.178756 systemd[1]: Stopping iscsiuio.service... May 13 08:23:22.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.181764 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 08:23:22.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.182143 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 08:23:22.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.182224 systemd[1]: Stopped iscsiuio.service. May 13 08:23:22.183257 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 08:23:22.183335 systemd[1]: Stopped sysroot-boot.service. May 13 08:23:22.184163 systemd[1]: Stopped target network.target. May 13 08:23:22.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.185072 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 08:23:22.185102 systemd[1]: Closed iscsiuio.socket. May 13 08:23:22.185979 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
May 13 08:23:22.207000 audit: BPF prog-id=6 op=UNLOAD May 13 08:23:22.186013 systemd[1]: Stopped initrd-setup-root.service. May 13 08:23:22.187260 systemd[1]: Stopping systemd-networkd.service... May 13 08:23:22.188092 systemd[1]: Stopping systemd-resolved.service... May 13 08:23:22.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.191690 systemd-networkd[640]: eth0: DHCPv6 lease lost May 13 08:23:22.210000 audit: BPF prog-id=9 op=UNLOAD May 13 08:23:22.192994 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 08:23:22.193071 systemd[1]: Stopped systemd-networkd.service. May 13 08:23:22.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.194372 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 08:23:22.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.194401 systemd[1]: Closed systemd-networkd.socket. May 13 08:23:22.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.195493 systemd[1]: Stopping network-cleanup.service... May 13 08:23:22.198330 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 13 08:23:22.198375 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 08:23:22.199360 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 08:23:22.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.199397 systemd[1]: Stopped systemd-sysctl.service. May 13 08:23:22.200588 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 08:23:22.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.200624 systemd[1]: Stopped systemd-modules-load.service. May 13 08:23:22.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.201401 systemd[1]: Stopping systemd-udevd.service... May 13 08:23:22.203341 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 08:23:22.203790 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 08:23:22.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:22.203869 systemd[1]: Stopped systemd-resolved.service. May 13 08:23:22.207552 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 08:23:22.207637 systemd[1]: Stopped network-cleanup.service. 
May 13 08:23:22.209551 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 08:23:22.209697 systemd[1]: Stopped systemd-udevd.service. May 13 08:23:22.210374 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 08:23:22.210412 systemd[1]: Closed systemd-udevd-control.socket. May 13 08:23:22.211357 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 08:23:22.211390 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 08:23:22.212384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 08:23:22.212427 systemd[1]: Stopped dracut-pre-udev.service. May 13 08:23:22.213435 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 08:23:22.213473 systemd[1]: Stopped dracut-cmdline.service. May 13 08:23:22.214418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 08:23:22.214456 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 08:23:22.216163 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 08:23:22.221201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 08:23:22.241000 audit: BPF prog-id=8 op=UNLOAD May 13 08:23:22.241000 audit: BPF prog-id=7 op=UNLOAD May 13 08:23:22.241000 audit: BPF prog-id=5 op=UNLOAD May 13 08:23:22.241000 audit: BPF prog-id=4 op=UNLOAD May 13 08:23:22.241000 audit: BPF prog-id=3 op=UNLOAD May 13 08:23:22.221250 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 13 08:23:22.221888 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 08:23:22.221928 systemd[1]: Stopped kmod-static-nodes.service. May 13 08:23:22.222942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 08:23:22.222979 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 08:23:22.225024 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
May 13 08:23:22.225460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 08:23:22.225538 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 08:23:22.226495 systemd[1]: Reached target initrd-switch-root.target. May 13 08:23:22.228038 systemd[1]: Starting initrd-switch-root.service... May 13 08:23:22.236818 systemd[1]: Switching root. May 13 08:23:22.257570 systemd-journald[185]: Journal stopped May 13 08:23:27.155833 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). May 13 08:23:27.155884 kernel: SELinux: Class mctp_socket not defined in policy. May 13 08:23:27.155903 kernel: SELinux: Class anon_inode not defined in policy. May 13 08:23:27.155915 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 08:23:27.155927 kernel: SELinux: policy capability network_peer_controls=1 May 13 08:23:27.155941 kernel: SELinux: policy capability open_perms=1 May 13 08:23:27.155955 kernel: SELinux: policy capability extended_socket_class=1 May 13 08:23:27.155966 kernel: SELinux: policy capability always_check_network=0 May 13 08:23:27.155976 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 08:23:27.155987 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 08:23:27.155998 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 08:23:27.156009 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 08:23:27.156020 systemd[1]: Successfully loaded SELinux policy in 93.790ms. May 13 08:23:27.156041 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.787ms. 
May 13 08:23:27.156057 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 08:23:27.156069 systemd[1]: Detected virtualization kvm. May 13 08:23:27.156080 systemd[1]: Detected architecture x86-64. May 13 08:23:27.156091 systemd[1]: Detected first boot. May 13 08:23:27.156103 systemd[1]: Hostname set to . May 13 08:23:27.156115 systemd[1]: Initializing machine ID from VM UUID. May 13 08:23:27.156127 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 08:23:27.156141 systemd[1]: Populated /etc with preset unit settings. May 13 08:23:27.156154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 08:23:27.156168 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 08:23:27.156181 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 08:23:27.156195 systemd[1]: Queued start job for default target multi-user.target. May 13 08:23:27.156207 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 08:23:27.156219 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 08:23:27.156232 systemd[1]: Created slice system-addon\x2drun.slice. May 13 08:23:27.156244 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 13 08:23:27.156255 systemd[1]: Created slice system-getty.slice. 
May 13 08:23:27.156267 systemd[1]: Created slice system-modprobe.slice. May 13 08:23:27.156278 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 08:23:27.156290 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 08:23:27.156302 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 08:23:27.156313 systemd[1]: Created slice user.slice. May 13 08:23:27.156325 systemd[1]: Started systemd-ask-password-console.path. May 13 08:23:27.156339 systemd[1]: Started systemd-ask-password-wall.path. May 13 08:23:27.156350 systemd[1]: Set up automount boot.automount. May 13 08:23:27.156361 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 08:23:27.156373 systemd[1]: Reached target integritysetup.target. May 13 08:23:27.156385 systemd[1]: Reached target remote-cryptsetup.target. May 13 08:23:27.156398 systemd[1]: Reached target remote-fs.target. May 13 08:23:27.156411 systemd[1]: Reached target slices.target. May 13 08:23:27.156423 systemd[1]: Reached target swap.target. May 13 08:23:27.156435 systemd[1]: Reached target torcx.target. May 13 08:23:27.156447 systemd[1]: Reached target veritysetup.target. May 13 08:23:27.156459 systemd[1]: Listening on systemd-coredump.socket. May 13 08:23:27.156470 systemd[1]: Listening on systemd-initctl.socket. May 13 08:23:27.156497 systemd[1]: Listening on systemd-journald-audit.socket. May 13 08:23:27.156509 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 08:23:27.156520 systemd[1]: Listening on systemd-journald.socket. May 13 08:23:27.156532 systemd[1]: Listening on systemd-networkd.socket. May 13 08:23:27.156545 systemd[1]: Listening on systemd-udevd-control.socket. May 13 08:23:27.156557 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 08:23:27.156568 systemd[1]: Listening on systemd-userdbd.socket. May 13 08:23:27.156580 systemd[1]: Mounting dev-hugepages.mount... May 13 08:23:27.156591 systemd[1]: Mounting dev-mqueue.mount... 
May 13 08:23:27.156603 systemd[1]: Mounting media.mount... May 13 08:23:27.156614 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:23:27.156626 systemd[1]: Mounting sys-kernel-debug.mount... May 13 08:23:27.156638 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 08:23:27.156669 systemd[1]: Mounting tmp.mount... May 13 08:23:27.156685 systemd[1]: Starting flatcar-tmpfiles.service... May 13 08:23:27.156698 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 08:23:27.156710 systemd[1]: Starting kmod-static-nodes.service... May 13 08:23:27.156723 systemd[1]: Starting modprobe@configfs.service... May 13 08:23:27.156735 systemd[1]: Starting modprobe@dm_mod.service... May 13 08:23:27.156750 systemd[1]: Starting modprobe@drm.service... May 13 08:23:27.156762 systemd[1]: Starting modprobe@efi_pstore.service... May 13 08:23:27.156774 systemd[1]: Starting modprobe@fuse.service... May 13 08:23:27.156789 systemd[1]: Starting modprobe@loop.service... May 13 08:23:27.156802 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 08:23:27.156815 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 08:23:27.156828 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 13 08:23:27.156840 systemd[1]: Starting systemd-journald.service... May 13 08:23:27.156852 systemd[1]: Starting systemd-modules-load.service... May 13 08:23:27.156865 systemd[1]: Starting systemd-network-generator.service... May 13 08:23:27.156878 systemd[1]: Starting systemd-remount-fs.service... May 13 08:23:27.156891 systemd[1]: Starting systemd-udev-trigger.service... 
May 13 08:23:27.156906 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:23:27.156918 kernel: loop: module loaded May 13 08:23:27.156930 systemd[1]: Mounted dev-hugepages.mount. May 13 08:23:27.156942 systemd[1]: Mounted dev-mqueue.mount. May 13 08:23:27.156956 systemd[1]: Mounted media.mount. May 13 08:23:27.156968 systemd[1]: Mounted sys-kernel-debug.mount. May 13 08:23:27.156980 kernel: fuse: init (API version 7.34) May 13 08:23:27.156992 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 08:23:27.157005 systemd[1]: Mounted tmp.mount. May 13 08:23:27.157019 systemd[1]: Finished kmod-static-nodes.service. May 13 08:23:27.157031 kernel: kauditd_printk_skb: 51 callbacks suppressed May 13 08:23:27.157043 kernel: audit: type=1130 audit(1747124607.144:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.157056 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 08:23:27.157069 systemd[1]: Finished modprobe@configfs.service. May 13 08:23:27.157081 kernel: audit: type=1305 audit(1747124607.153:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 08:23:27.157096 systemd-journald[957]: Journal started May 13 08:23:27.157141 systemd-journald[957]: Runtime Journal (/run/log/journal/55316bde79eb4a5d90e8aa7f408a794d) is 8.0M, max 78.4M, 70.4M free. May 13 08:23:27.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:27.153000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 08:23:27.161644 kernel: audit: type=1300 audit(1747124607.153:93): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd7b595ca0 a2=4000 a3=7ffd7b595d3c items=0 ppid=1 pid=957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:27.153000 audit[957]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd7b595ca0 a2=4000 a3=7ffd7b595d3c items=0 ppid=1 pid=957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:27.168696 systemd[1]: Started systemd-journald.service. May 13 08:23:27.171219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:23:27.171381 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:23:27.172065 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 08:23:27.172202 systemd[1]: Finished modprobe@drm.service. May 13 08:23:27.173984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:23:27.174146 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:23:27.174833 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 08:23:27.174972 systemd[1]: Finished modprobe@fuse.service. May 13 08:23:27.175603 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:23:27.175851 systemd[1]: Finished modprobe@loop.service. May 13 08:23:27.176577 systemd[1]: Finished systemd-modules-load.service. May 13 08:23:27.178218 systemd[1]: Finished systemd-network-generator.service. 
May 13 08:23:27.202787 kernel: audit: type=1327 audit(1747124607.153:93): proctitle="/usr/lib/systemd/systemd-journald" May 13 08:23:27.202860 kernel: audit: type=1130 audit(1747124607.166:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.202878 kernel: audit: type=1131 audit(1747124607.166:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.202893 kernel: audit: type=1130 audit(1747124607.169:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.153000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 08:23:27.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.191977 systemd[1]: Finished systemd-remount-fs.service. May 13 08:23:27.200167 systemd[1]: Reached target network-pre.target. May 13 08:23:27.201711 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
May 13 08:23:27.203296 systemd[1]: Mounting sys-kernel-config.mount... May 13 08:23:27.207155 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 08:23:27.209515 systemd[1]: Starting systemd-hwdb-update.service... May 13 08:23:27.211036 systemd[1]: Starting systemd-journal-flush.service... May 13 08:23:27.211555 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 08:23:27.225163 systemd[1]: Starting systemd-random-seed.service... May 13 08:23:27.225789 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 08:23:27.228789 systemd[1]: Starting systemd-sysctl.service... May 13 08:23:27.235952 kernel: audit: type=1130 audit(1747124607.171:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.230724 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 08:23:27.237245 systemd[1]: Mounted sys-kernel-config.mount. May 13 08:23:27.255734 kernel: audit: type=1131 audit(1747124607.171:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.255823 kernel: audit: type=1130 audit(1747124607.173:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:27.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.260261 systemd-journald[957]: Time spent on flushing to /var/log/journal/55316bde79eb4a5d90e8aa7f408a794d is 30.995ms for 1045 entries. May 13 08:23:27.260261 systemd-journald[957]: System Journal (/var/log/journal/55316bde79eb4a5d90e8aa7f408a794d) is 8.0M, max 584.8M, 576.8M free. May 13 08:23:27.313165 systemd-journald[957]: Received client request to flush runtime journal. May 13 08:23:27.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:27.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:23:27.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.263081 systemd[1]: Finished flatcar-tmpfiles.service. May 13 08:23:27.265141 systemd[1]: Starting systemd-sysusers.service... May 13 08:23:27.316205 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 08:23:27.271025 systemd[1]: Finished systemd-sysctl.service. May 13 08:23:27.274722 systemd[1]: Finished systemd-random-seed.service. May 13 08:23:27.275324 systemd[1]: Reached target first-boot-complete.target. May 13 08:23:27.299432 systemd[1]: Finished systemd-udev-trigger.service. May 13 08:23:27.301209 systemd[1]: Starting systemd-udev-settle.service... May 13 08:23:27.314741 systemd[1]: Finished systemd-journal-flush.service. May 13 08:23:27.321343 systemd[1]: Finished systemd-sysusers.service. May 13 08:23:27.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.322961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 08:23:27.369214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 13 08:23:27.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.855092 systemd[1]: Finished systemd-hwdb-update.service. May 13 08:23:27.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.858869 systemd[1]: Starting systemd-udevd.service... May 13 08:23:27.899909 systemd-udevd[1023]: Using default interface naming scheme 'v252'. May 13 08:23:27.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:27.960018 systemd[1]: Started systemd-udevd.service. May 13 08:23:27.968607 systemd[1]: Starting systemd-networkd.service... May 13 08:23:27.991324 systemd[1]: Starting systemd-userdbd.service... May 13 08:23:28.052288 systemd[1]: Found device dev-ttyS0.device. May 13 08:23:28.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:28.054852 systemd[1]: Started systemd-userdbd.service. May 13 08:23:28.098811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 13 08:23:28.149681 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 08:23:28.159712 kernel: ACPI: button: Power Button [PWRF] May 13 08:23:28.169596 systemd-networkd[1035]: lo: Link UP May 13 08:23:28.169604 systemd-networkd[1035]: lo: Gained carrier May 13 08:23:28.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:23:28.171034 systemd-networkd[1035]: Enumeration completed May 13 08:23:28.171138 systemd[1]: Started systemd-networkd.service. May 13 08:23:28.172219 systemd-networkd[1035]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 08:23:28.174963 systemd-networkd[1035]: eth0: Link UP May 13 08:23:28.174971 systemd-networkd[1035]: eth0: Gained carrier May 13 08:23:28.176000 audit[1031]: AVC avc: denied { confidentiality } for pid=1031 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 08:23:28.176000 audit[1031]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5606aa69b7c0 a1=338ac a2=7f2105fffbc5 a3=5 items=110 ppid=1023 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:23:28.176000 audit: CWD cwd="/" May 13 08:23:28.176000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=1 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
08:23:28.176000 audit: PATH item=2 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=3 name=(null) inode=14422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=4 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=5 name=(null) inode=14423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=6 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=7 name=(null) inode=14424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=8 name=(null) inode=14424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=9 name=(null) inode=14425 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=10 name=(null) inode=14424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=11 name=(null) 
inode=14426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=12 name=(null) inode=14424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=13 name=(null) inode=14427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=14 name=(null) inode=14424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=15 name=(null) inode=14428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=16 name=(null) inode=14424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=17 name=(null) inode=14429 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=18 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=19 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=20 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=21 name=(null) inode=14431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=22 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=23 name=(null) inode=14432 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=24 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=25 name=(null) inode=14433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=26 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=27 name=(null) inode=14434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=28 name=(null) inode=14430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=29 name=(null) inode=14435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=30 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=31 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=32 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=33 name=(null) inode=14437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=34 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=35 name=(null) inode=14438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=36 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=37 name=(null) inode=14439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=38 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=39 name=(null) inode=14440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=40 name=(null) inode=14436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=41 name=(null) inode=14441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=42 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=43 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=44 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=45 name=(null) inode=14443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=46 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=47 name=(null) inode=14444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=48 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=49 name=(null) inode=14445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=50 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=51 name=(null) inode=14446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=52 name=(null) inode=14442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=53 name=(null) inode=14447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=55 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:23:28.176000 audit: PATH item=56 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
08:23:28.176000 audit: PATH item=57 name=(null) inode=14449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=58 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=59 name=(null) inode=14450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=60 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=61 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=62 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=63 name=(null) inode=14452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=64 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=65 name=(null) inode=14453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=66 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=67 name=(null) inode=14454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=68 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=69 name=(null) inode=14455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=70 name=(null) inode=14451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=71 name=(null) inode=14456 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=72 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=73 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=74 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=75 name=(null) inode=14458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=76 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=77 name=(null) inode=14459 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=78 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=79 name=(null) inode=14460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=80 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=81 name=(null) inode=14461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=82 name=(null) inode=14457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=83 name=(null) inode=14462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=84 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=85 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=86 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=87 name=(null) inode=14464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=88 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=89 name=(null) inode=14465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=90 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=91 name=(null) inode=14466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=92 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=93 name=(null) inode=14467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=94 name=(null) inode=14463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=95 name=(null) inode=14468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=96 name=(null) inode=14448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=97 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=98 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=99 name=(null) inode=14470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=100 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=101 name=(null) inode=14471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=102 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=103 name=(null) inode=14472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=104 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=105 name=(null) inode=14473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=106 name=(null) inode=14469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=107 name=(null) inode=14474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PATH item=109 name=(null) inode=14475 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 08:23:28.176000 audit: PROCTITLE proctitle="(udev-worker)"
May 13 08:23:28.186808 systemd-networkd[1035]: eth0: DHCPv4 address 172.24.4.152/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 08:23:28.193675 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 13 08:23:28.213688 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 08:23:28.219681 kernel: mousedev: PS/2 mouse device common for all mice
May 13 08:23:28.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:28.266137 systemd[1]: Finished systemd-udev-settle.service.
May 13 08:23:28.267878 systemd[1]: Starting lvm2-activation-early.service...
May 13 08:23:28.300676 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 08:23:28.333729 systemd[1]: Finished lvm2-activation-early.service.
May 13 08:23:28.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:28.335268 systemd[1]: Reached target cryptsetup.target.
May 13 08:23:28.339000 systemd[1]: Starting lvm2-activation.service...
May 13 08:23:28.349958 lvm[1057]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 08:23:28.389611 systemd[1]: Finished lvm2-activation.service.
May 13 08:23:28.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:28.391112 systemd[1]: Reached target local-fs-pre.target.
May 13 08:23:28.392295 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 08:23:28.392352 systemd[1]: Reached target local-fs.target.
May 13 08:23:28.393531 systemd[1]: Reached target machines.target.
May 13 08:23:28.397471 systemd[1]: Starting ldconfig.service...
May 13 08:23:28.401307 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:28.402100 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:28.404994 systemd[1]: Starting systemd-boot-update.service...
May 13 08:23:28.410061 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 13 08:23:28.418061 systemd[1]: Starting systemd-machine-id-commit.service...
May 13 08:23:28.428070 systemd[1]: Starting systemd-sysext.service...
May 13 08:23:28.430502 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1060 (bootctl)
May 13 08:23:28.435434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 13 08:23:28.453123 systemd[1]: Unmounting usr-share-oem.mount...
May 13 08:23:28.459692 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 13 08:23:28.459936 systemd[1]: Unmounted usr-share-oem.mount.
May 13 08:23:28.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:28.497069 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 13 08:23:28.525726 kernel: loop0: detected capacity change from 0 to 210664
May 13 08:23:29.101298 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 08:23:29.102786 systemd[1]: Finished systemd-machine-id-commit.service.
May 13 08:23:29.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.138702 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 08:23:29.170461 kernel: loop1: detected capacity change from 0 to 210664
May 13 08:23:29.211987 (sd-sysext)[1079]: Using extensions 'kubernetes'.
May 13 08:23:29.214208 (sd-sysext)[1079]: Merged extensions into '/usr'.
May 13 08:23:29.242501 systemd-fsck[1075]: fsck.fat 4.2 (2021-01-31)
May 13 08:23:29.242501 systemd-fsck[1075]: /dev/vda1: 790 files, 120692/258078 clusters
May 13 08:23:29.244643 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 13 08:23:29.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.247634 systemd[1]: Mounting boot.mount...
May 13 08:23:29.263959 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.265599 systemd[1]: Mounting usr-share-oem.mount...
May 13 08:23:29.266325 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:29.267534 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:29.271222 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:29.277196 systemd[1]: Starting modprobe@loop.service...
May 13 08:23:29.279802 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:29.279956 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:29.280083 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.290494 systemd[1]: Mounted boot.mount.
May 13 08:23:29.291315 systemd[1]: Mounted usr-share-oem.mount.
May 13 08:23:29.292704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 08:23:29.292976 systemd[1]: Finished modprobe@dm_mod.service.
May 13 08:23:29.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.295091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 08:23:29.295231 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 08:23:29.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.297096 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 08:23:29.297248 systemd[1]: Finished modprobe@loop.service.
May 13 08:23:29.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.300790 systemd[1]: Finished systemd-sysext.service.
May 13 08:23:29.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.304845 systemd[1]: Starting ensure-sysext.service...
May 13 08:23:29.305405 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:29.305473 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 08:23:29.306577 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 13 08:23:29.317470 systemd[1]: Reloading.
May 13 08:23:29.341683 systemd-tmpfiles[1097]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 13 08:23:29.347071 systemd-tmpfiles[1097]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 08:23:29.349508 systemd-tmpfiles[1097]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 08:23:29.412106 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-05-13T08:23:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 08:23:29.413256 /usr/lib/systemd/system-generators/torcx-generator[1118]: time="2025-05-13T08:23:29Z" level=info msg="torcx already run"
May 13 08:23:29.535002 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 08:23:29.535021 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 08:23:29.565181 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 08:23:29.632754 systemd[1]: Finished systemd-boot-update.service.
May 13 08:23:29.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.633711 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 13 08:23:29.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.637537 systemd[1]: Starting audit-rules.service...
May 13 08:23:29.639175 systemd[1]: Starting clean-ca-certificates.service...
May 13 08:23:29.641298 systemd[1]: Starting systemd-journal-catalog-update.service...
May 13 08:23:29.647592 systemd[1]: Starting systemd-resolved.service...
May 13 08:23:29.653511 systemd[1]: Starting systemd-timesyncd.service...
May 13 08:23:29.657032 systemd[1]: Starting systemd-update-utmp.service...
May 13 08:23:29.673505 systemd[1]: Finished clean-ca-certificates.service.
May 13 08:23:29.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.674185 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:29.680000 audit[1177]: SYSTEM_BOOT pid=1177 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.682636 systemd[1]: Finished systemd-update-utmp.service.
May 13 08:23:29.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.693374 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.693625 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:29.695473 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:29.697385 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:29.700202 systemd[1]: Starting modprobe@loop.service...
May 13 08:23:29.701313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:29.701479 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:29.701624 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:29.701764 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.705411 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.705684 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:29.705833 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:29.705935 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:29.706039 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:29.706122 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.709158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 08:23:29.709324 systemd[1]: Finished modprobe@dm_mod.service.
May 13 08:23:29.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.710273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 08:23:29.710451 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 08:23:29.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.711166 ldconfig[1059]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 08:23:29.711234 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:29.714083 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.714341 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 08:23:29.716881 systemd[1]: Starting modprobe@dm_mod.service...
May 13 08:23:29.718702 systemd[1]: Starting modprobe@drm.service...
May 13 08:23:29.727023 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 08:23:29.727717 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 08:23:29.727864 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 08:23:29.729364 systemd[1]: Starting systemd-networkd-wait-online.service...
May 13 08:23:29.733903 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 08:23:29.734075 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 08:23:29.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.740220 systemd[1]: Finished ldconfig.service.
May 13 08:23:29.741150 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 08:23:29.741320 systemd[1]: Finished modprobe@loop.service.
May 13 08:23:29.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.742358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 08:23:29.742526 systemd[1]: Finished modprobe@dm_mod.service.
May 13 08:23:29.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.743394 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 08:23:29.743542 systemd[1]: Finished modprobe@drm.service.
May 13 08:23:29.744375 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 08:23:29.744871 systemd-networkd[1035]: eth0: Gained IPv6LL
May 13 08:23:29.748022 systemd[1]: Finished ensure-sysext.service.
May 13 08:23:29.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.755783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 08:23:29.755946 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 08:23:29.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.756737 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 08:23:29.757829 systemd[1]: Finished systemd-networkd-wait-online.service.
May 13 08:23:29.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.774626 systemd[1]: Finished systemd-journal-catalog-update.service.
May 13 08:23:29.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.776623 systemd[1]: Starting systemd-update-done.service...
May 13 08:23:29.787517 systemd[1]: Finished systemd-update-done.service.
May 13 08:23:29.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 08:23:29.817000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 13 08:23:29.817000 audit[1215]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb2f03800 a2=420 a3=0 items=0 ppid=1172 pid=1215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 08:23:29.817000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 13 08:23:29.818112 augenrules[1215]: No rules
May 13 08:23:29.818711 systemd[1]: Finished audit-rules.service.
May 13 08:23:29.834000 systemd[1]: Started systemd-timesyncd.service.
May 13 08:23:29.834582 systemd[1]: Reached target time-set.target.
May 13 08:23:29.847463 systemd-resolved[1175]: Positive Trust Anchors:
May 13 08:23:29.847800 systemd-resolved[1175]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 08:23:29.847896 systemd-resolved[1175]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 08:23:29.855081 systemd-resolved[1175]: Using system hostname 'ci-3510-3-7-n-5ac23fdacd.novalocal'.
May 13 08:23:29.856669 systemd[1]: Started systemd-resolved.service.
May 13 08:23:29.857232 systemd[1]: Reached target network.target.
May 13 08:23:29.857724 systemd[1]: Reached target network-online.target.
May 13 08:23:29.858201 systemd[1]: Reached target nss-lookup.target.
May 13 08:23:29.858689 systemd[1]: Reached target sysinit.target.
May 13 08:23:29.859231 systemd[1]: Started motdgen.path.
May 13 08:23:29.859721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 13 08:23:29.860394 systemd[1]: Started logrotate.timer.
May 13 08:23:29.860974 systemd[1]: Started mdadm.timer.
May 13 08:23:29.861420 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 13 08:23:29.861902 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 08:23:29.861932 systemd[1]: Reached target paths.target.
May 13 08:23:29.862348 systemd[1]: Reached target timers.target.
May 13 08:23:29.863104 systemd[1]: Listening on dbus.socket.
May 13 08:23:29.864838 systemd[1]: Starting docker.socket...
May 13 08:23:29.867248 systemd[1]: Listening on sshd.socket.
May 13 08:23:29.867935 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 08:23:29.868264 systemd[1]: Listening on docker.socket. May 13 08:23:29.868905 systemd[1]: Reached target sockets.target. May 13 08:23:29.869442 systemd[1]: Reached target basic.target. May 13 08:23:29.870137 systemd[1]: System is tainted: cgroupsv1 May 13 08:23:29.870265 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 08:23:29.870365 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 08:23:29.871522 systemd[1]: Starting containerd.service... May 13 08:23:29.873154 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 13 08:23:29.874965 systemd[1]: Starting dbus.service... May 13 08:23:29.876745 systemd[1]: Starting enable-oem-cloudinit.service... May 13 08:23:29.878605 systemd[1]: Starting extend-filesystems.service... May 13 08:23:29.879775 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 08:23:29.883593 systemd[1]: Starting kubelet.service... May 13 08:23:29.886431 systemd[1]: Starting motdgen.service... May 13 08:23:29.888449 systemd[1]: Starting prepare-helm.service... May 13 08:23:29.910068 jq[1228]: false May 13 08:23:29.892808 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 08:23:29.895395 systemd[1]: Starting sshd-keygen.service... May 13 08:23:29.900635 systemd[1]: Starting systemd-logind.service... May 13 08:23:29.905603 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 08:23:29.905689 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 08:23:29.912786 systemd[1]: Starting update-engine.service... May 13 08:23:29.914505 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 08:23:29.916748 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 08:23:29.918139 jq[1246]: true May 13 08:23:29.916996 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 08:23:29.927718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 08:23:29.928000 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 08:23:29.961841 jq[1254]: true May 13 08:23:29.972856 tar[1252]: linux-amd64/helm May 13 08:23:29.975397 systemd[1]: Created slice system-sshd.slice. May 13 08:23:29.989966 extend-filesystems[1229]: Found loop1 May 13 08:23:29.993958 extend-filesystems[1229]: Found vda May 13 08:23:29.998749 extend-filesystems[1229]: Found vda1 May 13 08:23:29.999467 extend-filesystems[1229]: Found vda2 May 13 08:23:30.002206 extend-filesystems[1229]: Found vda3 May 13 08:23:30.002929 extend-filesystems[1229]: Found usr May 13 08:23:30.004064 extend-filesystems[1229]: Found vda4 May 13 08:23:30.004942 extend-filesystems[1229]: Found vda6 May 13 08:23:30.005776 extend-filesystems[1229]: Found vda7 May 13 08:23:30.008019 extend-filesystems[1229]: Found vda9 May 13 08:23:30.008019 extend-filesystems[1229]: Checking size of /dev/vda9 May 13 08:23:30.023007 dbus-daemon[1227]: [system] SELinux support is enabled May 13 08:23:30.023309 systemd[1]: Started dbus.service. May 13 08:23:30.026074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 08:23:30.026104 systemd[1]: Reached target system-config.target. 
May 13 08:23:30.026637 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 08:23:30.026678 systemd[1]: Reached target user-config.target. May 13 08:23:30.033327 systemd-timesyncd[1176]: Contacted time server 172.234.37.140:123 (0.flatcar.pool.ntp.org). May 13 08:23:30.033461 systemd-timesyncd[1176]: Initial clock synchronization to Tue 2025-05-13 08:23:29.945975 UTC. May 13 08:23:30.042334 systemd[1]: motdgen.service: Deactivated successfully. May 13 08:23:30.042583 systemd[1]: Finished motdgen.service. May 13 08:23:30.075855 bash[1284]: Updated "/home/core/.ssh/authorized_keys" May 13 08:23:30.076282 extend-filesystems[1229]: Resized partition /dev/vda9 May 13 08:23:30.076611 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 08:23:30.092972 extend-filesystems[1292]: resize2fs 1.46.5 (30-Dec-2021) May 13 08:23:30.130274 env[1257]: time="2025-05-13T08:23:30.130225934Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 08:23:30.137097 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 13 08:23:30.140145 update_engine[1244]: I0513 08:23:30.139187 1244 main.cc:92] Flatcar Update Engine starting May 13 08:23:30.143748 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 13 08:23:30.150822 systemd[1]: Started update-engine.service. May 13 08:23:30.151079 update_engine[1244]: I0513 08:23:30.151036 1244 update_check_scheduler.cc:74] Next update check in 11m15s May 13 08:23:30.153155 systemd[1]: Started locksmithd.service. 
May 13 08:23:30.175038 systemd-logind[1242]: Watching system buttons on /dev/input/event1 (Power Button) May 13 08:23:30.175066 systemd-logind[1242]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 08:23:30.177672 extend-filesystems[1292]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 08:23:30.177672 extend-filesystems[1292]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 08:23:30.177672 extend-filesystems[1292]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 13 08:23:30.176167 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 08:23:30.190608 extend-filesystems[1229]: Resized filesystem in /dev/vda9 May 13 08:23:30.176422 systemd[1]: Finished extend-filesystems.service. May 13 08:23:30.178456 systemd-logind[1242]: New seat seat0. May 13 08:23:30.186335 systemd[1]: Started systemd-logind.service. May 13 08:23:30.212100 env[1257]: time="2025-05-13T08:23:30.211915192Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 08:23:30.212211 env[1257]: time="2025-05-13T08:23:30.212106501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.213776503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.213817740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.214122121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.214148791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.214167837Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.214181212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 08:23:30.214415 env[1257]: time="2025-05-13T08:23:30.214270590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 08:23:30.214623 env[1257]: time="2025-05-13T08:23:30.214530828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 08:23:30.214756 env[1257]: time="2025-05-13T08:23:30.214724972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 08:23:30.214756 env[1257]: time="2025-05-13T08:23:30.214750820Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 13 08:23:30.215067 env[1257]: time="2025-05-13T08:23:30.214807887Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 08:23:30.215067 env[1257]: time="2025-05-13T08:23:30.214830570Z" level=info msg="metadata content store policy set" policy=shared May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.236956427Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237017522Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237038601Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237080359Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237098303Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237114013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237128019Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237146253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237164898Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237180457Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237195666Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237209682Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237354944Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 08:23:30.237593 env[1257]: time="2025-05-13T08:23:30.237441717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.237942677Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238400295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238420443Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238477540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238496375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238511554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238527123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238542772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238558722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238573971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238588959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238607023Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238832005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238856120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239108 env[1257]: time="2025-05-13T08:23:30.238872110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239441 env[1257]: time="2025-05-13T08:23:30.238886767Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 08:23:30.239441 env[1257]: time="2025-05-13T08:23:30.238904731Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 08:23:30.239441 env[1257]: time="2025-05-13T08:23:30.238918827Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 08:23:30.239441 env[1257]: time="2025-05-13T08:23:30.238938885Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 08:23:30.239441 env[1257]: time="2025-05-13T08:23:30.238977858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 08:23:30.239563 env[1257]: time="2025-05-13T08:23:30.239184876Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 08:23:30.239563 env[1257]: time="2025-05-13T08:23:30.239251902Z" level=info msg="Connect containerd service" May 13 08:23:30.239563 env[1257]: time="2025-05-13T08:23:30.239291446Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 08:23:30.248329 env[1257]: time="2025-05-13T08:23:30.248291476Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 08:23:30.250345 env[1257]: time="2025-05-13T08:23:30.248594895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 08:23:30.250345 env[1257]: time="2025-05-13T08:23:30.248642925Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 08:23:30.250345 env[1257]: time="2025-05-13T08:23:30.248711433Z" level=info msg="containerd successfully booted in 0.119819s" May 13 08:23:30.249455 systemd[1]: Started containerd.service. 
May 13 08:23:30.251332 env[1257]: time="2025-05-13T08:23:30.251295841Z" level=info msg="Start subscribing containerd event" May 13 08:23:30.251374 env[1257]: time="2025-05-13T08:23:30.251346045Z" level=info msg="Start recovering state" May 13 08:23:30.251496 env[1257]: time="2025-05-13T08:23:30.251408442Z" level=info msg="Start event monitor" May 13 08:23:30.251496 env[1257]: time="2025-05-13T08:23:30.251439199Z" level=info msg="Start snapshots syncer" May 13 08:23:30.251496 env[1257]: time="2025-05-13T08:23:30.251451142Z" level=info msg="Start cni network conf syncer for default" May 13 08:23:30.251496 env[1257]: time="2025-05-13T08:23:30.251461491Z" level=info msg="Start streaming server" May 13 08:23:30.679461 tar[1252]: linux-amd64/LICENSE May 13 08:23:30.679782 tar[1252]: linux-amd64/README.md May 13 08:23:30.688071 systemd[1]: Finished prepare-helm.service. May 13 08:23:30.777974 locksmithd[1295]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 08:23:30.847061 sshd_keygen[1260]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 08:23:30.888740 systemd[1]: Finished sshd-keygen.service. May 13 08:23:30.890939 systemd[1]: Starting issuegen.service... May 13 08:23:30.892448 systemd[1]: Started sshd@0-172.24.4.152:22-172.24.4.1:37148.service. May 13 08:23:30.899712 systemd[1]: issuegen.service: Deactivated successfully. May 13 08:23:30.899955 systemd[1]: Finished issuegen.service. May 13 08:23:30.902017 systemd[1]: Starting systemd-user-sessions.service... May 13 08:23:30.910378 systemd[1]: Finished systemd-user-sessions.service. May 13 08:23:30.912452 systemd[1]: Started getty@tty1.service. May 13 08:23:30.914184 systemd[1]: Started serial-getty@ttyS0.service. May 13 08:23:30.915091 systemd[1]: Reached target getty.target. 
May 13 08:23:31.992711 sshd[1317]: Accepted publickey for core from 172.24.4.1 port 37148 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:31.996968 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:32.021840 systemd[1]: Created slice user-500.slice. May 13 08:23:32.025459 systemd[1]: Starting user-runtime-dir@500.service... May 13 08:23:32.035791 systemd-logind[1242]: New session 1 of user core. May 13 08:23:32.051915 systemd[1]: Finished user-runtime-dir@500.service. May 13 08:23:32.057463 systemd[1]: Starting user@500.service... May 13 08:23:32.068306 (systemd)[1330]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:32.131933 systemd[1]: Started kubelet.service. May 13 08:23:32.205482 systemd[1330]: Queued start job for default target default.target. May 13 08:23:32.205973 systemd[1330]: Reached target paths.target. May 13 08:23:32.206013 systemd[1330]: Reached target sockets.target. May 13 08:23:32.206067 systemd[1330]: Reached target timers.target. May 13 08:23:32.206092 systemd[1330]: Reached target basic.target. May 13 08:23:32.206175 systemd[1330]: Reached target default.target. May 13 08:23:32.206219 systemd[1330]: Startup finished in 125ms. May 13 08:23:32.206251 systemd[1]: Started user@500.service. May 13 08:23:32.208068 systemd[1]: Started session-1.scope. May 13 08:23:32.688010 systemd[1]: Started sshd@1-172.24.4.152:22-172.24.4.1:37160.service. 
May 13 08:23:33.653910 kubelet[1341]: E0513 08:23:33.653739 1341 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:23:33.657111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:23:33.657452 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:23:34.267956 sshd[1350]: Accepted publickey for core from 172.24.4.1 port 37160 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:34.270935 sshd[1350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:34.281763 systemd-logind[1242]: New session 2 of user core. May 13 08:23:34.283299 systemd[1]: Started session-2.scope. May 13 08:23:34.864442 sshd[1350]: pam_unix(sshd:session): session closed for user core May 13 08:23:34.869153 systemd[1]: Started sshd@2-172.24.4.152:22-172.24.4.1:40152.service. May 13 08:23:34.877246 systemd[1]: sshd@1-172.24.4.152:22-172.24.4.1:37160.service: Deactivated successfully. May 13 08:23:34.880285 systemd[1]: session-2.scope: Deactivated successfully. May 13 08:23:34.881253 systemd-logind[1242]: Session 2 logged out. Waiting for processes to exit. May 13 08:23:34.885160 systemd-logind[1242]: Removed session 2. May 13 08:23:36.340551 sshd[1357]: Accepted publickey for core from 172.24.4.1 port 40152 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:36.342617 sshd[1357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:36.353361 systemd-logind[1242]: New session 3 of user core. May 13 08:23:36.353575 systemd[1]: Started session-3.scope. 
May 13 08:23:36.924463 sshd[1357]: pam_unix(sshd:session): session closed for user core May 13 08:23:36.937975 systemd[1]: sshd@2-172.24.4.152:22-172.24.4.1:40152.service: Deactivated successfully. May 13 08:23:36.941436 systemd-logind[1242]: Session 3 logged out. Waiting for processes to exit. May 13 08:23:36.942790 systemd[1]: session-3.scope: Deactivated successfully. May 13 08:23:36.946106 systemd-logind[1242]: Removed session 3. May 13 08:23:37.030128 coreos-metadata[1225]: May 13 08:23:37.029 WARN failed to locate config-drive, using the metadata service API instead May 13 08:23:37.148125 coreos-metadata[1225]: May 13 08:23:37.147 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 08:23:37.542863 coreos-metadata[1225]: May 13 08:23:37.542 INFO Fetch successful May 13 08:23:37.543148 coreos-metadata[1225]: May 13 08:23:37.543 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 08:23:37.559039 coreos-metadata[1225]: May 13 08:23:37.558 INFO Fetch successful May 13 08:23:37.563084 unknown[1225]: wrote ssh authorized keys file for user: core May 13 08:23:37.597253 update-ssh-keys[1369]: Updated "/home/core/.ssh/authorized_keys" May 13 08:23:37.599015 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 13 08:23:37.599844 systemd[1]: Reached target multi-user.target. May 13 08:23:37.603111 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 08:23:37.624913 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 08:23:37.625433 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 08:23:37.626486 systemd[1]: Startup finished in 11.912s (kernel) + 15.078s (userspace) = 26.990s. May 13 08:23:43.868820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 08:23:43.869288 systemd[1]: Stopped kubelet.service. May 13 08:23:43.872498 systemd[1]: Starting kubelet.service... 
May 13 08:23:44.118162 systemd[1]: Started kubelet.service. May 13 08:23:44.249243 kubelet[1382]: E0513 08:23:44.249176 1382 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:23:44.256576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:23:44.256978 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:23:46.903282 systemd[1]: Started sshd@3-172.24.4.152:22-172.24.4.1:37134.service. May 13 08:23:48.199225 sshd[1389]: Accepted publickey for core from 172.24.4.1 port 37134 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:48.202520 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:48.213418 systemd[1]: Started session-4.scope. May 13 08:23:48.214382 systemd-logind[1242]: New session 4 of user core. May 13 08:23:49.072344 sshd[1389]: pam_unix(sshd:session): session closed for user core May 13 08:23:49.078735 systemd[1]: Started sshd@4-172.24.4.152:22-172.24.4.1:37150.service. May 13 08:23:49.083972 systemd[1]: sshd@3-172.24.4.152:22-172.24.4.1:37134.service: Deactivated successfully. May 13 08:23:49.086001 systemd-logind[1242]: Session 4 logged out. Waiting for processes to exit. May 13 08:23:49.086204 systemd[1]: session-4.scope: Deactivated successfully. May 13 08:23:49.088973 systemd-logind[1242]: Removed session 4. May 13 08:23:50.536933 sshd[1394]: Accepted publickey for core from 172.24.4.1 port 37150 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:50.539580 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:50.550757 systemd-logind[1242]: New session 5 of user core. 
May 13 08:23:50.552150 systemd[1]: Started session-5.scope. May 13 08:23:51.242159 sshd[1394]: pam_unix(sshd:session): session closed for user core May 13 08:23:51.244134 systemd[1]: Started sshd@5-172.24.4.152:22-172.24.4.1:37166.service. May 13 08:23:51.251447 systemd[1]: sshd@4-172.24.4.152:22-172.24.4.1:37150.service: Deactivated successfully. May 13 08:23:51.258301 systemd-logind[1242]: Session 5 logged out. Waiting for processes to exit. May 13 08:23:51.258306 systemd[1]: session-5.scope: Deactivated successfully. May 13 08:23:51.262368 systemd-logind[1242]: Removed session 5. May 13 08:23:52.526493 sshd[1401]: Accepted publickey for core from 172.24.4.1 port 37166 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:52.529555 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:52.540803 systemd-logind[1242]: New session 6 of user core. May 13 08:23:52.542562 systemd[1]: Started session-6.scope. May 13 08:23:53.269729 sshd[1401]: pam_unix(sshd:session): session closed for user core May 13 08:23:53.273190 systemd[1]: Started sshd@6-172.24.4.152:22-172.24.4.1:37178.service. May 13 08:23:53.276035 systemd-logind[1242]: Session 6 logged out. Waiting for processes to exit. May 13 08:23:53.276769 systemd[1]: sshd@5-172.24.4.152:22-172.24.4.1:37166.service: Deactivated successfully. May 13 08:23:53.278388 systemd[1]: session-6.scope: Deactivated successfully. May 13 08:23:53.279879 systemd-logind[1242]: Removed session 6. May 13 08:23:54.368851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 08:23:54.369313 systemd[1]: Stopped kubelet.service. May 13 08:23:54.372275 systemd[1]: Starting kubelet.service... 
May 13 08:23:54.709174 sshd[1408]: Accepted publickey for core from 172.24.4.1 port 37178 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:23:54.710846 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:23:54.717546 systemd-logind[1242]: New session 7 of user core. May 13 08:23:54.719993 systemd[1]: Started session-7.scope. May 13 08:23:54.761030 systemd[1]: Started kubelet.service. May 13 08:23:54.846967 kubelet[1422]: E0513 08:23:54.846923 1422 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:23:54.850069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:23:54.850357 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:23:55.359332 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 08:23:55.360538 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 08:23:55.415565 systemd[1]: Starting docker.service... 
May 13 08:23:55.548033 env[1439]: time="2025-05-13T08:23:55.547935364Z" level=info msg="Starting up" May 13 08:23:55.552309 env[1439]: time="2025-05-13T08:23:55.552252635Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 08:23:55.552426 env[1439]: time="2025-05-13T08:23:55.552298349Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 08:23:55.552426 env[1439]: time="2025-05-13T08:23:55.552359657Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 08:23:55.552426 env[1439]: time="2025-05-13T08:23:55.552403461Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 08:23:55.557761 env[1439]: time="2025-05-13T08:23:55.557580564Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 08:23:55.557761 env[1439]: time="2025-05-13T08:23:55.557641081Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 08:23:55.557761 env[1439]: time="2025-05-13T08:23:55.557718481Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 08:23:55.557761 env[1439]: time="2025-05-13T08:23:55.557744860Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 08:23:56.253037 env[1439]: time="2025-05-13T08:23:56.252974446Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 08:23:56.253037 env[1439]: time="2025-05-13T08:23:56.253025892Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 08:23:56.253381 env[1439]: time="2025-05-13T08:23:56.253356074Z" level=info msg="Loading containers: start." 
May 13 08:23:56.523937 kernel: Initializing XFRM netlink socket
May 13 08:23:56.592164 env[1439]: time="2025-05-13T08:23:56.592105944Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 13 08:23:56.676130 systemd-networkd[1035]: docker0: Link UP
May 13 08:23:56.692932 env[1439]: time="2025-05-13T08:23:56.692860611Z" level=info msg="Loading containers: done."
May 13 08:23:56.718339 env[1439]: time="2025-05-13T08:23:56.718245602Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 08:23:56.718601 env[1439]: time="2025-05-13T08:23:56.718439500Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 13 08:23:56.718601 env[1439]: time="2025-05-13T08:23:56.718532429Z" level=info msg="Daemon has completed initialization"
May 13 08:23:56.753901 systemd[1]: Started docker.service.
May 13 08:23:56.772697 env[1439]: time="2025-05-13T08:23:56.772566062Z" level=info msg="API listen on /run/docker.sock"
May 13 08:23:58.664786 env[1257]: time="2025-05-13T08:23:58.664534371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 08:23:59.532550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809043333.mount: Deactivated successfully.
May 13 08:24:02.172617 env[1257]: time="2025-05-13T08:24:02.172503398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.175339 env[1257]: time="2025-05-13T08:24:02.175282199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.178628 env[1257]: time="2025-05-13T08:24:02.178590099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.182602 env[1257]: time="2025-05-13T08:24:02.182579329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:02.184635 env[1257]: time="2025-05-13T08:24:02.184559890Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 13 08:24:02.197403 env[1257]: time="2025-05-13T08:24:02.197332251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 08:24:04.869752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 13 08:24:04.870352 systemd[1]: Stopped kubelet.service.
May 13 08:24:04.873085 systemd[1]: Starting kubelet.service...
May 13 08:24:05.175324 systemd[1]: Started kubelet.service.
May 13 08:24:05.270797 kubelet[1582]: E0513 08:24:05.270752 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:24:05.273572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:24:05.273870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:24:05.369647 env[1257]: time="2025-05-13T08:24:05.369571643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.374136 env[1257]: time="2025-05-13T08:24:05.374029796Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.379588 env[1257]: time="2025-05-13T08:24:05.379524804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.384983 env[1257]: time="2025-05-13T08:24:05.384927950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:05.387092 env[1257]: time="2025-05-13T08:24:05.387033173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 13 08:24:05.414519 env[1257]: time="2025-05-13T08:24:05.414457833Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 08:24:07.672578 env[1257]: time="2025-05-13T08:24:07.672372698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:07.676284 env[1257]: time="2025-05-13T08:24:07.676244626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:07.678938 env[1257]: time="2025-05-13T08:24:07.678910478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:07.681406 env[1257]: time="2025-05-13T08:24:07.681370493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:07.682433 env[1257]: time="2025-05-13T08:24:07.682392910Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 13 08:24:07.694933 env[1257]: time="2025-05-13T08:24:07.694889697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 08:24:10.400302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573383258.mount: Deactivated successfully.
May 13 08:24:11.526518 env[1257]: time="2025-05-13T08:24:11.526418282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:11.542099 env[1257]: time="2025-05-13T08:24:11.541999843Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:11.545071 env[1257]: time="2025-05-13T08:24:11.545002818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:11.547835 env[1257]: time="2025-05-13T08:24:11.547770566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:11.549268 env[1257]: time="2025-05-13T08:24:11.549159578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 13 08:24:11.581437 env[1257]: time="2025-05-13T08:24:11.581377574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 08:24:12.374489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630568571.mount: Deactivated successfully.
May 13 08:24:14.915058 env[1257]: time="2025-05-13T08:24:14.914806107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:14.929764 env[1257]: time="2025-05-13T08:24:14.929647193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:14.974708 env[1257]: time="2025-05-13T08:24:14.974565318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.111828 env[1257]: time="2025-05-13T08:24:15.111748065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.115217 env[1257]: time="2025-05-13T08:24:15.115157405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 13 08:24:15.135884 env[1257]: time="2025-05-13T08:24:15.135808295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 08:24:15.368798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 13 08:24:15.369275 systemd[1]: Stopped kubelet.service.
May 13 08:24:15.372952 systemd[1]: Starting kubelet.service...
May 13 08:24:15.614594 systemd[1]: Started kubelet.service.
May 13 08:24:15.762253 update_engine[1244]: I0513 08:24:15.654724 1244 update_attempter.cc:509] Updating boot flags...
May 13 08:24:15.825822 kubelet[1618]: E0513 08:24:15.825789 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 08:24:15.827801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 08:24:15.827974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 08:24:15.947180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948115984.mount: Deactivated successfully.
May 13 08:24:15.957752 env[1257]: time="2025-05-13T08:24:15.957641636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.960685 env[1257]: time="2025-05-13T08:24:15.960608770Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.963033 env[1257]: time="2025-05-13T08:24:15.963008938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.965731 env[1257]: time="2025-05-13T08:24:15.965706905Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:15.966804 env[1257]: time="2025-05-13T08:24:15.966777181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 13 08:24:15.983591 env[1257]: time="2025-05-13T08:24:15.983556818Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 08:24:17.089467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101214294.mount: Deactivated successfully.
May 13 08:24:21.402270 env[1257]: time="2025-05-13T08:24:21.401488194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:21.413799 env[1257]: time="2025-05-13T08:24:21.405812449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:21.413799 env[1257]: time="2025-05-13T08:24:21.409423294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:21.413799 env[1257]: time="2025-05-13T08:24:21.411947558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 08:24:21.413799 env[1257]: time="2025-05-13T08:24:21.413175766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 13 08:24:25.351719 systemd[1]: Stopped kubelet.service.
May 13 08:24:25.355166 systemd[1]: Starting kubelet.service...
May 13 08:24:25.386639 systemd[1]: Reloading.
May 13 08:24:25.504236 /usr/lib/systemd/system-generators/torcx-generator[1740]: time="2025-05-13T08:24:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 08:24:25.504726 /usr/lib/systemd/system-generators/torcx-generator[1740]: time="2025-05-13T08:24:25Z" level=info msg="torcx already run"
May 13 08:24:25.628238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 08:24:25.628259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 08:24:25.654068 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 08:24:25.754287 systemd[1]: Started kubelet.service.
May 13 08:24:25.770460 systemd[1]: Stopping kubelet.service...
May 13 08:24:25.772008 systemd[1]: kubelet.service: Deactivated successfully.
May 13 08:24:25.772278 systemd[1]: Stopped kubelet.service.
May 13 08:24:25.774397 systemd[1]: Starting kubelet.service...
May 13 08:24:26.026943 systemd[1]: Started kubelet.service.
May 13 08:24:26.150501 kubelet[1809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 08:24:26.151324 kubelet[1809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 08:24:26.151467 kubelet[1809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 08:24:26.724924 kubelet[1809]: I0513 08:24:26.724783 1809 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 08:24:27.540450 kubelet[1809]: I0513 08:24:27.540080 1809 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 08:24:27.541330 kubelet[1809]: I0513 08:24:27.541285 1809 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 08:24:27.541643 kubelet[1809]: I0513 08:24:27.541593 1809 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 08:24:27.568872 kubelet[1809]: I0513 08:24:27.567488 1809 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 08:24:27.570048 kubelet[1809]: E0513 08:24:27.570005 1809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.152:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.604989 kubelet[1809]: I0513 08:24:27.604916 1809 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 08:24:27.606185 kubelet[1809]: I0513 08:24:27.606121 1809 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 08:24:27.606854 kubelet[1809]: I0513 08:24:27.606353 1809 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-5ac23fdacd.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 08:24:27.607198 kubelet[1809]: I0513 08:24:27.607171 1809 topology_manager.go:138] "Creating topology manager with none policy"
May 13 08:24:27.607339 kubelet[1809]: I0513 08:24:27.607320 1809 container_manager_linux.go:301] "Creating device plugin manager"
May 13 08:24:27.607700 kubelet[1809]: I0513 08:24:27.607639 1809 state_mem.go:36] "Initialized new in-memory state store"
May 13 08:24:27.610139 kubelet[1809]: I0513 08:24:27.610090 1809 kubelet.go:400] "Attempting to sync node with API server"
May 13 08:24:27.610583 kubelet[1809]: I0513 08:24:27.610557 1809 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 08:24:27.610864 kubelet[1809]: I0513 08:24:27.610840 1809 kubelet.go:312] "Adding apiserver pod source"
May 13 08:24:27.611041 kubelet[1809]: I0513 08:24:27.611017 1809 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 08:24:27.613323 kubelet[1809]: W0513 08:24:27.613240 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-5ac23fdacd.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.613575 kubelet[1809]: E0513 08:24:27.613545 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-5ac23fdacd.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.624492 kubelet[1809]: W0513 08:24:27.624378 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.152:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.624731 kubelet[1809]: E0513 08:24:27.624476 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.152:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.625646 kubelet[1809]: I0513 08:24:27.625322 1809 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 08:24:27.633927 kubelet[1809]: I0513 08:24:27.633904 1809 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 08:24:27.634130 kubelet[1809]: W0513 08:24:27.634111 1809 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 08:24:27.635228 kubelet[1809]: I0513 08:24:27.635204 1809 server.go:1264] "Started kubelet"
May 13 08:24:27.649472 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 13 08:24:27.650974 kubelet[1809]: I0513 08:24:27.650945 1809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 08:24:27.654124 kubelet[1809]: I0513 08:24:27.654028 1809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 08:24:27.656429 kubelet[1809]: I0513 08:24:27.656387 1809 server.go:455] "Adding debug handlers to kubelet server"
May 13 08:24:27.659043 kubelet[1809]: I0513 08:24:27.658552 1809 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 08:24:27.659355 kubelet[1809]: I0513 08:24:27.659241 1809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 08:24:27.659574 kubelet[1809]: I0513 08:24:27.659507 1809 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 08:24:27.659694 kubelet[1809]: I0513 08:24:27.659633 1809 reconciler.go:26] "Reconciler: start to sync state"
May 13 08:24:27.661813 kubelet[1809]: I0513 08:24:27.661787 1809 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 08:24:27.662244 kubelet[1809]: W0513 08:24:27.662176 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.662446 kubelet[1809]: E0513 08:24:27.662404 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.663419 kubelet[1809]: E0513 08:24:27.663348 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-5ac23fdacd.novalocal?timeout=10s\": dial tcp 172.24.4.152:6443: connect: connection refused" interval="200ms"
May 13 08:24:27.663996 kubelet[1809]: I0513 08:24:27.663938 1809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 08:24:27.685509 kubelet[1809]: E0513 08:24:27.685468 1809 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 08:24:27.687474 kubelet[1809]: E0513 08:24:27.687271 1809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.152:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.152:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-n-5ac23fdacd.novalocal.183f08a0e8b770c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-n-5ac23fdacd.novalocal,UID:ci-3510-3-7-n-5ac23fdacd.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-n-5ac23fdacd.novalocal,},FirstTimestamp:2025-05-13 08:24:27.635167431 +0000 UTC m=+1.587726371,LastTimestamp:2025-05-13 08:24:27.635167431 +0000 UTC m=+1.587726371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-n-5ac23fdacd.novalocal,}"
May 13 08:24:27.692357 kubelet[1809]: I0513 08:24:27.692324 1809 factory.go:221] Registration of the containerd container factory successfully
May 13 08:24:27.692357 kubelet[1809]: I0513 08:24:27.692344 1809 factory.go:221] Registration of the systemd container factory successfully
May 13 08:24:27.711578 kubelet[1809]: I0513 08:24:27.711521 1809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 08:24:27.712443 kubelet[1809]: I0513 08:24:27.712395 1809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 08:24:27.712443 kubelet[1809]: I0513 08:24:27.712424 1809 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 08:24:27.712525 kubelet[1809]: I0513 08:24:27.712451 1809 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 08:24:27.712525 kubelet[1809]: E0513 08:24:27.712492 1809 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 08:24:27.721746 kubelet[1809]: W0513 08:24:27.721643 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.721746 kubelet[1809]: E0513 08:24:27.721748 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused
May 13 08:24:27.728454 kubelet[1809]: I0513 08:24:27.728429 1809 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 08:24:27.728454 kubelet[1809]: I0513 08:24:27.728459 1809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 08:24:27.728587 kubelet[1809]: I0513 08:24:27.728480 1809 state_mem.go:36] "Initialized new in-memory state store"
May 13 08:24:27.733057 kubelet[1809]: I0513 08:24:27.733026 1809 policy_none.go:49] "None policy: Start"
May 13 08:24:27.733914 kubelet[1809]: I0513 08:24:27.733898 1809 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 08:24:27.734035 kubelet[1809]: I0513 08:24:27.734025 1809 state_mem.go:35] "Initializing new in-memory state store"
May 13 08:24:27.740862 kubelet[1809]: I0513 08:24:27.740821 1809 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 08:24:27.741232 kubelet[1809]: I0513 08:24:27.741196 1809 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 08:24:27.741431 kubelet[1809]: I0513 08:24:27.741421 1809 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 08:24:27.744582 kubelet[1809]: E0513 08:24:27.744557 1809 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" not found"
May 13 08:24:27.761765 kubelet[1809]: I0513 08:24:27.761745 1809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.762378 kubelet[1809]: E0513 08:24:27.762357 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.152:6443/api/v1/nodes\": dial tcp 172.24.4.152:6443: connect: connection refused" node="ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.813829 kubelet[1809]: I0513 08:24:27.813540 1809 topology_manager.go:215] "Topology Admit Handler" podUID="5aa87d6e63eb4292d05ec073c5288bdc" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.818721 kubelet[1809]: I0513 08:24:27.818627 1809 topology_manager.go:215] "Topology Admit Handler" podUID="62cce975b0e052752d2a7ac80e08326b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.822486 kubelet[1809]: I0513 08:24:27.822402 1809 topology_manager.go:215] "Topology Admit Handler" podUID="a966144f05540b18712ebdc302993167" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.861352 kubelet[1809]: I0513 08:24:27.861281 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a966144f05540b18712ebdc302993167-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"a966144f05540b18712ebdc302993167\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.861857 kubelet[1809]: I0513 08:24:27.861813 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aa87d6e63eb4292d05ec073c5288bdc-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"5aa87d6e63eb4292d05ec073c5288bdc\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.862098 kubelet[1809]: I0513 08:24:27.862063 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aa87d6e63eb4292d05ec073c5288bdc-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"5aa87d6e63eb4292d05ec073c5288bdc\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.862328 kubelet[1809]: I0513 08:24:27.862287 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aa87d6e63eb4292d05ec073c5288bdc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"5aa87d6e63eb4292d05ec073c5288bdc\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.862545 kubelet[1809]: I0513 08:24:27.862510 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.864961 kubelet[1809]: E0513 08:24:27.864903 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-5ac23fdacd.novalocal?timeout=10s\": dial tcp 172.24.4.152:6443: connect: connection refused" interval="400ms"
May 13 08:24:27.964720 kubelet[1809]: I0513 08:24:27.964022 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.964720 kubelet[1809]: I0513 08:24:27.964240 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.964720 kubelet[1809]: I0513 08:24:27.964391 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.964720 kubelet[1809]: I0513 08:24:27.964492 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.967728 kubelet[1809]: I0513 08:24:27.967636 1809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:27.968596 kubelet[1809]: E0513 08:24:27.968512 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.152:6443/api/v1/nodes\": dial tcp 172.24.4.152:6443: connect: connection refused" node="ci-3510-3-7-n-5ac23fdacd.novalocal"
May 13 08:24:28.140993 env[1257]: time="2025-05-13T08:24:28.140896768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal,Uid:5aa87d6e63eb4292d05ec073c5288bdc,Namespace:kube-system,Attempt:0,}"
May 13 08:24:28.145787 env[1257]: time="2025-05-13T08:24:28.144823602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal,Uid:62cce975b0e052752d2a7ac80e08326b,Namespace:kube-system,Attempt:0,}"
May 13 08:24:28.145787 env[1257]: time="2025-05-13T08:24:28.145320445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal,Uid:a966144f05540b18712ebdc302993167,Namespace:kube-system,Attempt:0,}"
May 13 08:24:28.266872 kubelet[1809]: E0513 08:24:28.266775 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-5ac23fdacd.novalocal?timeout=10s\": dial tcp 172.24.4.152:6443: connect: connection refused" interval="800ms"
May 13 08:24:28.373907 kubelet[1809]: I0513 08:24:28.373034 1809 kubelet_node_status.go:73] "Attempting to register node"
node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:28.373907 kubelet[1809]: E0513 08:24:28.373846 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.152:6443/api/v1/nodes\": dial tcp 172.24.4.152:6443: connect: connection refused" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:28.488327 kubelet[1809]: W0513 08:24:28.468879 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:28.488327 kubelet[1809]: E0513 08:24:28.469095 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:28.578465 kubelet[1809]: W0513 08:24:28.578385 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:28.578465 kubelet[1809]: E0513 08:24:28.578484 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:28.599771 kubelet[1809]: W0513 08:24:28.599579 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.152:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:28.599771 kubelet[1809]: E0513 08:24:28.599763 1809 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.152:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:28.964684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562294490.mount: Deactivated successfully. May 13 08:24:28.981330 env[1257]: time="2025-05-13T08:24:28.981247466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:28.989984 env[1257]: time="2025-05-13T08:24:28.989887060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:28.992202 env[1257]: time="2025-05-13T08:24:28.992125004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:28.996150 env[1257]: time="2025-05-13T08:24:28.996089339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:28.998067 env[1257]: time="2025-05-13T08:24:28.997999265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:28.999978 env[1257]: time="2025-05-13T08:24:28.999917878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.002446 env[1257]: time="2025-05-13T08:24:29.002381865Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.009458 env[1257]: time="2025-05-13T08:24:29.009381720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.017763 env[1257]: time="2025-05-13T08:24:29.017645638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.020905 env[1257]: time="2025-05-13T08:24:29.020844835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.024369 env[1257]: time="2025-05-13T08:24:29.024224742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.058880 env[1257]: time="2025-05-13T08:24:29.058781995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:24:29.068495 kubelet[1809]: E0513 08:24:29.068326 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-5ac23fdacd.novalocal?timeout=10s\": dial tcp 172.24.4.152:6443: connect: connection refused" interval="1.6s" May 13 08:24:29.078551 env[1257]: time="2025-05-13T08:24:29.078180372Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:29.078551 env[1257]: time="2025-05-13T08:24:29.078273887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:29.078551 env[1257]: time="2025-05-13T08:24:29.078308552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:29.079258 env[1257]: time="2025-05-13T08:24:29.079141587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3eb7538c363fa3309efa13cf2a90d7435db16d4e5148dff8fab3de1670307485 pid=1848 runtime=io.containerd.runc.v2 May 13 08:24:29.108556 env[1257]: time="2025-05-13T08:24:29.106429199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:29.108556 env[1257]: time="2025-05-13T08:24:29.106511994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:29.108556 env[1257]: time="2025-05-13T08:24:29.106541950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:29.108556 env[1257]: time="2025-05-13T08:24:29.106714765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08cfa46e6677f4fbde81d8b0a1f430f0657097211a96c6245e64c3ca1842e176 pid=1864 runtime=io.containerd.runc.v2 May 13 08:24:29.140836 kubelet[1809]: W0513 08:24:29.140724 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-5ac23fdacd.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:29.140836 kubelet[1809]: E0513 08:24:29.140804 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-5ac23fdacd.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.152:6443: connect: connection refused May 13 08:24:29.157423 env[1257]: time="2025-05-13T08:24:29.157332867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:29.157912 env[1257]: time="2025-05-13T08:24:29.157885625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:29.158044 env[1257]: time="2025-05-13T08:24:29.158022311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:29.158302 env[1257]: time="2025-05-13T08:24:29.158275567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e080aee2cef8421a07df1329874da6e253a0ab39bc46142981d61acb6abac65 pid=1904 runtime=io.containerd.runc.v2 May 13 08:24:29.179791 kubelet[1809]: I0513 08:24:29.179763 1809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:29.180990 kubelet[1809]: E0513 08:24:29.180962 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.152:6443/api/v1/nodes\": dial tcp 172.24.4.152:6443: connect: connection refused" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:29.203546 env[1257]: time="2025-05-13T08:24:29.203496765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal,Uid:5aa87d6e63eb4292d05ec073c5288bdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb7538c363fa3309efa13cf2a90d7435db16d4e5148dff8fab3de1670307485\"" May 13 08:24:29.213108 env[1257]: time="2025-05-13T08:24:29.209520377Z" level=info msg="CreateContainer within sandbox \"3eb7538c363fa3309efa13cf2a90d7435db16d4e5148dff8fab3de1670307485\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 08:24:29.268565 env[1257]: time="2025-05-13T08:24:29.268441259Z" level=info msg="CreateContainer within sandbox \"3eb7538c363fa3309efa13cf2a90d7435db16d4e5148dff8fab3de1670307485\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31729a0bd18f394ab57e7b0697e18c3ffd84d5b9f79ff1cd1a0bc0405cdfd24f\"" May 13 08:24:29.269855 env[1257]: time="2025-05-13T08:24:29.269805180Z" level=info msg="StartContainer for \"31729a0bd18f394ab57e7b0697e18c3ffd84d5b9f79ff1cd1a0bc0405cdfd24f\"" May 13 08:24:29.272016 env[1257]: time="2025-05-13T08:24:29.271971719Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal,Uid:62cce975b0e052752d2a7ac80e08326b,Namespace:kube-system,Attempt:0,} returns sandbox id \"08cfa46e6677f4fbde81d8b0a1f430f0657097211a96c6245e64c3ca1842e176\"" May 13 08:24:29.275802 env[1257]: time="2025-05-13T08:24:29.275757969Z" level=info msg="CreateContainer within sandbox \"08cfa46e6677f4fbde81d8b0a1f430f0657097211a96c6245e64c3ca1842e176\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 08:24:29.281458 env[1257]: time="2025-05-13T08:24:29.281368144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal,Uid:a966144f05540b18712ebdc302993167,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e080aee2cef8421a07df1329874da6e253a0ab39bc46142981d61acb6abac65\"" May 13 08:24:29.286969 env[1257]: time="2025-05-13T08:24:29.286930098Z" level=info msg="CreateContainer within sandbox \"6e080aee2cef8421a07df1329874da6e253a0ab39bc46142981d61acb6abac65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 08:24:29.326132 env[1257]: time="2025-05-13T08:24:29.326063094Z" level=info msg="CreateContainer within sandbox \"08cfa46e6677f4fbde81d8b0a1f430f0657097211a96c6245e64c3ca1842e176\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a0fde7021abba091d5f2b39b107967f1225c9e0025475cf59ab1a93990af1bde\"" May 13 08:24:29.326831 env[1257]: time="2025-05-13T08:24:29.326787514Z" level=info msg="StartContainer for \"a0fde7021abba091d5f2b39b107967f1225c9e0025475cf59ab1a93990af1bde\"" May 13 08:24:29.369920 env[1257]: time="2025-05-13T08:24:29.369800856Z" level=info msg="StartContainer for \"31729a0bd18f394ab57e7b0697e18c3ffd84d5b9f79ff1cd1a0bc0405cdfd24f\" returns successfully" May 13 08:24:29.374988 env[1257]: time="2025-05-13T08:24:29.374935919Z" level=info msg="CreateContainer within sandbox 
\"6e080aee2cef8421a07df1329874da6e253a0ab39bc46142981d61acb6abac65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c21ad7c97f94f9df3a7298e30892c1bc417179d9e88d9375f7de35ccd827a90e\"" May 13 08:24:29.375983 env[1257]: time="2025-05-13T08:24:29.375940114Z" level=info msg="StartContainer for \"c21ad7c97f94f9df3a7298e30892c1bc417179d9e88d9375f7de35ccd827a90e\"" May 13 08:24:29.456576 env[1257]: time="2025-05-13T08:24:29.456517545Z" level=info msg="StartContainer for \"a0fde7021abba091d5f2b39b107967f1225c9e0025475cf59ab1a93990af1bde\" returns successfully" May 13 08:24:29.493147 env[1257]: time="2025-05-13T08:24:29.493065776Z" level=info msg="StartContainer for \"c21ad7c97f94f9df3a7298e30892c1bc417179d9e88d9375f7de35ccd827a90e\" returns successfully" May 13 08:24:30.783446 kubelet[1809]: I0513 08:24:30.783408 1809 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:31.802289 kubelet[1809]: E0513 08:24:31.802247 1809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-n-5ac23fdacd.novalocal\" not found" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:31.905502 kubelet[1809]: I0513 08:24:31.903837 1809 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:31.997063 kubelet[1809]: E0513 08:24:31.996991 1809 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:32.617736 kubelet[1809]: I0513 08:24:32.617578 1809 apiserver.go:52] "Watching apiserver" May 13 08:24:32.660181 kubelet[1809]: I0513 08:24:32.660115 1809 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 08:24:34.746859 
kubelet[1809]: W0513 08:24:34.746791 1809 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:34.752824 systemd[1]: Reloading. May 13 08:24:34.888453 /usr/lib/systemd/system-generators/torcx-generator[2103]: time="2025-05-13T08:24:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 08:24:34.888486 /usr/lib/systemd/system-generators/torcx-generator[2103]: time="2025-05-13T08:24:34Z" level=info msg="torcx already run" May 13 08:24:34.994587 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 08:24:34.994613 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 08:24:35.021767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 08:24:35.128282 kubelet[1809]: I0513 08:24:35.128209 1809 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 08:24:35.128699 systemd[1]: Stopping kubelet.service... May 13 08:24:35.148108 systemd[1]: kubelet.service: Deactivated successfully. May 13 08:24:35.148465 systemd[1]: Stopped kubelet.service. May 13 08:24:35.151161 systemd[1]: Starting kubelet.service... May 13 08:24:35.411739 systemd[1]: Started kubelet.service. 
May 13 08:24:35.555418 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 08:24:35.555856 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 08:24:35.555907 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 08:24:35.556133 kubelet[2163]: I0513 08:24:35.556101 2163 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 08:24:35.566088 kubelet[2163]: I0513 08:24:35.566053 2163 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 08:24:35.566254 kubelet[2163]: I0513 08:24:35.566244 2163 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 08:24:35.566611 kubelet[2163]: I0513 08:24:35.566598 2163 server.go:927] "Client rotation is on, will bootstrap in background" May 13 08:24:35.568231 kubelet[2163]: I0513 08:24:35.568216 2163 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 08:24:35.569834 kubelet[2163]: I0513 08:24:35.569819 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 08:24:35.577397 kubelet[2163]: I0513 08:24:35.577375 2163 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 08:24:35.578092 kubelet[2163]: I0513 08:24:35.578064 2163 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 08:24:35.578332 kubelet[2163]: I0513 08:24:35.578152 2163 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-5ac23fdacd.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 08:24:35.578481 kubelet[2163]: I0513 08:24:35.578469 2163 topology_manager.go:138] "Creating topology manager with none 
policy" May 13 08:24:35.578540 kubelet[2163]: I0513 08:24:35.578532 2163 container_manager_linux.go:301] "Creating device plugin manager" May 13 08:24:35.578632 kubelet[2163]: I0513 08:24:35.578623 2163 state_mem.go:36] "Initialized new in-memory state store" May 13 08:24:35.578810 kubelet[2163]: I0513 08:24:35.578799 2163 kubelet.go:400] "Attempting to sync node with API server" May 13 08:24:35.578883 kubelet[2163]: I0513 08:24:35.578873 2163 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 08:24:35.578959 kubelet[2163]: I0513 08:24:35.578950 2163 kubelet.go:312] "Adding apiserver pod source" May 13 08:24:35.579036 kubelet[2163]: I0513 08:24:35.579025 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 08:24:35.585853 kubelet[2163]: I0513 08:24:35.585783 2163 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 08:24:35.586024 kubelet[2163]: I0513 08:24:35.586003 2163 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 08:24:35.586479 kubelet[2163]: I0513 08:24:35.586386 2163 server.go:1264] "Started kubelet" May 13 08:24:35.591397 kubelet[2163]: I0513 08:24:35.591361 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 08:24:35.601776 kubelet[2163]: I0513 08:24:35.601725 2163 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 08:24:35.603349 kubelet[2163]: I0513 08:24:35.603333 2163 server.go:455] "Adding debug handlers to kubelet server" May 13 08:24:35.605435 kubelet[2163]: I0513 08:24:35.605405 2163 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 08:24:35.607085 kubelet[2163]: I0513 08:24:35.607026 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 08:24:35.607301 kubelet[2163]: I0513 08:24:35.607274 2163 server.go:227] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 08:24:35.611611 kubelet[2163]: I0513 08:24:35.611532 2163 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 08:24:35.612244 kubelet[2163]: I0513 08:24:35.611738 2163 reconciler.go:26] "Reconciler: start to sync state" May 13 08:24:35.616243 kubelet[2163]: I0513 08:24:35.616193 2163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 08:24:35.620840 kubelet[2163]: E0513 08:24:35.620749 2163 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 08:24:35.622509 kubelet[2163]: I0513 08:24:35.621593 2163 factory.go:221] Registration of the containerd container factory successfully May 13 08:24:35.622509 kubelet[2163]: I0513 08:24:35.621604 2163 factory.go:221] Registration of the systemd container factory successfully May 13 08:24:35.631547 kubelet[2163]: I0513 08:24:35.631504 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 08:24:35.637711 kubelet[2163]: I0513 08:24:35.637683 2163 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 08:24:35.637816 kubelet[2163]: I0513 08:24:35.637715 2163 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 08:24:35.637816 kubelet[2163]: I0513 08:24:35.637734 2163 kubelet.go:2337] "Starting kubelet main sync loop" May 13 08:24:35.637816 kubelet[2163]: E0513 08:24:35.637792 2163 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 08:24:35.699367 kubelet[2163]: I0513 08:24:35.697884 2163 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 08:24:35.699528 kubelet[2163]: I0513 08:24:35.699513 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 08:24:35.699597 kubelet[2163]: I0513 08:24:35.699588 2163 state_mem.go:36] "Initialized new in-memory state store" May 13 08:24:35.699929 kubelet[2163]: I0513 08:24:35.699914 2163 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 08:24:35.700044 kubelet[2163]: I0513 08:24:35.700005 2163 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 08:24:35.700105 kubelet[2163]: I0513 08:24:35.700097 2163 policy_none.go:49] "None policy: Start" May 13 08:24:35.702598 kubelet[2163]: I0513 08:24:35.702582 2163 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 08:24:35.702747 kubelet[2163]: I0513 08:24:35.702737 2163 state_mem.go:35] "Initializing new in-memory state store" May 13 08:24:35.703023 kubelet[2163]: I0513 08:24:35.703012 2163 state_mem.go:75] "Updated machine memory state" May 13 08:24:35.704446 kubelet[2163]: I0513 08:24:35.704422 2163 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 08:24:35.704764 kubelet[2163]: I0513 08:24:35.704726 2163 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 08:24:35.704911 kubelet[2163]: I0513 08:24:35.704901 2163 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 08:24:35.708563 sudo[2196]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 08:24:35.708911 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 08:24:35.715740 kubelet[2163]: I0513 08:24:35.715713 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.729243 kubelet[2163]: I0513 08:24:35.729215 2163 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.729479 kubelet[2163]: I0513 08:24:35.729470 2163 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.739950 kubelet[2163]: I0513 08:24:35.739750 2163 topology_manager.go:215] "Topology Admit Handler" podUID="5aa87d6e63eb4292d05ec073c5288bdc" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.739950 kubelet[2163]: I0513 08:24:35.739897 2163 topology_manager.go:215] "Topology Admit Handler" podUID="62cce975b0e052752d2a7ac80e08326b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.739950 kubelet[2163]: I0513 08:24:35.739939 2163 topology_manager.go:215] "Topology Admit Handler" podUID="a966144f05540b18712ebdc302993167" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.751626 kubelet[2163]: W0513 08:24:35.751572 2163 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:35.751836 kubelet[2163]: W0513 08:24:35.751811 2163 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 
08:24:35.754224 kubelet[2163]: W0513 08:24:35.754204 2163 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 08:24:35.754388 kubelet[2163]: E0513 08:24:35.754353 2163 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.813369 kubelet[2163]: I0513 08:24:35.813329 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aa87d6e63eb4292d05ec073c5288bdc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"5aa87d6e63eb4292d05ec073c5288bdc\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.813636 kubelet[2163]: I0513 08:24:35.813608 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.813780 kubelet[2163]: I0513 08:24:35.813760 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aa87d6e63eb4292d05ec073c5288bdc-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"5aa87d6e63eb4292d05ec073c5288bdc\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.813897 kubelet[2163]: I0513 08:24:35.813882 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/5aa87d6e63eb4292d05ec073c5288bdc-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"5aa87d6e63eb4292d05ec073c5288bdc\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.814010 kubelet[2163]: I0513 08:24:35.813994 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.814113 kubelet[2163]: I0513 08:24:35.814099 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.814235 kubelet[2163]: I0513 08:24:35.814208 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.814350 kubelet[2163]: I0513 08:24:35.814333 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62cce975b0e052752d2a7ac80e08326b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: 
\"62cce975b0e052752d2a7ac80e08326b\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:35.814456 kubelet[2163]: I0513 08:24:35.814441 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a966144f05540b18712ebdc302993167-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal\" (UID: \"a966144f05540b18712ebdc302993167\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal" May 13 08:24:36.427022 sudo[2196]: pam_unix(sudo:session): session closed for user root May 13 08:24:36.586778 kubelet[2163]: I0513 08:24:36.586728 2163 apiserver.go:52] "Watching apiserver" May 13 08:24:36.612387 kubelet[2163]: I0513 08:24:36.612353 2163 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 08:24:36.743611 kubelet[2163]: I0513 08:24:36.742632 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-n-5ac23fdacd.novalocal" podStartSLOduration=2.742475661 podStartE2EDuration="2.742475661s" podCreationTimestamp="2025-05-13 08:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:36.74235822 +0000 UTC m=+1.319176434" watchObservedRunningTime="2025-05-13 08:24:36.742475661 +0000 UTC m=+1.319293886" May 13 08:24:36.774559 kubelet[2163]: I0513 08:24:36.774424 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-n-5ac23fdacd.novalocal" podStartSLOduration=1.7743609839999999 podStartE2EDuration="1.774360984s" podCreationTimestamp="2025-05-13 08:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:36.758945227 +0000 UTC m=+1.335763441" 
watchObservedRunningTime="2025-05-13 08:24:36.774360984 +0000 UTC m=+1.351179208" May 13 08:24:36.775203 kubelet[2163]: I0513 08:24:36.775110 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-5ac23fdacd.novalocal" podStartSLOduration=1.7750926379999998 podStartE2EDuration="1.775092638s" podCreationTimestamp="2025-05-13 08:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:36.773887817 +0000 UTC m=+1.350706041" watchObservedRunningTime="2025-05-13 08:24:36.775092638 +0000 UTC m=+1.351910863" May 13 08:24:39.217871 sudo[1429]: pam_unix(sudo:session): session closed for user root May 13 08:24:39.482033 sshd[1408]: pam_unix(sshd:session): session closed for user core May 13 08:24:39.488587 systemd[1]: sshd@6-172.24.4.152:22-172.24.4.1:37178.service: Deactivated successfully. May 13 08:24:39.490531 systemd[1]: session-7.scope: Deactivated successfully. May 13 08:24:39.493367 systemd-logind[1242]: Session 7 logged out. Waiting for processes to exit. May 13 08:24:39.496052 systemd-logind[1242]: Removed session 7. May 13 08:24:49.244880 kubelet[2163]: I0513 08:24:49.244831 2163 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 08:24:49.246494 env[1257]: time="2025-05-13T08:24:49.246393763Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 08:24:49.246954 kubelet[2163]: I0513 08:24:49.246899 2163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 08:24:50.177939 kubelet[2163]: I0513 08:24:50.177826 2163 topology_manager.go:215] "Topology Admit Handler" podUID="afe14a35-69ae-4086-8510-1723b953d326" podNamespace="kube-system" podName="kube-proxy-thp4v" May 13 08:24:50.209686 kubelet[2163]: I0513 08:24:50.209631 2163 topology_manager.go:215] "Topology Admit Handler" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" podNamespace="kube-system" podName="cilium-fzqws" May 13 08:24:50.216516 kubelet[2163]: I0513 08:24:50.216479 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afe14a35-69ae-4086-8510-1723b953d326-kube-proxy\") pod \"kube-proxy-thp4v\" (UID: \"afe14a35-69ae-4086-8510-1723b953d326\") " pod="kube-system/kube-proxy-thp4v" May 13 08:24:50.226496 kubelet[2163]: I0513 08:24:50.221641 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe14a35-69ae-4086-8510-1723b953d326-lib-modules\") pod \"kube-proxy-thp4v\" (UID: \"afe14a35-69ae-4086-8510-1723b953d326\") " pod="kube-system/kube-proxy-thp4v" May 13 08:24:50.226496 kubelet[2163]: I0513 08:24:50.221716 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trbxq\" (UniqueName: \"kubernetes.io/projected/afe14a35-69ae-4086-8510-1723b953d326-kube-api-access-trbxq\") pod \"kube-proxy-thp4v\" (UID: \"afe14a35-69ae-4086-8510-1723b953d326\") " pod="kube-system/kube-proxy-thp4v" May 13 08:24:50.226496 kubelet[2163]: I0513 08:24:50.221746 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe14a35-69ae-4086-8510-1723b953d326-xtables-lock\") 
pod \"kube-proxy-thp4v\" (UID: \"afe14a35-69ae-4086-8510-1723b953d326\") " pod="kube-system/kube-proxy-thp4v" May 13 08:24:50.318834 kubelet[2163]: I0513 08:24:50.318788 2163 topology_manager.go:215] "Topology Admit Handler" podUID="35c058ea-28d3-4987-ab2a-49510a55db2c" podNamespace="kube-system" podName="cilium-operator-599987898-s88nf" May 13 08:24:50.322836 kubelet[2163]: I0513 08:24:50.322796 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-cgroup\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.322836 kubelet[2163]: I0513 08:24:50.322837 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-run\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323071 kubelet[2163]: I0513 08:24:50.322859 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-etc-cni-netd\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323071 kubelet[2163]: I0513 08:24:50.322883 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hostproc\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323071 kubelet[2163]: I0513 08:24:50.322917 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/d3b66d18-0e9b-4cff-85bc-782d516c6b42-clustermesh-secrets\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323071 kubelet[2163]: I0513 08:24:50.322937 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-config-path\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323071 kubelet[2163]: I0513 08:24:50.322960 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-bpf-maps\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323071 kubelet[2163]: I0513 08:24:50.322978 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-lib-modules\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323236 kubelet[2163]: I0513 08:24:50.322997 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-net\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323236 kubelet[2163]: I0513 08:24:50.323016 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-kernel\") pod \"cilium-fzqws\" 
(UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323236 kubelet[2163]: I0513 08:24:50.323036 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hubble-tls\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323236 kubelet[2163]: I0513 08:24:50.323090 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cni-path\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323236 kubelet[2163]: I0513 08:24:50.323110 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xsv8\" (UniqueName: \"kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-kube-api-access-4xsv8\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.323236 kubelet[2163]: I0513 08:24:50.323131 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-xtables-lock\") pod \"cilium-fzqws\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") " pod="kube-system/cilium-fzqws" May 13 08:24:50.424213 kubelet[2163]: I0513 08:24:50.424182 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35c058ea-28d3-4987-ab2a-49510a55db2c-cilium-config-path\") pod \"cilium-operator-599987898-s88nf\" (UID: \"35c058ea-28d3-4987-ab2a-49510a55db2c\") " pod="kube-system/cilium-operator-599987898-s88nf" May 13 
08:24:50.424393 kubelet[2163]: I0513 08:24:50.424348 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nl2n\" (UniqueName: \"kubernetes.io/projected/35c058ea-28d3-4987-ab2a-49510a55db2c-kube-api-access-4nl2n\") pod \"cilium-operator-599987898-s88nf\" (UID: \"35c058ea-28d3-4987-ab2a-49510a55db2c\") " pod="kube-system/cilium-operator-599987898-s88nf" May 13 08:24:50.501320 env[1257]: time="2025-05-13T08:24:50.500160158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thp4v,Uid:afe14a35-69ae-4086-8510-1723b953d326,Namespace:kube-system,Attempt:0,}" May 13 08:24:50.521593 env[1257]: time="2025-05-13T08:24:50.521321166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fzqws,Uid:d3b66d18-0e9b-4cff-85bc-782d516c6b42,Namespace:kube-system,Attempt:0,}" May 13 08:24:50.600360 env[1257]: time="2025-05-13T08:24:50.600276903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:50.600598 env[1257]: time="2025-05-13T08:24:50.600323701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:50.600598 env[1257]: time="2025-05-13T08:24:50.600339630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:50.604353 env[1257]: time="2025-05-13T08:24:50.604294602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3ed0efc0544bfda0349b6f2fc2e52287e70d01685b53f44886a98b9968e82ac pid=2248 runtime=io.containerd.runc.v2 May 13 08:24:50.610247 env[1257]: time="2025-05-13T08:24:50.609937801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:50.610247 env[1257]: time="2025-05-13T08:24:50.610001991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:50.610247 env[1257]: time="2025-05-13T08:24:50.610016318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:50.610727 env[1257]: time="2025-05-13T08:24:50.610363930Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58 pid=2260 runtime=io.containerd.runc.v2 May 13 08:24:50.623172 env[1257]: time="2025-05-13T08:24:50.623114567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s88nf,Uid:35c058ea-28d3-4987-ab2a-49510a55db2c,Namespace:kube-system,Attempt:0,}" May 13 08:24:50.733087 env[1257]: time="2025-05-13T08:24:50.732988188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:24:50.733296 env[1257]: time="2025-05-13T08:24:50.733046618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:24:50.733296 env[1257]: time="2025-05-13T08:24:50.733061907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:24:50.733601 env[1257]: time="2025-05-13T08:24:50.733566474Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c pid=2311 runtime=io.containerd.runc.v2 May 13 08:24:50.742948 env[1257]: time="2025-05-13T08:24:50.742909326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fzqws,Uid:d3b66d18-0e9b-4cff-85bc-782d516c6b42,Namespace:kube-system,Attempt:0,} returns sandbox id \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\"" May 13 08:24:50.747324 env[1257]: time="2025-05-13T08:24:50.745944471Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 08:24:50.761221 env[1257]: time="2025-05-13T08:24:50.760608067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thp4v,Uid:afe14a35-69ae-4086-8510-1723b953d326,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3ed0efc0544bfda0349b6f2fc2e52287e70d01685b53f44886a98b9968e82ac\"" May 13 08:24:50.764482 env[1257]: time="2025-05-13T08:24:50.763847225Z" level=info msg="CreateContainer within sandbox \"e3ed0efc0544bfda0349b6f2fc2e52287e70d01685b53f44886a98b9968e82ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 08:24:50.794254 env[1257]: time="2025-05-13T08:24:50.794198447Z" level=info msg="CreateContainer within sandbox \"e3ed0efc0544bfda0349b6f2fc2e52287e70d01685b53f44886a98b9968e82ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f09dcd621d1871d87bc309c5aa4510756964d8d61de34f0179942825f1b7e8b\"" May 13 08:24:50.796459 env[1257]: time="2025-05-13T08:24:50.796420036Z" level=info msg="StartContainer for \"7f09dcd621d1871d87bc309c5aa4510756964d8d61de34f0179942825f1b7e8b\"" May 13 08:24:50.831375 env[1257]: 
time="2025-05-13T08:24:50.831321737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s88nf,Uid:35c058ea-28d3-4987-ab2a-49510a55db2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\"" May 13 08:24:50.897687 env[1257]: time="2025-05-13T08:24:50.897600465Z" level=info msg="StartContainer for \"7f09dcd621d1871d87bc309c5aa4510756964d8d61de34f0179942825f1b7e8b\" returns successfully" May 13 08:24:55.757429 kubelet[2163]: I0513 08:24:55.757294 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-thp4v" podStartSLOduration=5.757260849 podStartE2EDuration="5.757260849s" podCreationTimestamp="2025-05-13 08:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:24:51.77326256 +0000 UTC m=+16.350080784" watchObservedRunningTime="2025-05-13 08:24:55.757260849 +0000 UTC m=+20.334079073" May 13 08:25:01.157100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721993810.mount: Deactivated successfully. 
May 13 08:25:07.330414 env[1257]: time="2025-05-13T08:25:07.330258721Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:07.337137 env[1257]: time="2025-05-13T08:25:07.337032630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:07.343041 env[1257]: time="2025-05-13T08:25:07.342954240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:07.346938 env[1257]: time="2025-05-13T08:25:07.345166670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 08:25:07.353599 env[1257]: time="2025-05-13T08:25:07.353514162Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 08:25:07.357574 env[1257]: time="2025-05-13T08:25:07.356793564Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 08:25:07.384009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729441572.mount: Deactivated successfully. May 13 08:25:07.393759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586909960.mount: Deactivated successfully. 
May 13 08:25:07.409525 env[1257]: time="2025-05-13T08:25:07.409431350Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\"" May 13 08:25:07.410552 env[1257]: time="2025-05-13T08:25:07.410496037Z" level=info msg="StartContainer for \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\"" May 13 08:25:07.576331 env[1257]: time="2025-05-13T08:25:07.576274991Z" level=info msg="StartContainer for \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\" returns successfully" May 13 08:25:08.381602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351-rootfs.mount: Deactivated successfully. May 13 08:25:08.604588 env[1257]: time="2025-05-13T08:25:08.604279446Z" level=info msg="shim disconnected" id=5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351 May 13 08:25:08.606354 env[1257]: time="2025-05-13T08:25:08.605632996Z" level=warning msg="cleaning up after shim disconnected" id=5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351 namespace=k8s.io May 13 08:25:08.606354 env[1257]: time="2025-05-13T08:25:08.605803025Z" level=info msg="cleaning up dead shim" May 13 08:25:08.671711 env[1257]: time="2025-05-13T08:25:08.670918119Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n" May 13 08:25:08.798863 env[1257]: time="2025-05-13T08:25:08.798782148Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 08:25:08.848289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121076663.mount: 
Deactivated successfully. May 13 08:25:08.869185 env[1257]: time="2025-05-13T08:25:08.869118828Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\"" May 13 08:25:08.871430 env[1257]: time="2025-05-13T08:25:08.871212626Z" level=info msg="StartContainer for \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\"" May 13 08:25:08.941401 env[1257]: time="2025-05-13T08:25:08.941091717Z" level=info msg="StartContainer for \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\" returns successfully" May 13 08:25:08.949886 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 08:25:08.950230 systemd[1]: Stopped systemd-sysctl.service. May 13 08:25:08.950433 systemd[1]: Stopping systemd-sysctl.service... May 13 08:25:08.956562 systemd[1]: Starting systemd-sysctl.service... May 13 08:25:08.972323 systemd[1]: Finished systemd-sysctl.service. 
May 13 08:25:08.999352 env[1257]: time="2025-05-13T08:25:08.999300294Z" level=info msg="shim disconnected" id=38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1 May 13 08:25:08.999664 env[1257]: time="2025-05-13T08:25:08.999628119Z" level=warning msg="cleaning up after shim disconnected" id=38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1 namespace=k8s.io May 13 08:25:08.999796 env[1257]: time="2025-05-13T08:25:08.999778141Z" level=info msg="cleaning up dead shim" May 13 08:25:09.008118 env[1257]: time="2025-05-13T08:25:09.008095355Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2639 runtime=io.containerd.runc.v2\n" May 13 08:25:09.380933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1-rootfs.mount: Deactivated successfully. May 13 08:25:09.802888 env[1257]: time="2025-05-13T08:25:09.802672912Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 08:25:09.844715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460648823.mount: Deactivated successfully. 
May 13 08:25:09.853837 env[1257]: time="2025-05-13T08:25:09.853696457Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\"" May 13 08:25:09.856485 env[1257]: time="2025-05-13T08:25:09.855285388Z" level=info msg="StartContainer for \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\"" May 13 08:25:10.023202 env[1257]: time="2025-05-13T08:25:10.023138199Z" level=info msg="StartContainer for \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\" returns successfully" May 13 08:25:10.118833 env[1257]: time="2025-05-13T08:25:10.118790843Z" level=info msg="shim disconnected" id=4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388 May 13 08:25:10.119119 env[1257]: time="2025-05-13T08:25:10.119099182Z" level=warning msg="cleaning up after shim disconnected" id=4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388 namespace=k8s.io May 13 08:25:10.119191 env[1257]: time="2025-05-13T08:25:10.119176016Z" level=info msg="cleaning up dead shim" May 13 08:25:10.134197 env[1257]: time="2025-05-13T08:25:10.134154628Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2696 runtime=io.containerd.runc.v2\n" May 13 08:25:10.381131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975973070.mount: Deactivated successfully. May 13 08:25:10.814930 env[1257]: time="2025-05-13T08:25:10.814517028Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 08:25:10.835838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447789336.mount: Deactivated successfully. 
May 13 08:25:10.860357 env[1257]: time="2025-05-13T08:25:10.860311794Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\"" May 13 08:25:10.861376 env[1257]: time="2025-05-13T08:25:10.861354620Z" level=info msg="StartContainer for \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\"" May 13 08:25:11.044871 env[1257]: time="2025-05-13T08:25:11.044823977Z" level=info msg="StartContainer for \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\" returns successfully" May 13 08:25:11.252528 env[1257]: time="2025-05-13T08:25:11.252450334Z" level=info msg="shim disconnected" id=4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3 May 13 08:25:11.252528 env[1257]: time="2025-05-13T08:25:11.252505798Z" level=warning msg="cleaning up after shim disconnected" id=4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3 namespace=k8s.io May 13 08:25:11.252528 env[1257]: time="2025-05-13T08:25:11.252516649Z" level=info msg="cleaning up dead shim" May 13 08:25:11.275793 env[1257]: time="2025-05-13T08:25:11.275630473Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:25:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2753 runtime=io.containerd.runc.v2\n" May 13 08:25:11.379048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3-rootfs.mount: Deactivated successfully. 
May 13 08:25:11.542283 env[1257]: time="2025-05-13T08:25:11.541186350Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.544570 env[1257]: time="2025-05-13T08:25:11.544506459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.547905 env[1257]: time="2025-05-13T08:25:11.547841235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:25:11.549513 env[1257]: time="2025-05-13T08:25:11.548697643Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 08:25:11.558307 env[1257]: time="2025-05-13T08:25:11.558230477Z" level=info msg="CreateContainer within sandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 08:25:11.589709 env[1257]: time="2025-05-13T08:25:11.589283565Z" level=info msg="CreateContainer within sandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\"" May 13 08:25:11.591881 env[1257]: time="2025-05-13T08:25:11.590613290Z" level=info msg="StartContainer for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\""
May 13 08:25:11.593559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038708351.mount: Deactivated successfully. May 13 08:25:11.715052 env[1257]: time="2025-05-13T08:25:11.714975256Z" level=info msg="StartContainer for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" returns successfully" May 13 08:25:11.825288 env[1257]: time="2025-05-13T08:25:11.824736178Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 08:25:11.845965 kubelet[2163]: I0513 08:25:11.845898 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s88nf" podStartSLOduration=1.127860815 podStartE2EDuration="21.845862924s" podCreationTimestamp="2025-05-13 08:24:50 +0000 UTC" firstStartedPulling="2025-05-13 08:24:50.832914336 +0000 UTC m=+15.409732510" lastFinishedPulling="2025-05-13 08:25:11.550916445 +0000 UTC m=+36.127734619" observedRunningTime="2025-05-13 08:25:11.843720854 +0000 UTC m=+36.420539058" watchObservedRunningTime="2025-05-13 08:25:11.845862924 +0000 UTC m=+36.422681098" May 13 08:25:12.480059 env[1257]: time="2025-05-13T08:25:12.479938726Z" level=info msg="CreateContainer within sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\"" May 13 08:25:12.482074 env[1257]: time="2025-05-13T08:25:12.482005223Z" level=info msg="StartContainer for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\"" May 13 08:25:12.768999 systemd[1]: run-containerd-runc-k8s.io-cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913-runc.kDPCkI.mount: Deactivated successfully.
May 13 08:25:13.013830 env[1257]: time="2025-05-13T08:25:13.013696783Z" level=info msg="StartContainer for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" returns successfully" May 13 08:25:13.248967 kubelet[2163]: I0513 08:25:13.248907 2163 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 08:25:13.379975 systemd[1]: run-containerd-runc-k8s.io-cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913-runc.iDWjwi.mount: Deactivated successfully. May 13 08:25:13.586695 kubelet[2163]: I0513 08:25:13.586544 2163 topology_manager.go:215] "Topology Admit Handler" podUID="c476d541-2543-46fd-847a-0cf45b057e32" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jgxhv" May 13 08:25:13.609360 kubelet[2163]: I0513 08:25:13.609300 2163 topology_manager.go:215] "Topology Admit Handler" podUID="28ddb01d-d467-41c6-ad04-44b89d09f8c7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4gx7v" May 13 08:25:13.735064 kubelet[2163]: I0513 08:25:13.735016 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c476d541-2543-46fd-847a-0cf45b057e32-config-volume\") pod \"coredns-7db6d8ff4d-jgxhv\" (UID: \"c476d541-2543-46fd-847a-0cf45b057e32\") " pod="kube-system/coredns-7db6d8ff4d-jgxhv" May 13 08:25:13.735258 kubelet[2163]: I0513 08:25:13.735079 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7dc\" (UniqueName: \"kubernetes.io/projected/28ddb01d-d467-41c6-ad04-44b89d09f8c7-kube-api-access-nm7dc\") pod \"coredns-7db6d8ff4d-4gx7v\" (UID: \"28ddb01d-d467-41c6-ad04-44b89d09f8c7\") " pod="kube-system/coredns-7db6d8ff4d-4gx7v"
May 13 08:25:13.735258 kubelet[2163]: I0513 08:25:13.735112 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdsj\" (UniqueName: \"kubernetes.io/projected/c476d541-2543-46fd-847a-0cf45b057e32-kube-api-access-hbdsj\") pod \"coredns-7db6d8ff4d-jgxhv\" (UID: \"c476d541-2543-46fd-847a-0cf45b057e32\") " pod="kube-system/coredns-7db6d8ff4d-jgxhv" May 13 08:25:13.735258 kubelet[2163]: I0513 08:25:13.735134 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ddb01d-d467-41c6-ad04-44b89d09f8c7-config-volume\") pod \"coredns-7db6d8ff4d-4gx7v\" (UID: \"28ddb01d-d467-41c6-ad04-44b89d09f8c7\") " pod="kube-system/coredns-7db6d8ff4d-4gx7v" May 13 08:25:14.098707 kubelet[2163]: I0513 08:25:14.098596 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fzqws" podStartSLOduration=7.494767218 podStartE2EDuration="24.098532201s" podCreationTimestamp="2025-05-13 08:24:50 +0000 UTC" firstStartedPulling="2025-05-13 08:24:50.74524605 +0000 UTC m=+15.322064234" lastFinishedPulling="2025-05-13 08:25:07.349010993 +0000 UTC m=+31.925829217" observedRunningTime="2025-05-13 08:25:14.097415185 +0000 UTC m=+38.674233369" watchObservedRunningTime="2025-05-13 08:25:14.098532201 +0000 UTC m=+38.675350375" May 13 08:25:14.190952 env[1257]: time="2025-05-13T08:25:14.190875604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgxhv,Uid:c476d541-2543-46fd-847a-0cf45b057e32,Namespace:kube-system,Attempt:0,}" May 13 08:25:14.215507 env[1257]: time="2025-05-13T08:25:14.215447142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4gx7v,Uid:28ddb01d-d467-41c6-ad04-44b89d09f8c7,Namespace:kube-system,Attempt:0,}" May 13 08:25:16.072318 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 08:25:16.072470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 08:25:16.072613 systemd-networkd[1035]: cilium_host: Link UP May 13 08:25:16.072860 systemd-networkd[1035]: cilium_net: Link UP
May 13 08:25:16.073489 systemd-networkd[1035]: cilium_net: Gained carrier May 13 08:25:16.073664 systemd-networkd[1035]: cilium_host: Gained carrier May 13 08:25:16.194964 systemd-networkd[1035]: cilium_vxlan: Link UP May 13 08:25:16.194970 systemd-networkd[1035]: cilium_vxlan: Gained carrier May 13 08:25:16.624806 systemd-networkd[1035]: cilium_net: Gained IPv6LL May 13 08:25:16.635770 kernel: NET: Registered PF_ALG protocol family May 13 08:25:16.815902 systemd-networkd[1035]: cilium_host: Gained IPv6LL May 13 08:25:17.503781 systemd-networkd[1035]: lxc_health: Link UP May 13 08:25:17.520273 systemd-networkd[1035]: lxc_health: Gained carrier May 13 08:25:17.520715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 08:25:17.647963 systemd-networkd[1035]: cilium_vxlan: Gained IPv6LL May 13 08:25:17.950695 systemd-networkd[1035]: lxccbf6976b7676: Link UP May 13 08:25:17.962762 kernel: eth0: renamed from tmp7db48 May 13 08:25:17.963938 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccbf6976b7676: link becomes ready May 13 08:25:17.965354 systemd-networkd[1035]: lxccbf6976b7676: Gained carrier May 13 08:25:17.989399 systemd-networkd[1035]: lxccccb0528f159: Link UP May 13 08:25:18.000695 kernel: eth0: renamed from tmpf4b72 May 13 08:25:18.007363 systemd-networkd[1035]: lxccccb0528f159: Gained carrier May 13 08:25:18.007879 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccccb0528f159: link becomes ready May 13 08:25:18.607939 systemd-networkd[1035]: lxc_health: Gained IPv6LL May 13 08:25:19.061809 systemd-networkd[1035]: lxccbf6976b7676: Gained IPv6LL May 13 08:25:19.375934 systemd-networkd[1035]: lxccccb0528f159: Gained IPv6LL
May 13 08:25:22.568748 env[1257]: time="2025-05-13T08:25:22.561857305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:25:22.568748 env[1257]: time="2025-05-13T08:25:22.561909655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:25:22.568748 env[1257]: time="2025-05-13T08:25:22.561923552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:25:22.568748 env[1257]: time="2025-05-13T08:25:22.562062346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4b722dc7ed47e6b1082ae7e7060aab5c37bcae6c7d6fabdd1dc4ad9fe157ebe pid=3327 runtime=io.containerd.runc.v2
May 13 08:25:22.626582 env[1257]: time="2025-05-13T08:25:22.626453426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:25:22.626787 env[1257]: time="2025-05-13T08:25:22.626594265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:25:22.626787 env[1257]: time="2025-05-13T08:25:22.626624903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:25:22.629899 env[1257]: time="2025-05-13T08:25:22.629003129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7db48db06f05b93002e3171d7c4d6bf35808662ea2f05086f5f817d4b90f7780 pid=3355 runtime=io.containerd.runc.v2 May 13 08:25:22.687432 env[1257]: time="2025-05-13T08:25:22.687363968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgxhv,Uid:c476d541-2543-46fd-847a-0cf45b057e32,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4b722dc7ed47e6b1082ae7e7060aab5c37bcae6c7d6fabdd1dc4ad9fe157ebe\"" May 13 08:25:22.694486 env[1257]: time="2025-05-13T08:25:22.694436367Z" level=info msg="CreateContainer within sandbox \"f4b722dc7ed47e6b1082ae7e7060aab5c37bcae6c7d6fabdd1dc4ad9fe157ebe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 08:25:22.736867 env[1257]: time="2025-05-13T08:25:22.736817639Z" level=info msg="CreateContainer within sandbox \"f4b722dc7ed47e6b1082ae7e7060aab5c37bcae6c7d6fabdd1dc4ad9fe157ebe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2599866001dce304efa40a5b40c7bf9e5484831b811042e3bc514411ae7c6330\"" May 13 08:25:22.739732 env[1257]: time="2025-05-13T08:25:22.739684286Z" level=info msg="StartContainer for \"2599866001dce304efa40a5b40c7bf9e5484831b811042e3bc514411ae7c6330\"" May 13 08:25:22.760994 env[1257]: time="2025-05-13T08:25:22.760937621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4gx7v,Uid:28ddb01d-d467-41c6-ad04-44b89d09f8c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7db48db06f05b93002e3171d7c4d6bf35808662ea2f05086f5f817d4b90f7780\"" May 13 08:25:22.773207 env[1257]: time="2025-05-13T08:25:22.773147880Z" level=info msg="CreateContainer within sandbox \"7db48db06f05b93002e3171d7c4d6bf35808662ea2f05086f5f817d4b90f7780\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 08:25:22.817656 env[1257]: time="2025-05-13T08:25:22.817476656Z" level=info msg="CreateContainer within sandbox \"7db48db06f05b93002e3171d7c4d6bf35808662ea2f05086f5f817d4b90f7780\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f15537508f9072b5526c97a4b1972cb2adfeb417a57fcb1b646dc88cf7375a7\"" May 13 08:25:22.819850 env[1257]: time="2025-05-13T08:25:22.818461925Z" level=info msg="StartContainer for \"1f15537508f9072b5526c97a4b1972cb2adfeb417a57fcb1b646dc88cf7375a7\"" May 13 08:25:22.836337 env[1257]: time="2025-05-13T08:25:22.836292402Z" level=info msg="StartContainer for \"2599866001dce304efa40a5b40c7bf9e5484831b811042e3bc514411ae7c6330\" returns successfully" May 13 08:25:22.935152 env[1257]: time="2025-05-13T08:25:22.935103550Z" level=info msg="StartContainer for \"1f15537508f9072b5526c97a4b1972cb2adfeb417a57fcb1b646dc88cf7375a7\" returns successfully" May 13 08:25:23.572814 systemd[1]: run-containerd-runc-k8s.io-7db48db06f05b93002e3171d7c4d6bf35808662ea2f05086f5f817d4b90f7780-runc.8f5jsO.mount: Deactivated successfully.
May 13 08:25:23.935146 kubelet[2163]: I0513 08:25:23.934897 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jgxhv" podStartSLOduration=33.934858212 podStartE2EDuration="33.934858212s" podCreationTimestamp="2025-05-13 08:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:25:22.92894794 +0000 UTC m=+47.505766134" watchObservedRunningTime="2025-05-13 08:25:23.934858212 +0000 UTC m=+48.511676436" May 13 08:25:23.936347 kubelet[2163]: I0513 08:25:23.935323 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4gx7v" podStartSLOduration=33.935308891 podStartE2EDuration="33.935308891s" podCreationTimestamp="2025-05-13 08:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:25:23.930870038 +0000 UTC m=+48.507688292" watchObservedRunningTime="2025-05-13 08:25:23.935308891 +0000 UTC m=+48.512127115" May 13 08:26:26.270327 kernel: hrtimer: interrupt took 3367011 ns May 13 08:29:11.024484 systemd[1]: Started sshd@7-172.24.4.152:22-172.24.4.1:38870.service. May 13 08:29:12.159105 sshd[3516]: Accepted publickey for core from 172.24.4.1 port 38870 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:12.165039 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:12.188912 systemd-logind[1242]: New session 8 of user core. May 13 08:29:12.193444 systemd[1]: Started session-8.scope. May 13 08:29:12.957834 sshd[3516]: pam_unix(sshd:session): session closed for user core May 13 08:29:12.961380 systemd[1]: sshd@7-172.24.4.152:22-172.24.4.1:38870.service: Deactivated successfully. May 13 08:29:12.963454 systemd[1]: session-8.scope: Deactivated successfully. 
May 13 08:29:12.964051 systemd-logind[1242]: Session 8 logged out. Waiting for processes to exit. May 13 08:29:12.965460 systemd-logind[1242]: Removed session 8. May 13 08:29:17.977915 systemd[1]: Started sshd@8-172.24.4.152:22-172.24.4.1:52934.service. May 13 08:29:19.139903 sshd[3530]: Accepted publickey for core from 172.24.4.1 port 52934 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:19.142578 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:19.157085 systemd-logind[1242]: New session 9 of user core. May 13 08:29:19.159219 systemd[1]: Started session-9.scope. May 13 08:29:20.179902 sshd[3530]: pam_unix(sshd:session): session closed for user core May 13 08:29:20.192925 systemd[1]: sshd@8-172.24.4.152:22-172.24.4.1:52934.service: Deactivated successfully. May 13 08:29:20.204304 systemd-logind[1242]: Session 9 logged out. Waiting for processes to exit. May 13 08:29:20.204588 systemd[1]: session-9.scope: Deactivated successfully. May 13 08:29:20.208039 systemd-logind[1242]: Removed session 9. May 13 08:29:25.220871 systemd[1]: Started sshd@9-172.24.4.152:22-172.24.4.1:50788.service. May 13 08:29:26.808889 sshd[3545]: Accepted publickey for core from 172.24.4.1 port 50788 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:26.814141 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:26.835457 systemd-logind[1242]: New session 10 of user core. May 13 08:29:26.840702 systemd[1]: Started session-10.scope. May 13 08:29:27.680971 sshd[3545]: pam_unix(sshd:session): session closed for user core May 13 08:29:27.687641 systemd[1]: sshd@9-172.24.4.152:22-172.24.4.1:50788.service: Deactivated successfully. May 13 08:29:27.690941 systemd[1]: session-10.scope: Deactivated successfully. May 13 08:29:27.696336 systemd-logind[1242]: Session 10 logged out. Waiting for processes to exit. 
May 13 08:29:27.702854 systemd-logind[1242]: Removed session 10. May 13 08:29:32.703876 systemd[1]: Started sshd@10-172.24.4.152:22-172.24.4.1:50804.service. May 13 08:29:34.109161 sshd[3559]: Accepted publickey for core from 172.24.4.1 port 50804 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:34.114008 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:34.127123 systemd-logind[1242]: New session 11 of user core. May 13 08:29:34.131103 systemd[1]: Started session-11.scope. May 13 08:29:34.866235 sshd[3559]: pam_unix(sshd:session): session closed for user core May 13 08:29:34.871532 systemd[1]: Started sshd@11-172.24.4.152:22-172.24.4.1:42112.service. May 13 08:29:34.878284 systemd[1]: sshd@10-172.24.4.152:22-172.24.4.1:50804.service: Deactivated successfully. May 13 08:29:34.885850 systemd[1]: session-11.scope: Deactivated successfully. May 13 08:29:34.891075 systemd-logind[1242]: Session 11 logged out. Waiting for processes to exit. May 13 08:29:34.899703 systemd-logind[1242]: Removed session 11. May 13 08:29:36.193286 sshd[3570]: Accepted publickey for core from 172.24.4.1 port 42112 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:36.193864 sshd[3570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:36.201033 systemd[1]: Started session-12.scope. May 13 08:29:36.201078 systemd-logind[1242]: New session 12 of user core. May 13 08:29:37.146731 sshd[3570]: pam_unix(sshd:session): session closed for user core May 13 08:29:37.169047 systemd[1]: Started sshd@12-172.24.4.152:22-172.24.4.1:42118.service. May 13 08:29:37.170067 systemd[1]: sshd@11-172.24.4.152:22-172.24.4.1:42112.service: Deactivated successfully. May 13 08:29:37.173971 systemd[1]: session-12.scope: Deactivated successfully. May 13 08:29:37.174497 systemd-logind[1242]: Session 12 logged out. Waiting for processes to exit. 
May 13 08:29:37.183558 systemd-logind[1242]: Removed session 12. May 13 08:29:38.466713 sshd[3583]: Accepted publickey for core from 172.24.4.1 port 42118 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:38.470781 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:38.496851 systemd-logind[1242]: New session 13 of user core. May 13 08:29:38.499767 systemd[1]: Started session-13.scope. May 13 08:29:39.319040 sshd[3583]: pam_unix(sshd:session): session closed for user core May 13 08:29:39.323810 systemd[1]: sshd@12-172.24.4.152:22-172.24.4.1:42118.service: Deactivated successfully. May 13 08:29:39.325454 systemd[1]: session-13.scope: Deactivated successfully. May 13 08:29:39.325496 systemd-logind[1242]: Session 13 logged out. Waiting for processes to exit. May 13 08:29:39.327121 systemd-logind[1242]: Removed session 13. May 13 08:29:44.340729 systemd[1]: Started sshd@13-172.24.4.152:22-172.24.4.1:57806.service. May 13 08:29:45.736096 sshd[3598]: Accepted publickey for core from 172.24.4.1 port 57806 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:45.741883 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:45.756831 systemd-logind[1242]: New session 14 of user core. May 13 08:29:45.759121 systemd[1]: Started session-14.scope. May 13 08:29:46.580271 sshd[3598]: pam_unix(sshd:session): session closed for user core May 13 08:29:46.586516 systemd[1]: sshd@13-172.24.4.152:22-172.24.4.1:57806.service: Deactivated successfully. May 13 08:29:46.589531 systemd[1]: session-14.scope: Deactivated successfully. May 13 08:29:46.597148 systemd-logind[1242]: Session 14 logged out. Waiting for processes to exit. May 13 08:29:46.600511 systemd-logind[1242]: Removed session 14. May 13 08:29:51.605161 systemd[1]: Started sshd@14-172.24.4.152:22-172.24.4.1:57822.service. 
May 13 08:29:52.925715 sshd[3613]: Accepted publickey for core from 172.24.4.1 port 57822 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:52.943495 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:52.969770 systemd-logind[1242]: New session 15 of user core. May 13 08:29:52.976819 systemd[1]: Started session-15.scope. May 13 08:29:53.830590 sshd[3613]: pam_unix(sshd:session): session closed for user core May 13 08:29:53.840366 systemd[1]: Started sshd@15-172.24.4.152:22-172.24.4.1:40316.service. May 13 08:29:53.846233 systemd[1]: sshd@14-172.24.4.152:22-172.24.4.1:57822.service: Deactivated successfully. May 13 08:29:53.862570 systemd[1]: session-15.scope: Deactivated successfully. May 13 08:29:53.864072 systemd-logind[1242]: Session 15 logged out. Waiting for processes to exit. May 13 08:29:53.869773 systemd-logind[1242]: Removed session 15. May 13 08:29:55.178643 sshd[3625]: Accepted publickey for core from 172.24.4.1 port 40316 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:55.183455 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:55.195148 systemd-logind[1242]: New session 16 of user core. May 13 08:29:55.199427 systemd[1]: Started session-16.scope. May 13 08:29:56.065418 sshd[3625]: pam_unix(sshd:session): session closed for user core May 13 08:29:56.071278 systemd[1]: Started sshd@16-172.24.4.152:22-172.24.4.1:40326.service. May 13 08:29:56.075042 systemd[1]: sshd@15-172.24.4.152:22-172.24.4.1:40316.service: Deactivated successfully. May 13 08:29:56.080036 systemd-logind[1242]: Session 16 logged out. Waiting for processes to exit. May 13 08:29:56.086771 systemd[1]: session-16.scope: Deactivated successfully. May 13 08:29:56.092510 systemd-logind[1242]: Removed session 16. 
May 13 08:29:57.354598 sshd[3636]: Accepted publickey for core from 172.24.4.1 port 40326 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:29:57.357986 sshd[3636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:29:57.372342 systemd[1]: Started session-17.scope. May 13 08:29:57.373002 systemd-logind[1242]: New session 17 of user core. May 13 08:30:01.429816 sshd[3636]: pam_unix(sshd:session): session closed for user core May 13 08:30:01.484635 systemd[1]: Started sshd@17-172.24.4.152:22-172.24.4.1:40338.service. May 13 08:30:01.489419 systemd[1]: sshd@16-172.24.4.152:22-172.24.4.1:40326.service: Deactivated successfully. May 13 08:30:01.495933 systemd-logind[1242]: Session 17 logged out. Waiting for processes to exit. May 13 08:30:01.511952 systemd[1]: session-17.scope: Deactivated successfully. May 13 08:30:01.519608 systemd-logind[1242]: Removed session 17. May 13 08:30:02.951545 sshd[3656]: Accepted publickey for core from 172.24.4.1 port 40338 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:02.955961 sshd[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:02.976907 systemd-logind[1242]: New session 18 of user core. May 13 08:30:02.979309 systemd[1]: Started session-18.scope. May 13 08:30:03.876102 sshd[3656]: pam_unix(sshd:session): session closed for user core May 13 08:30:03.880792 systemd[1]: Started sshd@18-172.24.4.152:22-172.24.4.1:39926.service. May 13 08:30:03.889340 systemd[1]: sshd@17-172.24.4.152:22-172.24.4.1:40338.service: Deactivated successfully. May 13 08:30:03.896853 systemd[1]: session-18.scope: Deactivated successfully. May 13 08:30:03.899206 systemd-logind[1242]: Session 18 logged out. Waiting for processes to exit. May 13 08:30:03.902850 systemd-logind[1242]: Removed session 18. 
May 13 08:30:05.554100 sshd[3667]: Accepted publickey for core from 172.24.4.1 port 39926 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:05.557450 sshd[3667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:05.569648 systemd-logind[1242]: New session 19 of user core. May 13 08:30:05.570643 systemd[1]: Started session-19.scope. May 13 08:30:06.387947 sshd[3667]: pam_unix(sshd:session): session closed for user core May 13 08:30:06.397360 systemd-logind[1242]: Session 19 logged out. Waiting for processes to exit. May 13 08:30:06.400019 systemd[1]: sshd@18-172.24.4.152:22-172.24.4.1:39926.service: Deactivated successfully. May 13 08:30:06.410692 systemd[1]: session-19.scope: Deactivated successfully. May 13 08:30:06.413071 systemd-logind[1242]: Removed session 19. May 13 08:30:11.393991 systemd[1]: Started sshd@19-172.24.4.152:22-172.24.4.1:39936.service. May 13 08:30:12.939186 sshd[3682]: Accepted publickey for core from 172.24.4.1 port 39936 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:12.942478 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:12.955767 systemd-logind[1242]: New session 20 of user core. May 13 08:30:12.958756 systemd[1]: Started session-20.scope. May 13 08:30:13.853986 sshd[3682]: pam_unix(sshd:session): session closed for user core May 13 08:30:13.860189 systemd[1]: sshd@19-172.24.4.152:22-172.24.4.1:39936.service: Deactivated successfully. May 13 08:30:13.866358 systemd[1]: session-20.scope: Deactivated successfully. May 13 08:30:13.866786 systemd-logind[1242]: Session 20 logged out. Waiting for processes to exit. May 13 08:30:13.885782 systemd-logind[1242]: Removed session 20. May 13 08:30:18.873292 systemd[1]: Started sshd@20-172.24.4.152:22-172.24.4.1:53780.service. 
May 13 08:30:20.137913 sshd[3697]: Accepted publickey for core from 172.24.4.1 port 53780 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:20.142632 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:20.159807 systemd-logind[1242]: New session 21 of user core. May 13 08:30:20.161898 systemd[1]: Started session-21.scope. May 13 08:30:20.924163 sshd[3697]: pam_unix(sshd:session): session closed for user core May 13 08:30:20.933851 systemd-logind[1242]: Session 21 logged out. Waiting for processes to exit. May 13 08:30:20.934504 systemd[1]: sshd@20-172.24.4.152:22-172.24.4.1:53780.service: Deactivated successfully. May 13 08:30:20.936828 systemd[1]: session-21.scope: Deactivated successfully. May 13 08:30:20.938172 systemd-logind[1242]: Removed session 21. May 13 08:30:25.938156 systemd[1]: Started sshd@21-172.24.4.152:22-172.24.4.1:42236.service. May 13 08:30:27.065012 sshd[3713]: Accepted publickey for core from 172.24.4.1 port 42236 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:27.075467 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:27.093865 systemd-logind[1242]: New session 22 of user core. May 13 08:30:27.094994 systemd[1]: Started session-22.scope. May 13 08:30:27.861201 sshd[3713]: pam_unix(sshd:session): session closed for user core May 13 08:30:27.872992 systemd[1]: Started sshd@22-172.24.4.152:22-172.24.4.1:42244.service. May 13 08:30:27.880141 systemd[1]: sshd@21-172.24.4.152:22-172.24.4.1:42236.service: Deactivated successfully. May 13 08:30:27.907330 systemd[1]: session-22.scope: Deactivated successfully. May 13 08:30:27.909040 systemd-logind[1242]: Session 22 logged out. Waiting for processes to exit. May 13 08:30:27.914059 systemd-logind[1242]: Removed session 22. 
May 13 08:30:29.142167 sshd[3724]: Accepted publickey for core from 172.24.4.1 port 42244 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:29.147013 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:29.159695 systemd-logind[1242]: New session 23 of user core. May 13 08:30:29.160814 systemd[1]: Started session-23.scope. May 13 08:30:31.805987 env[1257]: time="2025-05-13T08:30:31.805449868Z" level=info msg="StopContainer for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" with timeout 30 (s)" May 13 08:30:31.809711 env[1257]: time="2025-05-13T08:30:31.808258955Z" level=info msg="Stop container \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" with signal terminated" May 13 08:30:31.861831 systemd[1]: run-containerd-runc-k8s.io-cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913-runc.ldwsFt.mount: Deactivated successfully. May 13 08:30:31.940028 env[1257]: time="2025-05-13T08:30:31.939206952Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 08:30:31.945489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e-rootfs.mount: Deactivated successfully. 
May 13 08:30:31.953234 env[1257]: time="2025-05-13T08:30:31.953191518Z" level=info msg="StopContainer for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" with timeout 2 (s)" May 13 08:30:31.954125 env[1257]: time="2025-05-13T08:30:31.954079784Z" level=info msg="Stop container \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" with signal terminated" May 13 08:30:31.973761 systemd-networkd[1035]: lxc_health: Link DOWN May 13 08:30:31.973827 systemd-networkd[1035]: lxc_health: Lost carrier May 13 08:30:32.026512 env[1257]: time="2025-05-13T08:30:32.026443594Z" level=info msg="shim disconnected" id=ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e May 13 08:30:32.026900 env[1257]: time="2025-05-13T08:30:32.026870484Z" level=warning msg="cleaning up after shim disconnected" id=ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e namespace=k8s.io May 13 08:30:32.027061 env[1257]: time="2025-05-13T08:30:32.027029392Z" level=info msg="cleaning up dead shim" May 13 08:30:32.070969 env[1257]: time="2025-05-13T08:30:32.069552681Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3783 runtime=io.containerd.runc.v2\n" May 13 08:30:32.082935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913-rootfs.mount: Deactivated successfully. 
May 13 08:30:32.085068 env[1257]: time="2025-05-13T08:30:32.085011201Z" level=info msg="StopContainer for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" returns successfully"
May 13 08:30:32.086523 env[1257]: time="2025-05-13T08:30:32.086488412Z" level=info msg="StopPodSandbox for \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\""
May 13 08:30:32.086842 env[1257]: time="2025-05-13T08:30:32.086805076Z" level=info msg="Container to stop \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 08:30:32.089683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c-shm.mount: Deactivated successfully.
May 13 08:30:32.102148 env[1257]: time="2025-05-13T08:30:32.102090762Z" level=info msg="shim disconnected" id=cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913
May 13 08:30:32.102551 env[1257]: time="2025-05-13T08:30:32.102528383Z" level=warning msg="cleaning up after shim disconnected" id=cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913 namespace=k8s.io
May 13 08:30:32.102762 env[1257]: time="2025-05-13T08:30:32.102740290Z" level=info msg="cleaning up dead shim"
May 13 08:30:32.131040 env[1257]: time="2025-05-13T08:30:32.130983437Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3815 runtime=io.containerd.runc.v2\n"
May 13 08:30:32.142546 env[1257]: time="2025-05-13T08:30:32.142428051Z" level=info msg="StopContainer for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" returns successfully"
May 13 08:30:32.144349 env[1257]: time="2025-05-13T08:30:32.144303278Z" level=info msg="StopPodSandbox for \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\""
May 13 08:30:32.145558 env[1257]: time="2025-05-13T08:30:32.144565489Z" level=info msg="Container to stop \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 08:30:32.145558 env[1257]: time="2025-05-13T08:30:32.144597048Z" level=info msg="Container to stop \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 08:30:32.145558 env[1257]: time="2025-05-13T08:30:32.144617577Z" level=info msg="Container to stop \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 08:30:32.147288 env[1257]: time="2025-05-13T08:30:32.144636543Z" level=info msg="Container to stop \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 08:30:32.147288 env[1257]: time="2025-05-13T08:30:32.146807684Z" level=info msg="Container to stop \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 08:30:32.161857 env[1257]: time="2025-05-13T08:30:32.161794811Z" level=info msg="shim disconnected" id=ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c
May 13 08:30:32.164615 env[1257]: time="2025-05-13T08:30:32.164585916Z" level=warning msg="cleaning up after shim disconnected" id=ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c namespace=k8s.io
May 13 08:30:32.164794 env[1257]: time="2025-05-13T08:30:32.164773578Z" level=info msg="cleaning up dead shim"
May 13 08:30:32.190082 env[1257]: time="2025-05-13T08:30:32.190011578Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3848 runtime=io.containerd.runc.v2\n"
May 13 08:30:32.191274 env[1257]: time="2025-05-13T08:30:32.191226617Z" level=info msg="TearDown network for sandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" successfully"
May 13 08:30:32.191461 env[1257]: time="2025-05-13T08:30:32.191435489Z" level=info msg="StopPodSandbox for \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" returns successfully"
May 13 08:30:32.258093 env[1257]: time="2025-05-13T08:30:32.258034088Z" level=info msg="shim disconnected" id=02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58
May 13 08:30:32.258983 env[1257]: time="2025-05-13T08:30:32.258901454Z" level=warning msg="cleaning up after shim disconnected" id=02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58 namespace=k8s.io
May 13 08:30:32.258983 env[1257]: time="2025-05-13T08:30:32.258972858Z" level=info msg="cleaning up dead shim"
May 13 08:30:32.281784 env[1257]: time="2025-05-13T08:30:32.281581257Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3877 runtime=io.containerd.runc.v2\n"
May 13 08:30:32.283029 env[1257]: time="2025-05-13T08:30:32.282957168Z" level=info msg="TearDown network for sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" successfully"
May 13 08:30:32.283150 env[1257]: time="2025-05-13T08:30:32.283024835Z" level=info msg="StopPodSandbox for \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" returns successfully"
May 13 08:30:32.355854 kubelet[2163]: I0513 08:30:32.353250 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nl2n\" (UniqueName: \"kubernetes.io/projected/35c058ea-28d3-4987-ab2a-49510a55db2c-kube-api-access-4nl2n\") pod \"35c058ea-28d3-4987-ab2a-49510a55db2c\" (UID: \"35c058ea-28d3-4987-ab2a-49510a55db2c\") "
May 13 08:30:32.362711 kubelet[2163]: I0513 08:30:32.362580 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35c058ea-28d3-4987-ab2a-49510a55db2c-cilium-config-path\") pod \"35c058ea-28d3-4987-ab2a-49510a55db2c\" (UID: \"35c058ea-28d3-4987-ab2a-49510a55db2c\") "
May 13 08:30:32.362711 kubelet[2163]: I0513 08:30:32.362711 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-lib-modules\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.362956 kubelet[2163]: I0513 08:30:32.362794 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-cgroup\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.362956 kubelet[2163]: I0513 08:30:32.362819 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-bpf-maps\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.362956 kubelet[2163]: I0513 08:30:32.362845 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-kernel\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.363296 kubelet[2163]: I0513 08:30:32.362909 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3b66d18-0e9b-4cff-85bc-782d516c6b42-clustermesh-secrets\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.363296 kubelet[2163]: I0513 08:30:32.363281 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hubble-tls\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.363763 kubelet[2163]: I0513 08:30:32.363736 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-xtables-lock\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.364722 kubelet[2163]: I0513 08:30:32.364641 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.370638 kubelet[2163]: I0513 08:30:32.370583 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35c058ea-28d3-4987-ab2a-49510a55db2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35c058ea-28d3-4987-ab2a-49510a55db2c" (UID: "35c058ea-28d3-4987-ab2a-49510a55db2c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 08:30:32.374552 kubelet[2163]: I0513 08:30:32.374491 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.374789 kubelet[2163]: I0513 08:30:32.374551 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.374789 kubelet[2163]: I0513 08:30:32.374596 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.374789 kubelet[2163]: I0513 08:30:32.374632 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.383142 kubelet[2163]: I0513 08:30:32.381494 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c058ea-28d3-4987-ab2a-49510a55db2c-kube-api-access-4nl2n" (OuterVolumeSpecName: "kube-api-access-4nl2n") pod "35c058ea-28d3-4987-ab2a-49510a55db2c" (UID: "35c058ea-28d3-4987-ab2a-49510a55db2c"). InnerVolumeSpecName "kube-api-access-4nl2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 08:30:32.385227 kubelet[2163]: I0513 08:30:32.385147 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b66d18-0e9b-4cff-85bc-782d516c6b42-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 08:30:32.387725 kubelet[2163]: I0513 08:30:32.387626 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 08:30:32.465804 kubelet[2163]: I0513 08:30:32.465591 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-run\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.466276 kubelet[2163]: I0513 08:30:32.465851 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-etc-cni-netd\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.466276 kubelet[2163]: I0513 08:30:32.465934 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-net\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.466276 kubelet[2163]: I0513 08:30:32.466007 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cni-path\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.466276 kubelet[2163]: I0513 08:30:32.466099 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xsv8\" (UniqueName: \"kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-kube-api-access-4xsv8\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.466276 kubelet[2163]: I0513 08:30:32.466166 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hostproc\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.466276 kubelet[2163]: I0513 08:30:32.466238 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-config-path\") pod \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\" (UID: \"d3b66d18-0e9b-4cff-85bc-782d516c6b42\") "
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466458 2163 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4nl2n\" (UniqueName: \"kubernetes.io/projected/35c058ea-28d3-4987-ab2a-49510a55db2c-kube-api-access-4nl2n\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466533 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35c058ea-28d3-4987-ab2a-49510a55db2c-cilium-config-path\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466579 2163 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-lib-modules\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466622 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466721 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-cgroup\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466792 2163 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-bpf-maps\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.467368 kubelet[2163]: I0513 08:30:32.466837 2163 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hubble-tls\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.468214 kubelet[2163]: I0513 08:30:32.466876 2163 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3b66d18-0e9b-4cff-85bc-782d516c6b42-clustermesh-secrets\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.468214 kubelet[2163]: I0513 08:30:32.466916 2163 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-xtables-lock\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.468787 kubelet[2163]: I0513 08:30:32.468617 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.469080 kubelet[2163]: I0513 08:30:32.469039 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.469373 kubelet[2163]: I0513 08:30:32.469317 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.474096 kubelet[2163]: I0513 08:30:32.473979 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 08:30:32.474422 kubelet[2163]: I0513 08:30:32.474133 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.474422 kubelet[2163]: I0513 08:30:32.474186 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:32.480536 kubelet[2163]: I0513 08:30:32.480460 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-kube-api-access-4xsv8" (OuterVolumeSpecName: "kube-api-access-4xsv8") pod "d3b66d18-0e9b-4cff-85bc-782d516c6b42" (UID: "d3b66d18-0e9b-4cff-85bc-782d516c6b42"). InnerVolumeSpecName "kube-api-access-4xsv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 08:30:32.568141 kubelet[2163]: I0513 08:30:32.568056 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-config-path\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.568645 kubelet[2163]: I0513 08:30:32.568606 2163 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-hostproc\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.569040 kubelet[2163]: I0513 08:30:32.568984 2163 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-etc-cni-netd\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.569347 kubelet[2163]: I0513 08:30:32.569294 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-host-proc-sys-net\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.569557 kubelet[2163]: I0513 08:30:32.569525 2163 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cni-path\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.569785 kubelet[2163]: I0513 08:30:32.569756 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3b66d18-0e9b-4cff-85bc-782d516c6b42-cilium-run\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.570073 kubelet[2163]: I0513 08:30:32.570036 2163 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4xsv8\" (UniqueName: \"kubernetes.io/projected/d3b66d18-0e9b-4cff-85bc-782d516c6b42-kube-api-access-4xsv8\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:32.706225 kubelet[2163]: I0513 08:30:32.704869 2163 scope.go:117] "RemoveContainer" containerID="ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e"
May 13 08:30:32.709994 env[1257]: time="2025-05-13T08:30:32.709897479Z" level=info msg="RemoveContainer for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\""
May 13 08:30:32.782241 env[1257]: time="2025-05-13T08:30:32.781975053Z" level=info msg="RemoveContainer for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" returns successfully"
May 13 08:30:32.789096 kubelet[2163]: I0513 08:30:32.788982 2163 scope.go:117] "RemoveContainer" containerID="ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e"
May 13 08:30:32.790924 env[1257]: time="2025-05-13T08:30:32.790747925Z" level=error msg="ContainerStatus for \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\": not found"
May 13 08:30:32.791515 kubelet[2163]: E0513 08:30:32.791451 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\": not found" containerID="ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e"
May 13 08:30:32.791876 kubelet[2163]: I0513 08:30:32.791570 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e"} err="failed to get container status \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab3cfce492755ff606344f06349bd02c7c403bac48ddbee2f038de1c9fac3e6e\": not found"
May 13 08:30:32.792008 kubelet[2163]: I0513 08:30:32.791882 2163 scope.go:117] "RemoveContainer" containerID="cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913"
May 13 08:30:32.794818 env[1257]: time="2025-05-13T08:30:32.794242369Z" level=info msg="RemoveContainer for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\""
May 13 08:30:32.825790 env[1257]: time="2025-05-13T08:30:32.825741292Z" level=info msg="RemoveContainer for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" returns successfully"
May 13 08:30:32.826512 kubelet[2163]: I0513 08:30:32.826488 2163 scope.go:117] "RemoveContainer" containerID="4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3"
May 13 08:30:32.828504 env[1257]: time="2025-05-13T08:30:32.828442317Z" level=info msg="RemoveContainer for \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\""
May 13 08:30:32.838790 env[1257]: time="2025-05-13T08:30:32.838718509Z" level=info msg="RemoveContainer for \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\" returns successfully"
May 13 08:30:32.839258 kubelet[2163]: I0513 08:30:32.839219 2163 scope.go:117] "RemoveContainer" containerID="4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388"
May 13 08:30:32.841837 env[1257]: time="2025-05-13T08:30:32.841793376Z" level=info msg="RemoveContainer for \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\""
May 13 08:30:32.852161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c-rootfs.mount: Deactivated successfully.
May 13 08:30:32.852391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58-rootfs.mount: Deactivated successfully.
May 13 08:30:32.852534 systemd[1]: var-lib-kubelet-pods-35c058ea\x2d28d3\x2d4987\x2dab2a\x2d49510a55db2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nl2n.mount: Deactivated successfully.
May 13 08:30:32.852792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58-shm.mount: Deactivated successfully.
May 13 08:30:32.853001 systemd[1]: var-lib-kubelet-pods-d3b66d18\x2d0e9b\x2d4cff\x2d85bc\x2d782d516c6b42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4xsv8.mount: Deactivated successfully.
May 13 08:30:32.853186 systemd[1]: var-lib-kubelet-pods-d3b66d18\x2d0e9b\x2d4cff\x2d85bc\x2d782d516c6b42-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 08:30:32.853389 systemd[1]: var-lib-kubelet-pods-d3b66d18\x2d0e9b\x2d4cff\x2d85bc\x2d782d516c6b42-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 08:30:32.904104 env[1257]: time="2025-05-13T08:30:32.904008724Z" level=info msg="RemoveContainer for \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\" returns successfully"
May 13 08:30:32.904848 kubelet[2163]: I0513 08:30:32.904776 2163 scope.go:117] "RemoveContainer" containerID="38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1"
May 13 08:30:32.909094 env[1257]: time="2025-05-13T08:30:32.909010846Z" level=info msg="RemoveContainer for \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\""
May 13 08:30:32.919736 env[1257]: time="2025-05-13T08:30:32.919618499Z" level=info msg="RemoveContainer for \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\" returns successfully"
May 13 08:30:32.920283 kubelet[2163]: I0513 08:30:32.920240 2163 scope.go:117] "RemoveContainer" containerID="5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351"
May 13 08:30:32.922865 env[1257]: time="2025-05-13T08:30:32.922808331Z" level=info msg="RemoveContainer for \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\""
May 13 08:30:32.930197 env[1257]: time="2025-05-13T08:30:32.930126095Z" level=info msg="RemoveContainer for \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\" returns successfully"
May 13 08:30:32.930777 kubelet[2163]: I0513 08:30:32.930703 2163 scope.go:117] "RemoveContainer" containerID="cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913"
May 13 08:30:32.931711 env[1257]: time="2025-05-13T08:30:32.931482167Z" level=error msg="ContainerStatus for \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\": not found"
May 13 08:30:32.932275 kubelet[2163]: E0513 08:30:32.932192 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\": not found" containerID="cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913"
May 13 08:30:32.932275 kubelet[2163]: I0513 08:30:32.932227 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913"} err="failed to get container status \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf1565771401c3c87dfbf7d43e510de795a80152e5fd79c5302e5eb38e0c3913\": not found"
May 13 08:30:32.932275 kubelet[2163]: I0513 08:30:32.932251 2163 scope.go:117] "RemoveContainer" containerID="4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3"
May 13 08:30:32.932744 env[1257]: time="2025-05-13T08:30:32.932553016Z" level=error msg="ContainerStatus for \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\": not found"
May 13 08:30:32.932869 kubelet[2163]: E0513 08:30:32.932788 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\": not found" containerID="4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3"
May 13 08:30:32.932869 kubelet[2163]: I0513 08:30:32.932811 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3"} err="failed to get container status \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ba2a2f1d179a3001009f09608a11e1bc11b95e91e108d5f835711bf549517d3\": not found"
May 13 08:30:32.932869 kubelet[2163]: I0513 08:30:32.932828 2163 scope.go:117] "RemoveContainer" containerID="4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388"
May 13 08:30:32.933311 kubelet[2163]: E0513 08:30:32.933172 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\": not found" containerID="4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388"
May 13 08:30:32.933311 kubelet[2163]: I0513 08:30:32.933202 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388"} err="failed to get container status \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\": not found"
May 13 08:30:32.933311 kubelet[2163]: I0513 08:30:32.933217 2163 scope.go:117] "RemoveContainer" containerID="38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1"
May 13 08:30:32.933719 env[1257]: time="2025-05-13T08:30:32.933025853Z" level=error msg="ContainerStatus for \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c908173ab4e468b0eb4b6426c7f1e737e22d50b54f2c6131ede3dbafadf7388\": not found"
May 13 08:30:32.933719 env[1257]: time="2025-05-13T08:30:32.933375028Z" level=error msg="ContainerStatus for \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\": not found"
May 13 08:30:32.933895 kubelet[2163]: E0513 08:30:32.933502 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\": not found" containerID="38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1"
May 13 08:30:32.933895 kubelet[2163]: I0513 08:30:32.933592 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1"} err="failed to get container status \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"38201f976dc396afc2119e6a18e61ec646382aa136b907a38eb6f5530f1ed5a1\": not found"
May 13 08:30:32.933895 kubelet[2163]: I0513 08:30:32.933609 2163 scope.go:117] "RemoveContainer" containerID="5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351"
May 13 08:30:32.934191 env[1257]: time="2025-05-13T08:30:32.933804563Z" level=error msg="ContainerStatus for \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\": not found"
May 13 08:30:32.934341 kubelet[2163]: E0513 08:30:32.933917 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\": not found" containerID="5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351"
May 13 08:30:32.934341 kubelet[2163]: I0513 08:30:32.933949 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351"} err="failed to get container status \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f83bcc8fd7a57944ee1465df2e20a75370e693a0fdf3a51773a90045583f351\": not found"
May 13 08:30:33.646928 kubelet[2163]: I0513 08:30:33.646777 2163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c058ea-28d3-4987-ab2a-49510a55db2c" path="/var/lib/kubelet/pods/35c058ea-28d3-4987-ab2a-49510a55db2c/volumes"
May 13 08:30:33.651819 kubelet[2163]: I0513 08:30:33.649961 2163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" path="/var/lib/kubelet/pods/d3b66d18-0e9b-4cff-85bc-782d516c6b42/volumes"
May 13 08:30:33.761811 sshd[3724]: pam_unix(sshd:session): session closed for user core
May 13 08:30:33.771121 systemd[1]: Started sshd@23-172.24.4.152:22-172.24.4.1:47724.service.
May 13 08:30:33.783525 systemd[1]: sshd@22-172.24.4.152:22-172.24.4.1:42244.service: Deactivated successfully.
May 13 08:30:33.787584 systemd[1]: session-23.scope: Deactivated successfully.
May 13 08:30:33.788145 systemd-logind[1242]: Session 23 logged out. Waiting for processes to exit.
May 13 08:30:33.806829 systemd-logind[1242]: Removed session 23.
May 13 08:30:35.057626 sshd[3894]: Accepted publickey for core from 172.24.4.1 port 47724 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:30:35.062737 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:30:35.078781 systemd-logind[1242]: New session 24 of user core.
May 13 08:30:35.082878 systemd[1]: Started session-24.scope.
May 13 08:30:35.694977 env[1257]: time="2025-05-13T08:30:35.694810776Z" level=info msg="StopPodSandbox for \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\""
May 13 08:30:35.695625 env[1257]: time="2025-05-13T08:30:35.695163057Z" level=info msg="TearDown network for sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" successfully"
May 13 08:30:35.695625 env[1257]: time="2025-05-13T08:30:35.695257213Z" level=info msg="StopPodSandbox for \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" returns successfully"
May 13 08:30:35.697472 env[1257]: time="2025-05-13T08:30:35.696526113Z" level=info msg="RemovePodSandbox for \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\""
May 13 08:30:35.697472 env[1257]: time="2025-05-13T08:30:35.696597858Z" level=info msg="Forcibly stopping sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\""
May 13 08:30:35.697472 env[1257]: time="2025-05-13T08:30:35.696781572Z" level=info msg="TearDown network for sandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" successfully"
May 13 08:30:35.705240
env[1257]: time="2025-05-13T08:30:35.705182228Z" level=info msg="RemovePodSandbox \"02301a33c2cdf4239ee5b033fa1e401c471e69abd24c2595e7eca385e60feb58\" returns successfully" May 13 08:30:35.708704 env[1257]: time="2025-05-13T08:30:35.708606640Z" level=info msg="StopPodSandbox for \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\"" May 13 08:30:35.709157 env[1257]: time="2025-05-13T08:30:35.709067755Z" level=info msg="TearDown network for sandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" successfully" May 13 08:30:35.709293 env[1257]: time="2025-05-13T08:30:35.709264374Z" level=info msg="StopPodSandbox for \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" returns successfully" May 13 08:30:35.709980 env[1257]: time="2025-05-13T08:30:35.709909413Z" level=info msg="RemovePodSandbox for \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\"" May 13 08:30:35.710196 env[1257]: time="2025-05-13T08:30:35.710126531Z" level=info msg="Forcibly stopping sandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\"" May 13 08:30:35.710503 env[1257]: time="2025-05-13T08:30:35.710425051Z" level=info msg="TearDown network for sandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" successfully" May 13 08:30:35.718974 env[1257]: time="2025-05-13T08:30:35.718382604Z" level=info msg="RemovePodSandbox \"ceaf2c9f354c23208229150437c2bf451e43e7112fe51bec24c658245b5d224c\" returns successfully" May 13 08:30:35.866365 kubelet[2163]: E0513 08:30:35.866288 2163 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 08:30:36.686362 kubelet[2163]: I0513 08:30:36.686275 2163 topology_manager.go:215] "Topology Admit Handler" podUID="f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" podNamespace="kube-system" podName="cilium-f95gm" May 13 08:30:36.686793 
kubelet[2163]: E0513 08:30:36.686774 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" containerName="mount-bpf-fs" May 13 08:30:36.686906 kubelet[2163]: E0513 08:30:36.686886 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" containerName="clean-cilium-state" May 13 08:30:36.686997 kubelet[2163]: E0513 08:30:36.686980 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35c058ea-28d3-4987-ab2a-49510a55db2c" containerName="cilium-operator" May 13 08:30:36.687111 kubelet[2163]: E0513 08:30:36.687098 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" containerName="apply-sysctl-overwrites" May 13 08:30:36.687200 kubelet[2163]: E0513 08:30:36.687188 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" containerName="cilium-agent" May 13 08:30:36.687281 kubelet[2163]: E0513 08:30:36.687269 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" containerName="mount-cgroup" May 13 08:30:36.687487 kubelet[2163]: I0513 08:30:36.687460 2163 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c058ea-28d3-4987-ab2a-49510a55db2c" containerName="cilium-operator" May 13 08:30:36.687581 kubelet[2163]: I0513 08:30:36.687569 2163 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b66d18-0e9b-4cff-85bc-782d516c6b42" containerName="cilium-agent" May 13 08:30:36.710162 kubelet[2163]: I0513 08:30:36.710025 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-run\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.810885 kubelet[2163]: I0513 
08:30:36.810844 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-ipsec-secrets\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.811451 kubelet[2163]: I0513 08:30:36.811383 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-etc-cni-netd\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.811612 kubelet[2163]: I0513 08:30:36.811584 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-xtables-lock\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.811791 kubelet[2163]: I0513 08:30:36.811773 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-kernel\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.811937 kubelet[2163]: I0513 08:30:36.811904 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvk4\" (UniqueName: \"kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-kube-api-access-tmvk4\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.812096 kubelet[2163]: I0513 08:30:36.812080 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hostproc\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.812314 kubelet[2163]: I0513 08:30:36.812295 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-cgroup\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.812430 kubelet[2163]: I0513 08:30:36.812414 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-clustermesh-secrets\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.812573 kubelet[2163]: I0513 08:30:36.812556 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hubble-tls\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.812747 kubelet[2163]: I0513 08:30:36.812712 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-config-path\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.812936 kubelet[2163]: I0513 08:30:36.812911 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-lib-modules\") pod 
\"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.813159 kubelet[2163]: I0513 08:30:36.813136 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-bpf-maps\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.813313 kubelet[2163]: I0513 08:30:36.813297 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cni-path\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.813434 kubelet[2163]: I0513 08:30:36.813418 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-net\") pod \"cilium-f95gm\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " pod="kube-system/cilium-f95gm" May 13 08:30:36.818002 sshd[3894]: pam_unix(sshd:session): session closed for user core May 13 08:30:36.824090 systemd[1]: Started sshd@24-172.24.4.152:22-172.24.4.1:47728.service. May 13 08:30:36.826783 systemd[1]: sshd@23-172.24.4.152:22-172.24.4.1:47724.service: Deactivated successfully. May 13 08:30:36.838681 systemd[1]: session-24.scope: Deactivated successfully. May 13 08:30:36.840825 systemd-logind[1242]: Session 24 logged out. Waiting for processes to exit. May 13 08:30:36.845725 systemd-logind[1242]: Removed session 24. 
May 13 08:30:36.993772 env[1257]: time="2025-05-13T08:30:36.993230118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f95gm,Uid:f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d,Namespace:kube-system,Attempt:0,}" May 13 08:30:37.023797 env[1257]: time="2025-05-13T08:30:37.023513855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:30:37.023797 env[1257]: time="2025-05-13T08:30:37.023570051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:30:37.023797 env[1257]: time="2025-05-13T08:30:37.023585399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:30:37.024379 env[1257]: time="2025-05-13T08:30:37.024328052Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489 pid=3924 runtime=io.containerd.runc.v2 May 13 08:30:37.075762 env[1257]: time="2025-05-13T08:30:37.075704960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f95gm,Uid:f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\"" May 13 08:30:37.085628 env[1257]: time="2025-05-13T08:30:37.085026893Z" level=info msg="CreateContainer within sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 08:30:37.106933 env[1257]: time="2025-05-13T08:30:37.104149757Z" level=info msg="CreateContainer within sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655\"" May 13 08:30:37.106933 env[1257]: time="2025-05-13T08:30:37.105465895Z" level=info msg="StartContainer for \"612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655\"" May 13 08:30:37.177678 env[1257]: time="2025-05-13T08:30:37.177583973Z" level=info msg="StartContainer for \"612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655\" returns successfully" May 13 08:30:37.220096 env[1257]: time="2025-05-13T08:30:37.220026242Z" level=info msg="shim disconnected" id=612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655 May 13 08:30:37.220482 env[1257]: time="2025-05-13T08:30:37.220458002Z" level=warning msg="cleaning up after shim disconnected" id=612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655 namespace=k8s.io May 13 08:30:37.220607 env[1257]: time="2025-05-13T08:30:37.220586753Z" level=info msg="cleaning up dead shim" May 13 08:30:37.232167 env[1257]: time="2025-05-13T08:30:37.232072896Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n" May 13 08:30:37.767518 env[1257]: time="2025-05-13T08:30:37.767367576Z" level=info msg="CreateContainer within sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 08:30:37.798991 env[1257]: time="2025-05-13T08:30:37.798886729Z" level=info msg="CreateContainer within sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e\"" May 13 08:30:37.801812 env[1257]: time="2025-05-13T08:30:37.800968303Z" level=info msg="StartContainer for \"0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e\"" May 13 08:30:37.929181 env[1257]: 
time="2025-05-13T08:30:37.928990858Z" level=info msg="StartContainer for \"0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e\" returns successfully" May 13 08:30:37.960971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e-rootfs.mount: Deactivated successfully. May 13 08:30:37.970080 env[1257]: time="2025-05-13T08:30:37.970003847Z" level=info msg="shim disconnected" id=0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e May 13 08:30:37.970513 env[1257]: time="2025-05-13T08:30:37.970491411Z" level=warning msg="cleaning up after shim disconnected" id=0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e namespace=k8s.io May 13 08:30:37.970646 env[1257]: time="2025-05-13T08:30:37.970627777Z" level=info msg="cleaning up dead shim" May 13 08:30:37.988426 env[1257]: time="2025-05-13T08:30:37.988129281Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n" May 13 08:30:37.999916 sshd[3909]: Accepted publickey for core from 172.24.4.1 port 47728 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:30:38.000515 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:30:38.007049 systemd[1]: Started session-25.scope. May 13 08:30:38.007798 systemd-logind[1242]: New session 25 of user core. 
May 13 08:30:38.767307 env[1257]: time="2025-05-13T08:30:38.767084579Z" level=info msg="StopPodSandbox for \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\"" May 13 08:30:38.767307 env[1257]: time="2025-05-13T08:30:38.767228609Z" level=info msg="Container to stop \"612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:38.767307 env[1257]: time="2025-05-13T08:30:38.767270087Z" level=info msg="Container to stop \"0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:38.775970 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489-shm.mount: Deactivated successfully. May 13 08:30:38.845804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489-rootfs.mount: Deactivated successfully. May 13 08:30:38.873301 systemd[1]: Started sshd@25-172.24.4.152:22-172.24.4.1:47732.service. May 13 08:30:38.886370 sshd[3909]: pam_unix(sshd:session): session closed for user core May 13 08:30:38.889416 systemd[1]: sshd@24-172.24.4.152:22-172.24.4.1:47728.service: Deactivated successfully. May 13 08:30:38.891450 systemd[1]: session-25.scope: Deactivated successfully. May 13 08:30:38.891498 systemd-logind[1242]: Session 25 logged out. Waiting for processes to exit. May 13 08:30:38.893186 systemd-logind[1242]: Removed session 25. 
May 13 08:30:38.914677 env[1257]: time="2025-05-13T08:30:38.914474961Z" level=info msg="shim disconnected" id=024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489 May 13 08:30:38.914677 env[1257]: time="2025-05-13T08:30:38.914641183Z" level=warning msg="cleaning up after shim disconnected" id=024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489 namespace=k8s.io May 13 08:30:38.914677 env[1257]: time="2025-05-13T08:30:38.914666751Z" level=info msg="cleaning up dead shim" May 13 08:30:38.923976 env[1257]: time="2025-05-13T08:30:38.923905869Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4114 runtime=io.containerd.runc.v2\n" May 13 08:30:38.924339 env[1257]: time="2025-05-13T08:30:38.924288256Z" level=info msg="TearDown network for sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" successfully" May 13 08:30:38.924339 env[1257]: time="2025-05-13T08:30:38.924328381Z" level=info msg="StopPodSandbox for \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" returns successfully" May 13 08:30:39.035706 kubelet[2163]: I0513 08:30:39.035365 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-run\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037275 kubelet[2163]: I0513 08:30:39.036280 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-xtables-lock\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037275 kubelet[2163]: I0513 08:30:39.036415 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvk4\" 
(UniqueName: \"kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-kube-api-access-tmvk4\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037275 kubelet[2163]: I0513 08:30:39.036440 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-bpf-maps\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037275 kubelet[2163]: I0513 08:30:39.036482 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-net\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037275 kubelet[2163]: I0513 08:30:39.036510 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-ipsec-secrets\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037275 kubelet[2163]: I0513 08:30:39.036532 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-config-path\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037980 kubelet[2163]: I0513 08:30:39.036581 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cni-path\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037980 kubelet[2163]: I0513 
08:30:39.036619 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-clustermesh-secrets\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037980 kubelet[2163]: I0513 08:30:39.036638 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-lib-modules\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037980 kubelet[2163]: I0513 08:30:39.036685 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-etc-cni-netd\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037980 kubelet[2163]: I0513 08:30:39.036704 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hostproc\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.037980 kubelet[2163]: I0513 08:30:39.036734 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hubble-tls\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.038455 kubelet[2163]: I0513 08:30:39.036788 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-kernel\") pod 
\"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.038455 kubelet[2163]: I0513 08:30:39.036808 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-cgroup\") pod \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\" (UID: \"f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d\") " May 13 08:30:39.038455 kubelet[2163]: I0513 08:30:39.036957 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.038455 kubelet[2163]: I0513 08:30:39.037014 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.038455 kubelet[2163]: I0513 08:30:39.037033 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.040386 kubelet[2163]: I0513 08:30:39.039562 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.040386 kubelet[2163]: I0513 08:30:39.039725 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.040946 kubelet[2163]: I0513 08:30:39.040886 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.041270 kubelet[2163]: I0513 08:30:39.041228 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hostproc" (OuterVolumeSpecName: "hostproc") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.044830 systemd[1]: var-lib-kubelet-pods-f2a621c5\x2dad95\x2d45aa\x2d8d14\x2d6ccfb68c4e6d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 08:30:39.052615 systemd[1]: var-lib-kubelet-pods-f2a621c5\x2dad95\x2d45aa\x2d8d14\x2d6ccfb68c4e6d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 13 08:30:39.055844 kubelet[2163]: I0513 08:30:39.055778 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:39.056305 kubelet[2163]: I0513 08:30:39.056202 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:39.056948 kubelet[2163]: I0513 08:30:39.056899 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 08:30:39.057207 kubelet[2163]: I0513 08:30:39.057168 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cni-path" (OuterVolumeSpecName: "cni-path") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 08:30:39.059915 kubelet[2163]: I0513 08:30:39.059829 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 08:30:39.062172 systemd[1]: var-lib-kubelet-pods-f2a621c5\x2dad95\x2d45aa\x2d8d14\x2d6ccfb68c4e6d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 08:30:39.065437 kubelet[2163]: I0513 08:30:39.065354 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 08:30:39.068605 systemd[1]: var-lib-kubelet-pods-f2a621c5\x2dad95\x2d45aa\x2d8d14\x2d6ccfb68c4e6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtmvk4.mount: Deactivated successfully.
May 13 08:30:39.070476 kubelet[2163]: I0513 08:30:39.070368 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 08:30:39.070644 kubelet[2163]: I0513 08:30:39.070619 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-kube-api-access-tmvk4" (OuterVolumeSpecName: "kube-api-access-tmvk4") pod "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" (UID: "f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d"). InnerVolumeSpecName "kube-api-access-tmvk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 08:30:39.138070 kubelet[2163]: I0513 08:30:39.137991 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-run\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.138533 kubelet[2163]: I0513 08:30:39.138469 2163 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-xtables-lock\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.138976 kubelet[2163]: I0513 08:30:39.138843 2163 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tmvk4\" (UniqueName: \"kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-kube-api-access-tmvk4\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.139376 kubelet[2163]: I0513 08:30:39.139312 2163 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-bpf-maps\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.139698 kubelet[2163]: I0513 08:30:39.139603 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-net\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.140043 kubelet[2163]: I0513 08:30:39.139980 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-ipsec-secrets\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.140303 kubelet[2163]: I0513 08:30:39.140239 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-config-path\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.140800 kubelet[2163]: I0513 08:30:39.140593 2163 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-lib-modules\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.141279 kubelet[2163]: I0513 08:30:39.141214 2163 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cni-path\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.141609 kubelet[2163]: I0513 08:30:39.141543 2163 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-clustermesh-secrets\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.141922 kubelet[2163]: I0513 08:30:39.141841 2163 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-etc-cni-netd\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.142205 kubelet[2163]: I0513 08:30:39.142140 2163 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hostproc\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.142455 kubelet[2163]: I0513 08:30:39.142359 2163 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-hubble-tls\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.142455 kubelet[2163]: I0513 08:30:39.142436 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.142757 kubelet[2163]: I0513 08:30:39.142468 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d-cilium-cgroup\") on node \"ci-3510-3-7-n-5ac23fdacd.novalocal\" DevicePath \"\""
May 13 08:30:39.777168 kubelet[2163]: I0513 08:30:39.776927 2163 scope.go:117] "RemoveContainer" containerID="0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e"
May 13 08:30:39.791561 env[1257]: time="2025-05-13T08:30:39.791430963Z" level=info msg="RemoveContainer for \"0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e\""
May 13 08:30:39.803552 env[1257]: time="2025-05-13T08:30:39.803470533Z" level=info msg="RemoveContainer for \"0e1f7f206123bf63b2d1865b46754715797940c25c29acb21f8e93aba051715e\" returns successfully"
May 13 08:30:39.804426 kubelet[2163]: I0513 08:30:39.804378 2163 scope.go:117] "RemoveContainer" containerID="612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655"
May 13 08:30:39.815081 env[1257]: time="2025-05-13T08:30:39.813963523Z" level=info msg="RemoveContainer for \"612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655\""
May 13 08:30:39.823366 env[1257]: time="2025-05-13T08:30:39.823272413Z" level=info msg="RemoveContainer for \"612bbd6c7d650f4b31c363b7f8f813b26c3d3860446433c480ac9b69c1580655\" returns successfully"
May 13 08:30:39.903534 kubelet[2163]: I0513 08:30:39.903484 2163 topology_manager.go:215] "Topology Admit Handler" podUID="72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e" podNamespace="kube-system" podName="cilium-8frnc"
May 13 08:30:39.903987 kubelet[2163]: E0513 08:30:39.903968 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" containerName="mount-cgroup"
May 13 08:30:39.904077 kubelet[2163]: E0513 08:30:39.904065 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" containerName="apply-sysctl-overwrites"
May 13 08:30:39.904201 kubelet[2163]: I0513 08:30:39.904186 2163 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" containerName="apply-sysctl-overwrites"
May 13 08:30:40.057871 kubelet[2163]: I0513 08:30:40.056863 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-bpf-maps\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.057871 kubelet[2163]: I0513 08:30:40.057239 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-etc-cni-netd\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.057871 kubelet[2163]: I0513 08:30:40.057382 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-xtables-lock\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059218 kubelet[2163]: I0513 08:30:40.058033 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-hubble-tls\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059218 kubelet[2163]: I0513 08:30:40.058146 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5nj\" (UniqueName: \"kubernetes.io/projected/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-kube-api-access-2q5nj\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059218 kubelet[2163]: I0513 08:30:40.058328 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-lib-modules\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059218 kubelet[2163]: I0513 08:30:40.058418 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-host-proc-sys-kernel\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059218 kubelet[2163]: I0513 08:30:40.058519 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-cilium-run\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059218 kubelet[2163]: I0513 08:30:40.058715 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-cilium-config-path\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059839 kubelet[2163]: I0513 08:30:40.058832 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-cilium-ipsec-secrets\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059839 kubelet[2163]: I0513 08:30:40.059021 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-cilium-cgroup\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059839 kubelet[2163]: I0513 08:30:40.059175 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-cni-path\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059839 kubelet[2163]: I0513 08:30:40.059291 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-clustermesh-secrets\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059839 kubelet[2163]: I0513 08:30:40.059500 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-host-proc-sys-net\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.059839 kubelet[2163]: I0513 08:30:40.059586 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e-hostproc\") pod \"cilium-8frnc\" (UID: \"72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e\") " pod="kube-system/cilium-8frnc"
May 13 08:30:40.157168 sshd[4111]: Accepted publickey for core from 172.24.4.1 port 47732 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk
May 13 08:30:40.160321 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 08:30:40.231046 systemd[1]: Started session-26.scope.
May 13 08:30:40.231980 systemd-logind[1242]: New session 26 of user core.
May 13 08:30:40.511224 env[1257]: time="2025-05-13T08:30:40.510215782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8frnc,Uid:72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e,Namespace:kube-system,Attempt:0,}"
May 13 08:30:40.580027 env[1257]: time="2025-05-13T08:30:40.579931637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 08:30:40.580228 env[1257]: time="2025-05-13T08:30:40.580041503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 08:30:40.580228 env[1257]: time="2025-05-13T08:30:40.580078071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 08:30:40.580350 env[1257]: time="2025-05-13T08:30:40.580257758Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f pid=4148 runtime=io.containerd.runc.v2
May 13 08:30:40.752306 env[1257]: time="2025-05-13T08:30:40.752245850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8frnc,Uid:72dfb06d-897e-4f85-88c5-7bbcbd7c3e7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\""
May 13 08:30:40.758799 env[1257]: time="2025-05-13T08:30:40.758092606Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 08:30:40.780747 env[1257]: time="2025-05-13T08:30:40.780584250Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e86d00158fd826a83be40896fe59cfc1f67cb4a6c0d548aa8a481d55b5dd973\""
May 13 08:30:40.782130 env[1257]: time="2025-05-13T08:30:40.782090735Z" level=info msg="StartContainer for \"7e86d00158fd826a83be40896fe59cfc1f67cb4a6c0d548aa8a481d55b5dd973\""
May 13 08:30:40.868015 kubelet[2163]: E0513 08:30:40.867945 2163 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 08:30:40.879119 env[1257]: time="2025-05-13T08:30:40.878024303Z" level=info msg="StartContainer for \"7e86d00158fd826a83be40896fe59cfc1f67cb4a6c0d548aa8a481d55b5dd973\" returns successfully"
May 13 08:30:40.921491 env[1257]: time="2025-05-13T08:30:40.921417368Z" level=info msg="shim disconnected" id=7e86d00158fd826a83be40896fe59cfc1f67cb4a6c0d548aa8a481d55b5dd973
May 13 08:30:40.921491 env[1257]: time="2025-05-13T08:30:40.921476890Z" level=warning msg="cleaning up after shim disconnected" id=7e86d00158fd826a83be40896fe59cfc1f67cb4a6c0d548aa8a481d55b5dd973 namespace=k8s.io
May 13 08:30:40.921491 env[1257]: time="2025-05-13T08:30:40.921490095Z" level=info msg="cleaning up dead shim"
May 13 08:30:40.931324 env[1257]: time="2025-05-13T08:30:40.931272362Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4243 runtime=io.containerd.runc.v2\n"
May 13 08:30:41.645367 kubelet[2163]: I0513 08:30:41.645251 2163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d" path="/var/lib/kubelet/pods/f2a621c5-ad95-45aa-8d14-6ccfb68c4e6d/volumes"
May 13 08:30:41.800715 env[1257]: time="2025-05-13T08:30:41.800547390Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 08:30:41.872896 env[1257]: time="2025-05-13T08:30:41.872707641Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3a96fd6e015a4974b82f464421600b90eada12cb8e230adb039c2c542e9a06ca\""
May 13 08:30:41.879734 env[1257]: time="2025-05-13T08:30:41.874685981Z" level=info msg="StartContainer for \"3a96fd6e015a4974b82f464421600b90eada12cb8e230adb039c2c542e9a06ca\""
May 13 08:30:41.952455 env[1257]: time="2025-05-13T08:30:41.952158867Z" level=info msg="StartContainer for \"3a96fd6e015a4974b82f464421600b90eada12cb8e230adb039c2c542e9a06ca\" returns successfully"
May 13 08:30:41.985798 env[1257]: time="2025-05-13T08:30:41.985746192Z" level=info msg="shim disconnected" id=3a96fd6e015a4974b82f464421600b90eada12cb8e230adb039c2c542e9a06ca
May 13 08:30:41.986159 env[1257]: time="2025-05-13T08:30:41.986125252Z" level=warning msg="cleaning up after shim disconnected" id=3a96fd6e015a4974b82f464421600b90eada12cb8e230adb039c2c542e9a06ca namespace=k8s.io
May 13 08:30:41.986275 env[1257]: time="2025-05-13T08:30:41.986257691Z" level=info msg="cleaning up dead shim"
May 13 08:30:41.996779 env[1257]: time="2025-05-13T08:30:41.996738469Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n"
May 13 08:30:42.180932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a96fd6e015a4974b82f464421600b90eada12cb8e230adb039c2c542e9a06ca-rootfs.mount: Deactivated successfully.
May 13 08:30:42.807964 env[1257]: time="2025-05-13T08:30:42.807286355Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 08:30:42.876787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434177190.mount: Deactivated successfully.
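The kubelet entries interleaved above (`I0513 08:30:39.039562 2163 operation_generator.go:887] …`) carry the standard klog header: a level letter (I/W/E/F), month and day, wall-clock time, thread id, and source `file:line`, followed by the message. A minimal parser sketch for that header (the regex and helper name are ours):

```python
import re

# klog header layout: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r'^(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<tid>\d+)\s+(?P<file>[^:]+):(?P<line>\d+)\]\s(?P<msg>.*)$'
)

def parse_klog(line: str) -> dict:
    """Split one klog-formatted line into its header fields plus message."""
    m = KLOG_RE.match(line)
    if m is None:
        raise ValueError("not a klog-formatted line")
    return m.groupdict()
```

Feeding it the payload after the `kubelet[2163]: ` prefix recovers the severity, source location, and structured message of each entry.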
May 13 08:30:42.897553 env[1257]: time="2025-05-13T08:30:42.897504857Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc34bb7ac771430605ea4fcb1ada43321ccd3e435df27a82ef63258b9ea3a115\""
May 13 08:30:42.900685 env[1257]: time="2025-05-13T08:30:42.900627303Z" level=info msg="StartContainer for \"dc34bb7ac771430605ea4fcb1ada43321ccd3e435df27a82ef63258b9ea3a115\""
May 13 08:30:43.035170 env[1257]: time="2025-05-13T08:30:43.035124273Z" level=info msg="StartContainer for \"dc34bb7ac771430605ea4fcb1ada43321ccd3e435df27a82ef63258b9ea3a115\" returns successfully"
May 13 08:30:43.065576 env[1257]: time="2025-05-13T08:30:43.065240759Z" level=info msg="shim disconnected" id=dc34bb7ac771430605ea4fcb1ada43321ccd3e435df27a82ef63258b9ea3a115
May 13 08:30:43.066145 env[1257]: time="2025-05-13T08:30:43.066122593Z" level=warning msg="cleaning up after shim disconnected" id=dc34bb7ac771430605ea4fcb1ada43321ccd3e435df27a82ef63258b9ea3a115 namespace=k8s.io
May 13 08:30:43.066267 env[1257]: time="2025-05-13T08:30:43.066248649Z" level=info msg="cleaning up dead shim"
May 13 08:30:43.081758 env[1257]: time="2025-05-13T08:30:43.081722402Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4363 runtime=io.containerd.runc.v2\n"
May 13 08:30:43.180996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc34bb7ac771430605ea4fcb1ada43321ccd3e435df27a82ef63258b9ea3a115-rootfs.mount: Deactivated successfully.
May 13 08:30:43.639911 kubelet[2163]: E0513 08:30:43.639180 2163 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-4gx7v" podUID="28ddb01d-d467-41c6-ad04-44b89d09f8c7"
May 13 08:30:43.787331 kubelet[2163]: I0513 08:30:43.787148 2163 setters.go:580] "Node became not ready" node="ci-3510-3-7-n-5ac23fdacd.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T08:30:43Z","lastTransitionTime":"2025-05-13T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 08:30:43.837492 env[1257]: time="2025-05-13T08:30:43.837315219Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 08:30:43.885583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139477377.mount: Deactivated successfully.
May 13 08:30:43.898768 env[1257]: time="2025-05-13T08:30:43.898575769Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"903454009145f8af2b10c8a8a78fa4b2e2166731bdf0e258207e6c4981aef1e8\""
May 13 08:30:43.900176 env[1257]: time="2025-05-13T08:30:43.900135665Z" level=info msg="StartContainer for \"903454009145f8af2b10c8a8a78fa4b2e2166731bdf0e258207e6c4981aef1e8\""
May 13 08:30:44.033799 env[1257]: time="2025-05-13T08:30:44.033718603Z" level=info msg="StartContainer for \"903454009145f8af2b10c8a8a78fa4b2e2166731bdf0e258207e6c4981aef1e8\" returns successfully"
May 13 08:30:44.074017 env[1257]: time="2025-05-13T08:30:44.073960952Z" level=info msg="shim disconnected" id=903454009145f8af2b10c8a8a78fa4b2e2166731bdf0e258207e6c4981aef1e8
May 13 08:30:44.074402 env[1257]: time="2025-05-13T08:30:44.074376893Z" level=warning msg="cleaning up after shim disconnected" id=903454009145f8af2b10c8a8a78fa4b2e2166731bdf0e258207e6c4981aef1e8 namespace=k8s.io
May 13 08:30:44.074488 env[1257]: time="2025-05-13T08:30:44.074471690Z" level=info msg="cleaning up dead shim"
May 13 08:30:44.101809 env[1257]: time="2025-05-13T08:30:44.101748711Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4418 runtime=io.containerd.runc.v2\n"
May 13 08:30:44.177982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-903454009145f8af2b10c8a8a78fa4b2e2166731bdf0e258207e6c4981aef1e8-rootfs.mount: Deactivated successfully.
May 13 08:30:44.873313 env[1257]: time="2025-05-13T08:30:44.869809665Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 08:30:44.935127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707085469.mount: Deactivated successfully.
May 13 08:30:44.945760 env[1257]: time="2025-05-13T08:30:44.945713311Z" level=info msg="CreateContainer within sandbox \"ec58da781dfe19059e326d814439458847ea291747db3d85bf44cadc0a71d47f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12d0521128eb48c867945cf1ac9648eec379f0802039fe6a2914b7c7554a1dd5\""
May 13 08:30:44.951348 env[1257]: time="2025-05-13T08:30:44.951162662Z" level=info msg="StartContainer for \"12d0521128eb48c867945cf1ac9648eec379f0802039fe6a2914b7c7554a1dd5\""
May 13 08:30:45.056686 env[1257]: time="2025-05-13T08:30:45.055301217Z" level=info msg="StartContainer for \"12d0521128eb48c867945cf1ac9648eec379f0802039fe6a2914b7c7554a1dd5\" returns successfully"
May 13 08:30:45.557752 kernel: cryptd: max_cpu_qlen set to 1000
May 13 08:30:45.621902 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 13 08:30:45.641738 kubelet[2163]: E0513 08:30:45.639052 2163 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-4gx7v" podUID="28ddb01d-d467-41c6-ad04-44b89d09f8c7"
May 13 08:30:45.880854 kubelet[2163]: I0513 08:30:45.880760 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8frnc" podStartSLOduration=6.880716559 podStartE2EDuration="6.880716559s" podCreationTimestamp="2025-05-13 08:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:30:45.878640305 +0000 UTC m=+370.455458509" watchObservedRunningTime="2025-05-13 08:30:45.880716559 +0000 UTC m=+370.457534743"
May 13 08:30:47.125913 systemd[1]: run-containerd-runc-k8s.io-12d0521128eb48c867945cf1ac9648eec379f0802039fe6a2914b7c7554a1dd5-runc.yH1xmH.mount: Deactivated successfully.
May 13 08:30:49.076705 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 08:30:49.077265 systemd-networkd[1035]: lxc_health: Link UP
May 13 08:30:49.078247 systemd-networkd[1035]: lxc_health: Gained carrier
May 13 08:30:50.847687 systemd-networkd[1035]: lxc_health: Gained IPv6LL
May 13 08:30:51.702923 systemd[1]: run-containerd-runc-k8s.io-12d0521128eb48c867945cf1ac9648eec379f0802039fe6a2914b7c7554a1dd5-runc.4aLXE5.mount: Deactivated successfully.
May 13 08:30:56.203792 systemd[1]: run-containerd-runc-k8s.io-12d0521128eb48c867945cf1ac9648eec379f0802039fe6a2914b7c7554a1dd5-runc.SvnC7l.mount: Deactivated successfully.
May 13 08:30:56.603082 sshd[4111]: pam_unix(sshd:session): session closed for user core
May 13 08:30:56.610413 systemd[1]: sshd@25-172.24.4.152:22-172.24.4.1:47732.service: Deactivated successfully.
May 13 08:30:56.612510 systemd[1]: session-26.scope: Deactivated successfully.
May 13 08:30:56.614470 systemd-logind[1242]: Session 26 logged out. Waiting for processes to exit.
May 13 08:30:56.623736 systemd-logind[1242]: Removed session 26.
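The pod_startup_latency_tracker entry reports podStartSLOduration=6.880716559 for cilium-8frnc. Since firstStartedPulling/lastFinishedPulling are the zero time (no image pull counted here), that figure appears to be simply the wall-clock gap between podCreationTimestamp and watchObservedRunningTime. A quick sanity check with the values copied from the entry, fractional seconds truncated to microseconds for datetime:

```python
from datetime import datetime, timezone

# podCreationTimestamp="2025-05-13 08:30:39 +0000 UTC"
creation = datetime(2025, 5, 13, 8, 30, 39, tzinfo=timezone.utc)
# watchObservedRunningTime="2025-05-13 08:30:45.880716559 +0000 UTC"
watch_observed = datetime(2025, 5, 13, 8, 30, 45, 880716, tzinfo=timezone.utc)

# Matches the reported podStartSLOduration=6.880716559 to within rounding.
slo_duration = (watch_observed - creation).total_seconds()
```

The observedRunningTime (08:30:45.878640305) is ~2 ms earlier, which is consistent with the watch notification arriving just after the status update.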
May 13 08:31:35.728572 env[1257]: time="2025-05-13T08:31:35.728024822Z" level=info msg="StopPodSandbox for \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\""
May 13 08:31:35.734379 env[1257]: time="2025-05-13T08:31:35.733954828Z" level=info msg="TearDown network for sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" successfully"
May 13 08:31:35.734753 env[1257]: time="2025-05-13T08:31:35.734389443Z" level=info msg="StopPodSandbox for \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" returns successfully"
May 13 08:31:35.744640 env[1257]: time="2025-05-13T08:31:35.744368045Z" level=info msg="RemovePodSandbox for \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\""
May 13 08:31:35.745510 env[1257]: time="2025-05-13T08:31:35.745146256Z" level=info msg="Forcibly stopping sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\""
May 13 08:31:35.746286 env[1257]: time="2025-05-13T08:31:35.746188410Z" level=info msg="TearDown network for sandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" successfully"
May 13 08:31:35.759933 env[1257]: time="2025-05-13T08:31:35.759833231Z" level=info msg="RemovePodSandbox \"024f91fd4f6d7a3e1381597522feccc31394203ace5d2301042fd38e5d4f9489\" returns successfully"